Sep 12 17:40:15.959849 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:40:15.959872 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:40:15.959884 kernel: BIOS-provided physical RAM map:
Sep 12 17:40:15.959890 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 17:40:15.959896 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 17:40:15.959903 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 17:40:15.959910 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 12 17:40:15.959916 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 12 17:40:15.959932 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 17:40:15.959962 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 12 17:40:15.959973 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 17:40:15.959980 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 17:40:15.959990 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 17:40:15.959996 kernel: NX (Execute Disable) protection: active
Sep 12 17:40:15.960004 kernel: APIC: Static calls initialized
Sep 12 17:40:15.960018 kernel: SMBIOS 2.8 present.
Sep 12 17:40:15.960025 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 12 17:40:15.960032 kernel: Hypervisor detected: KVM
Sep 12 17:40:15.960042 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:40:15.960050 kernel: kvm-clock: using sched offset of 2890660949 cycles
Sep 12 17:40:15.960057 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:40:15.960064 kernel: tsc: Detected 2794.750 MHz processor
Sep 12 17:40:15.960072 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:40:15.960079 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:40:15.960089 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 12 17:40:15.960096 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 17:40:15.960103 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:40:15.960110 kernel: Using GB pages for direct mapping
Sep 12 17:40:15.960117 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:40:15.960124 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 12 17:40:15.960131 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:40:15.960138 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:40:15.960145 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:40:15.960154 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 12 17:40:15.960161 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:40:15.960168 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:40:15.960175 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:40:15.960182 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:40:15.960189 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 12 17:40:15.960196 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 12 17:40:15.960207 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 12 17:40:15.960216 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 12 17:40:15.960223 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 12 17:40:15.960231 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 12 17:40:15.960259 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 12 17:40:15.960272 kernel: No NUMA configuration found
Sep 12 17:40:15.960281 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 12 17:40:15.960292 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 12 17:40:15.960299 kernel: Zone ranges:
Sep 12 17:40:15.960306 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:40:15.960314 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 12 17:40:15.960321 kernel: Normal empty
Sep 12 17:40:15.960328 kernel: Movable zone start for each node
Sep 12 17:40:15.960335 kernel: Early memory node ranges
Sep 12 17:40:15.960342 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 17:40:15.960349 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 12 17:40:15.960356 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 12 17:40:15.960366 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:40:15.960376 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 17:40:15.960384 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 12 17:40:15.960391 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 17:40:15.960398 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:40:15.960405 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:40:15.960412 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:40:15.960420 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:40:15.960427 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:40:15.960436 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:40:15.960444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:40:15.960451 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:40:15.960458 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:40:15.960465 kernel: TSC deadline timer available
Sep 12 17:40:15.960472 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 12 17:40:15.960479 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:40:15.960487 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 17:40:15.960496 kernel: kvm-guest: setup PV sched yield
Sep 12 17:40:15.960506 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 12 17:40:15.960513 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:40:15.960528 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:40:15.960535 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 17:40:15.960542 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 12 17:40:15.960550 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 12 17:40:15.960557 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 17:40:15.960565 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:40:15.960574 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:40:15.960589 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:40:15.960599 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:40:15.960608 kernel: random: crng init done
Sep 12 17:40:15.960616 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:40:15.960624 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:40:15.960631 kernel: Fallback order for Node 0: 0
Sep 12 17:40:15.960638 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 12 17:40:15.960645 kernel: Policy zone: DMA32
Sep 12 17:40:15.960655 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:40:15.960663 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 136900K reserved, 0K cma-reserved)
Sep 12 17:40:15.960670 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 17:40:15.960678 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:40:15.960685 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:40:15.960692 kernel: Dynamic Preempt: voluntary
Sep 12 17:40:15.960699 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:40:15.960707 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:40:15.960715 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 17:40:15.960725 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:40:15.960732 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:40:15.960739 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:40:15.960747 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:40:15.960757 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 17:40:15.960764 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 17:40:15.960772 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:40:15.960779 kernel: Console: colour VGA+ 80x25
Sep 12 17:40:15.960786 kernel: printk: console [ttyS0] enabled
Sep 12 17:40:15.960796 kernel: ACPI: Core revision 20230628
Sep 12 17:40:15.960803 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 17:40:15.960811 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:40:15.960818 kernel: x2apic enabled
Sep 12 17:40:15.960825 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:40:15.960832 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 17:40:15.960840 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 17:40:15.960847 kernel: kvm-guest: setup PV IPIs
Sep 12 17:40:15.960864 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:40:15.960872 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 12 17:40:15.960880 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 12 17:40:15.960887 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 17:40:15.960899 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 17:40:15.960909 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 17:40:15.960919 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:40:15.960927 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:40:15.960935 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:40:15.960945 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 17:40:15.960953 kernel: active return thunk: retbleed_return_thunk
Sep 12 17:40:15.960963 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 17:40:15.960971 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:40:15.960979 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:40:15.960986 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 17:40:15.960995 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 17:40:15.961002 kernel: active return thunk: srso_return_thunk
Sep 12 17:40:15.961013 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 17:40:15.961021 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:40:15.961028 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:40:15.961036 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:40:15.961043 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:40:15.961051 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 17:40:15.961059 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:40:15.961066 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:40:15.961074 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:40:15.961084 kernel: landlock: Up and running.
Sep 12 17:40:15.961091 kernel: SELinux: Initializing.
Sep 12 17:40:15.961099 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:40:15.961107 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:40:15.961114 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 17:40:15.961122 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:40:15.961130 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:40:15.961137 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:40:15.961147 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 17:40:15.961158 kernel: ... version: 0
Sep 12 17:40:15.961165 kernel: ... bit width: 48
Sep 12 17:40:15.961173 kernel: ... generic registers: 6
Sep 12 17:40:15.961180 kernel: ... value mask: 0000ffffffffffff
Sep 12 17:40:15.961188 kernel: ... max period: 00007fffffffffff
Sep 12 17:40:15.961195 kernel: ... fixed-purpose events: 0
Sep 12 17:40:15.961203 kernel: ... event mask: 000000000000003f
Sep 12 17:40:15.961210 kernel: signal: max sigframe size: 1776
Sep 12 17:40:15.961218 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:40:15.961228 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:40:15.961236 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:40:15.961257 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:40:15.961264 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 17:40:15.961272 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 17:40:15.961279 kernel: smpboot: Max logical packages: 1
Sep 12 17:40:15.961287 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 12 17:40:15.961295 kernel: devtmpfs: initialized
Sep 12 17:40:15.961302 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:40:15.961313 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:40:15.961320 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 17:40:15.961328 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:40:15.961336 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:40:15.961343 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:40:15.961351 kernel: audit: type=2000 audit(1757698814.591:1): state=initialized audit_enabled=0 res=1
Sep 12 17:40:15.961359 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:40:15.961366 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:40:15.961374 kernel: cpuidle: using governor menu
Sep 12 17:40:15.961384 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:40:15.961391 kernel: dca service started, version 1.12.1
Sep 12 17:40:15.961399 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 12 17:40:15.961407 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 17:40:15.961414 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:40:15.961422 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:40:15.961430 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:40:15.961437 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:40:15.961445 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:40:15.961455 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:40:15.961462 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:40:15.961470 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:40:15.961478 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:40:15.961485 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:40:15.961493 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:40:15.961500 kernel: ACPI: Interpreter enabled
Sep 12 17:40:15.961508 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 17:40:15.961515 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:40:15.961532 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:40:15.961541 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:40:15.961551 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 17:40:15.961561 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:40:15.961780 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:40:15.961922 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 17:40:15.962051 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 17:40:15.962065 kernel: PCI host bridge to bus 0000:00
Sep 12 17:40:15.962213 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:40:15.962370 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:40:15.962499 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:40:15.962664 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 12 17:40:15.962788 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 17:40:15.962907 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 12 17:40:15.963037 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:40:15.963199 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 12 17:40:15.963377 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 12 17:40:15.963507 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 12 17:40:15.963654 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 12 17:40:15.963783 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 12 17:40:15.963910 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:40:15.964075 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 17:40:15.964210 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 12 17:40:15.964388 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 12 17:40:15.964529 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 12 17:40:15.964679 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 12 17:40:15.964809 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 12 17:40:15.964938 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 12 17:40:15.965078 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 12 17:40:15.965224 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:40:15.965382 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 12 17:40:15.965529 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 12 17:40:15.965668 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 12 17:40:15.965798 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 12 17:40:15.965956 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 12 17:40:15.966094 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 17:40:15.966267 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 12 17:40:15.966405 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 12 17:40:15.966543 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 12 17:40:15.966696 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 12 17:40:15.966827 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 12 17:40:15.966843 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:40:15.966854 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:40:15.966864 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:40:15.966874 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:40:15.966881 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 17:40:15.966889 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 17:40:15.966897 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 17:40:15.966904 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 17:40:15.966912 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 17:40:15.966923 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 17:40:15.966930 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 17:40:15.966938 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 17:40:15.966945 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 17:40:15.966953 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 17:40:15.966960 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 17:40:15.966969 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 17:40:15.966979 kernel: iommu: Default domain type: Translated
Sep 12 17:40:15.966991 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:40:15.967003 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:40:15.967011 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:40:15.967018 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 17:40:15.967026 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 12 17:40:15.967160 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 17:40:15.967315 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 17:40:15.967448 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:40:15.967458 kernel: vgaarb: loaded
Sep 12 17:40:15.967470 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 17:40:15.967478 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 17:40:15.967486 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:40:15.967493 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:40:15.967501 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:40:15.967509 kernel: pnp: PnP ACPI init
Sep 12 17:40:15.967671 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 17:40:15.967682 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 17:40:15.967694 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:40:15.967702 kernel: NET: Registered PF_INET protocol family
Sep 12 17:40:15.967709 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:40:15.967717 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:40:15.967725 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:40:15.967732 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:40:15.967740 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:40:15.967748 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:40:15.967755 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:40:15.967766 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:40:15.967773 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:40:15.967781 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:40:15.967900 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:40:15.968018 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:40:15.968136 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:40:15.968278 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 12 17:40:15.968410 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 17:40:15.968545 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 12 17:40:15.968556 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:40:15.968564 kernel: Initialise system trusted keyrings
Sep 12 17:40:15.968572 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:40:15.968579 kernel: Key type asymmetric registered
Sep 12 17:40:15.968587 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:40:15.968594 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:40:15.968602 kernel: io scheduler mq-deadline registered
Sep 12 17:40:15.968610 kernel: io scheduler kyber registered
Sep 12 17:40:15.968621 kernel: io scheduler bfq registered
Sep 12 17:40:15.968629 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:40:15.968637 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 17:40:15.968644 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 17:40:15.968652 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 17:40:15.968660 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:40:15.968667 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:40:15.968675 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:40:15.968683 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:40:15.968694 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:40:15.968849 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 17:40:15.968861 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:40:15.968982 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 17:40:15.969103 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T17:40:15 UTC (1757698815)
Sep 12 17:40:15.969225 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 12 17:40:15.969235 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 17:40:15.969280 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:40:15.969293 kernel: Segment Routing with IPv6
Sep 12 17:40:15.969300 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:40:15.969308 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:40:15.969316 kernel: Key type dns_resolver registered
Sep 12 17:40:15.969323 kernel: IPI shorthand broadcast: enabled
Sep 12 17:40:15.969331 kernel: sched_clock: Marking stable (967003241, 145163286)->(1184236163, -72069636)
Sep 12 17:40:15.969338 kernel: registered taskstats version 1
Sep 12 17:40:15.969346 kernel: Loading compiled-in X.509 certificates
Sep 12 17:40:15.969353 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:40:15.969364 kernel: Key type .fscrypt registered
Sep 12 17:40:15.969371 kernel: Key type fscrypt-provisioning registered
Sep 12 17:40:15.969379 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:40:15.969386 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:40:15.969394 kernel: ima: No architecture policies found
Sep 12 17:40:15.969401 kernel: clk: Disabling unused clocks
Sep 12 17:40:15.969409 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:40:15.969416 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:40:15.969424 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:40:15.969434 kernel: Run /init as init process
Sep 12 17:40:15.969442 kernel: with arguments:
Sep 12 17:40:15.969449 kernel: /init
Sep 12 17:40:15.969456 kernel: with environment:
Sep 12 17:40:15.969464 kernel: HOME=/
Sep 12 17:40:15.969471 kernel: TERM=linux
Sep 12 17:40:15.969479 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:40:15.969492 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:40:15.969508 systemd[1]: Detected virtualization kvm.
Sep 12 17:40:15.969516 systemd[1]: Detected architecture x86-64.
Sep 12 17:40:15.969532 systemd[1]: Running in initrd.
Sep 12 17:40:15.969540 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:40:15.969548 systemd[1]: Hostname set to .
Sep 12 17:40:15.969557 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:40:15.969565 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:40:15.969573 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:40:15.969584 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:40:15.969593 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:40:15.969613 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:40:15.969625 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:40:15.969633 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:40:15.969645 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:40:15.969654 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:40:15.969663 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:40:15.969671 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:40:15.969679 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:40:15.969687 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:40:15.969696 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:40:15.969704 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:40:15.969715 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:40:15.969723 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:40:15.969731 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:40:15.969740 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:40:15.969748 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:40:15.969756 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:40:15.969765 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:40:15.969773 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:40:15.969782 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:40:15.969793 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:40:15.969801 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:40:15.969809 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:40:15.969818 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:40:15.969826 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:40:15.969835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:15.969846 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:40:15.969857 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:40:15.969868 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:40:15.969898 systemd-journald[193]: Collecting audit messages is disabled.
Sep 12 17:40:15.969919 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:40:15.969931 systemd-journald[193]: Journal started
Sep 12 17:40:15.969951 systemd-journald[193]: Runtime Journal (/run/log/journal/4ea17ad913ba4d8ebaa03fd1d765522e) is 6.0M, max 48.4M, 42.3M free.
Sep 12 17:40:15.957226 systemd-modules-load[194]: Inserted module 'overlay'
Sep 12 17:40:15.996091 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:40:15.996109 kernel: Bridge firewalling registered
Sep 12 17:40:15.996120 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:40:15.986902 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 12 17:40:15.996433 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:40:15.998379 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:15.999802 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:40:16.012653 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:40:16.013794 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:40:16.014970 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:40:16.019443 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:40:16.032699 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:40:16.036227 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:40:16.038930 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:40:16.044462 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:40:16.047004 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:40:16.061137 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:40:16.080171 dracut-cmdline[228]: dracut-dracut-053
Sep 12 17:40:16.083419 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:40:16.101730 systemd-resolved[226]: Positive Trust Anchors:
Sep 12 17:40:16.101754 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:40:16.101799 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:40:16.105417 systemd-resolved[226]: Defaulting to hostname 'linux'.
Sep 12 17:40:16.110763 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:40:16.110908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:40:16.187294 kernel: SCSI subsystem initialized
Sep 12 17:40:16.197271 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:40:16.209279 kernel: iscsi: registered transport (tcp)
Sep 12 17:40:16.231300 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:40:16.231373 kernel: QLogic iSCSI HBA Driver
Sep 12 17:40:16.289165 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:40:16.302441 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:40:16.331858 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:40:16.331931 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:40:16.331948 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:40:16.378303 kernel: raid6: avx2x4 gen() 28769 MB/s
Sep 12 17:40:16.395285 kernel: raid6: avx2x2 gen() 29166 MB/s
Sep 12 17:40:16.412395 kernel: raid6: avx2x1 gen() 23805 MB/s
Sep 12 17:40:16.412478 kernel: raid6: using algorithm avx2x2 gen() 29166 MB/s
Sep 12 17:40:16.430580 kernel: raid6: .... xor() 18099 MB/s, rmw enabled
Sep 12 17:40:16.430670 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 17:40:16.453274 kernel: xor: automatically using best checksumming function avx
Sep 12 17:40:16.614284 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:40:16.630702 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:40:16.645515 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:40:16.658726 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Sep 12 17:40:16.663850 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:40:16.675457 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:40:16.692255 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Sep 12 17:40:16.731294 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:40:16.745494 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:40:16.819049 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:40:16.827527 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:40:16.842527 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:40:16.846019 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:40:16.847329 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:40:16.848782 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:40:16.856535 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:40:16.872428 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:40:16.875345 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 12 17:40:16.880930 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 17:40:16.886403 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:40:16.886424 kernel: libata version 3.00 loaded.
Sep 12 17:40:16.886439 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:40:16.886462 kernel: GPT:9289727 != 19775487
Sep 12 17:40:16.886477 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:40:16.886491 kernel: GPT:9289727 != 19775487
Sep 12 17:40:16.888352 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:40:16.888385 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:40:16.899858 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:40:16.899923 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:40:16.903992 kernel: ahci 0000:00:1f.2: version 3.0
Sep 12 17:40:16.904259 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 12 17:40:16.904275 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 12 17:40:16.905969 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 12 17:40:16.911259 kernel: scsi host0: ahci
Sep 12 17:40:16.913285 kernel: scsi host1: ahci
Sep 12 17:40:16.916287 kernel: scsi host2: ahci
Sep 12 17:40:16.919306 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:40:16.924146 kernel: scsi host3: ahci
Sep 12 17:40:16.924437 kernel: scsi host4: ahci
Sep 12 17:40:16.919403 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:40:16.939056 kernel: scsi host5: ahci
Sep 12 17:40:16.939366 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Sep 12 17:40:16.939405 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Sep 12 17:40:16.939421 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Sep 12 17:40:16.939434 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Sep 12 17:40:16.939447 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Sep 12 17:40:16.939461 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Sep 12 17:40:16.921686 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:40:16.923529 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:40:16.923664 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:16.945980 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (456)
Sep 12 17:40:16.946005 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (465)
Sep 12 17:40:16.935794 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:16.950492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:40:16.966588 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 17:40:16.979503 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 17:40:17.018991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:40:17.020872 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:40:17.029154 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 17:40:17.048536 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 17:40:17.061403 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:40:17.063436 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:40:17.083742 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:40:17.242310 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 12 17:40:17.242415 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 12 17:40:17.287352 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 12 17:40:17.287445 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 12 17:40:17.287462 kernel: ata3.00: applying bridge limits
Sep 12 17:40:17.287477 kernel: ata3.00: configured for UDMA/100
Sep 12 17:40:17.287504 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 12 17:40:17.289283 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 12 17:40:17.289316 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 12 17:40:17.290300 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 12 17:40:17.336837 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 12 17:40:17.337134 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 17:40:17.349588 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 12 17:40:17.407694 disk-uuid[553]: Primary Header is updated.
Sep 12 17:40:17.407694 disk-uuid[553]: Secondary Entries is updated.
Sep 12 17:40:17.407694 disk-uuid[553]: Secondary Header is updated.
Sep 12 17:40:17.411645 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:40:17.487313 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:40:18.446949 disk-uuid[576]: The operation has completed successfully.
Sep 12 17:40:18.448617 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:40:18.480647 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:40:18.480795 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:40:18.507479 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:40:18.511599 sh[590]: Success
Sep 12 17:40:18.526275 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 12 17:40:18.565538 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:40:18.579804 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:40:18.583152 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:40:18.598584 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:40:18.598638 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:40:18.598652 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:40:18.599596 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:40:18.600345 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:40:18.606097 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:40:18.606876 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:40:18.607856 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:40:18.610552 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:40:18.627485 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:40:18.627546 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:40:18.627560 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:40:18.631278 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:40:18.642388 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:40:18.644278 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:40:18.656855 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:40:18.664512 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:40:18.788163 ignition[690]: Ignition 2.19.0
Sep 12 17:40:18.788183 ignition[690]: Stage: fetch-offline
Sep 12 17:40:18.788270 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:18.788283 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:40:18.788493 ignition[690]: parsed url from cmdline: ""
Sep 12 17:40:18.788497 ignition[690]: no config URL provided
Sep 12 17:40:18.788503 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:40:18.788514 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:40:18.788556 ignition[690]: op(1): [started] loading QEMU firmware config module
Sep 12 17:40:18.788576 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 17:40:18.800909 ignition[690]: op(1): [finished] loading QEMU firmware config module
Sep 12 17:40:18.829122 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:40:18.839486 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:40:18.844124 ignition[690]: parsing config with SHA512: 430c204b3b4a4e02bd27bcf5a8421c6baab3208a9bbe03d9b71ea0cac72f603fd4a7bb6bda2b41a7e8ed5a7275a62cfdff9538e677c7afd594526efef1ee9df1
Sep 12 17:40:18.850198 unknown[690]: fetched base config from "system"
Sep 12 17:40:18.850214 unknown[690]: fetched user config from "qemu"
Sep 12 17:40:18.851068 ignition[690]: fetch-offline: fetch-offline passed
Sep 12 17:40:18.853533 ignition[690]: Ignition finished successfully
Sep 12 17:40:18.856665 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:40:18.868985 systemd-networkd[780]: lo: Link UP
Sep 12 17:40:18.869000 systemd-networkd[780]: lo: Gained carrier
Sep 12 17:40:18.871480 systemd-networkd[780]: Enumeration completed
Sep 12 17:40:18.871693 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:40:18.872042 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:40:18.872048 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:40:18.872662 systemd[1]: Reached target network.target - Network.
Sep 12 17:40:18.873539 systemd-networkd[780]: eth0: Link UP
Sep 12 17:40:18.873545 systemd-networkd[780]: eth0: Gained carrier
Sep 12 17:40:18.873554 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:40:18.873681 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 17:40:18.887588 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:40:18.899508 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:40:18.921226 ignition[783]: Ignition 2.19.0
Sep 12 17:40:18.921256 ignition[783]: Stage: kargs
Sep 12 17:40:18.921477 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:18.921490 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:40:18.925498 ignition[783]: kargs: kargs passed
Sep 12 17:40:18.925562 ignition[783]: Ignition finished successfully
Sep 12 17:40:18.929806 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:40:18.943486 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:40:18.963500 ignition[793]: Ignition 2.19.0
Sep 12 17:40:18.963513 ignition[793]: Stage: disks
Sep 12 17:40:18.963688 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:18.963700 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:40:18.967377 ignition[793]: disks: disks passed
Sep 12 17:40:18.967435 ignition[793]: Ignition finished successfully
Sep 12 17:40:18.971650 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:40:18.973872 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:40:18.976151 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:40:18.978640 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:40:18.980719 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:40:18.982788 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:40:18.997642 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:40:19.054403 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:40:19.245173 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:40:19.263514 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:40:19.374304 kernel: EXT4-fs (vda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:40:19.375446 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:40:19.378546 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:40:19.396530 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:40:19.400687 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:40:19.404453 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:40:19.404527 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:40:19.414829 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812)
Sep 12 17:40:19.414864 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:40:19.414880 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:40:19.414895 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:40:19.404562 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:40:19.417455 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:40:19.420339 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:40:19.421791 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:40:19.436708 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:40:19.481632 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:40:19.487495 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:40:19.493869 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:40:19.498967 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:40:19.625450 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:40:19.636409 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:40:19.639783 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:40:19.647936 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:40:19.656301 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:40:19.676128 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:40:19.900102 ignition[929]: INFO : Ignition 2.19.0
Sep 12 17:40:19.900102 ignition[929]: INFO : Stage: mount
Sep 12 17:40:19.902217 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:40:19.902217 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:40:19.902217 ignition[929]: INFO : mount: mount passed
Sep 12 17:40:19.902217 ignition[929]: INFO : Ignition finished successfully
Sep 12 17:40:19.908727 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:40:19.921369 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:40:19.929859 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:40:19.948410 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Sep 12 17:40:19.948496 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:40:19.948508 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:40:19.950268 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:40:19.953258 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:40:19.955829 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:40:20.012402 ignition[954]: INFO : Ignition 2.19.0 Sep 12 17:40:20.012402 ignition[954]: INFO : Stage: files Sep 12 17:40:20.014820 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:40:20.014820 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:40:20.014820 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:40:20.014820 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:40:20.014820 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:40:20.022578 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:40:20.022578 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:40:20.022578 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:40:20.022578 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 17:40:20.022578 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 12 17:40:20.017529 unknown[954]: wrote ssh authorized keys file for user: core Sep 12 17:40:20.060094 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:40:20.323538 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 17:40:20.325994 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:40:20.325994 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:40:20.325994 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:40:20.325994 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:40:20.325994 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:40:20.325994 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:40:20.325994 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:40:20.325994 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:40:20.325994 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:40:20.325994 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:40:20.325994 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:40:20.325994 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:40:20.325994 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:40:20.353191 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 12 17:40:20.539980 systemd-networkd[780]: eth0: Gained IPv6LL Sep 12 17:40:20.815364 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:40:21.521919 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:40:21.521919 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:40:21.551456 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:40:21.553827 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:40:21.553827 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:40:21.557608 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 12 17:40:21.557608 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:40:21.561478 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:40:21.561478 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 12 17:40:21.565285 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 17:40:21.595344 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:40:21.605075 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:40:21.606904 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 17:40:21.606904 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:40:21.606904 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:40:21.606904 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:40:21.606904 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:40:21.606904 ignition[954]: INFO : files: files passed Sep 12 17:40:21.606904 ignition[954]: INFO : Ignition finished successfully Sep 12 17:40:21.642073 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:40:21.650477 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:40:21.653903 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Sep 12 17:40:21.658335 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:40:21.658514 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:40:21.665011 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 17:40:21.668963 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:40:21.668963 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:40:21.672525 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:40:21.676027 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:40:21.693653 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:40:21.700422 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:40:21.733627 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:40:21.733760 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:40:21.747725 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:40:21.749588 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:40:21.750728 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:40:21.757447 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:40:21.773075 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:40:21.786423 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:40:21.796151 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:40:21.798956 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:40:21.802398 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:40:21.804298 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:40:21.805391 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:40:21.808014 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:40:21.810185 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:40:21.812146 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:40:21.814604 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:40:21.817407 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:40:21.819695 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:40:21.821822 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:40:21.824371 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:40:21.826536 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:40:21.828607 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:40:21.830269 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:40:21.831326 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:40:21.833626 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 12 17:40:21.836150 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:40:21.838609 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:40:21.839702 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:40:21.842474 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:40:21.843666 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:40:21.846230 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:40:21.847319 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:40:21.849635 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:40:21.851385 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:40:21.853316 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:40:21.868294 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:40:21.870090 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:40:21.872131 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:40:21.873064 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:40:21.875005 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:40:21.875883 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:40:21.877931 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:40:21.879116 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:40:21.881912 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:40:21.882893 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:40:21.899572 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:40:21.901746 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:40:21.904666 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:40:21.908514 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:40:21.910561 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:40:21.911881 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:40:21.914360 ignition[1009]: INFO : Ignition 2.19.0 Sep 12 17:40:21.914360 ignition[1009]: INFO : Stage: umount Sep 12 17:40:21.914360 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:40:21.914360 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:40:21.920261 ignition[1009]: INFO : umount: umount passed Sep 12 17:40:21.920261 ignition[1009]: INFO : Ignition finished successfully Sep 12 17:40:21.914616 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:40:21.917672 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:40:21.923333 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:40:21.923506 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:40:21.927151 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:40:21.927338 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:40:21.930431 systemd[1]: Stopped target network.target - Network. 
Sep 12 17:40:21.936033 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:40:21.936144 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:40:21.938906 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:40:21.938994 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:40:21.941170 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:40:21.941237 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:40:21.943509 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:40:21.943565 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:40:21.946607 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:40:21.949126 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:40:21.952829 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:40:21.955311 systemd-networkd[780]: eth0: DHCPv6 lease lost Sep 12 17:40:21.957878 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:40:21.958075 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:40:21.959956 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:40:21.960020 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:40:21.969414 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:40:21.971361 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:40:21.971465 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:40:21.973940 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:40:21.976354 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:40:21.976513 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:40:21.985314 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:40:21.985498 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:40:21.988928 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:40:21.989004 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:40:21.991528 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:40:21.991613 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:40:21.994616 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:40:21.994785 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:40:22.005623 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:40:22.005902 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:40:22.008005 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:40:22.008074 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:40:22.009842 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:40:22.009899 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:40:22.012065 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:40:22.012138 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Sep 12 17:40:22.014685 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:40:22.014758 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:40:22.016962 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:40:22.017031 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:40:22.066579 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:40:22.067750 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:40:22.067842 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:40:22.070348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:40:22.070429 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:40:22.084025 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:40:22.084169 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:40:22.677065 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:40:22.677220 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:40:22.687701 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:40:22.689589 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:40:22.689684 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:40:22.700434 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:40:22.710837 systemd[1]: Switching root. Sep 12 17:40:22.747360 systemd-journald[193]: Journal stopped Sep 12 17:40:24.853317 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Sep 12 17:40:24.853424 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:40:24.853452 kernel: SELinux: policy capability open_perms=1 Sep 12 17:40:24.853469 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:40:24.853488 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:40:24.853504 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:40:24.853520 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:40:24.853610 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:40:24.853626 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:40:24.853643 kernel: audit: type=1403 audit(1757698823.715:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:40:24.853672 systemd[1]: Successfully loaded SELinux policy in 48.153ms. Sep 12 17:40:24.853714 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.302ms. Sep 12 17:40:24.853743 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:40:24.853761 systemd[1]: Detected virtualization kvm. Sep 12 17:40:24.853778 systemd[1]: Detected architecture x86-64. Sep 12 17:40:24.853797 systemd[1]: Detected first boot. Sep 12 17:40:24.853826 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:40:24.853843 zram_generator::config[1053]: No configuration found. Sep 12 17:40:24.853872 systemd[1]: Populated /etc with preset unit settings. 
Sep 12 17:40:24.853889 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:40:24.853906 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:40:24.853922 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:40:24.853940 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:40:24.853957 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:40:24.853986 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:40:24.854004 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:40:24.854020 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:40:24.854036 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:40:24.854052 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:40:24.854069 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:40:24.854086 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:40:24.854106 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:40:24.854122 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:40:24.854147 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:40:24.854164 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:40:24.854179 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:40:24.854198 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:40:24.854216 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:40:24.854235 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:40:24.854513 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:40:24.854532 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:40:24.854563 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:40:24.854586 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:40:24.854605 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:40:24.854624 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:40:24.854641 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:40:24.854656 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:40:24.854672 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:40:24.854687 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:40:24.854703 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:40:24.854733 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:40:24.854752 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:40:24.854768 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Sep 12 17:40:24.854784 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:40:24.854804 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:40:24.854821 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:40:24.854838 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:40:24.854859 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:40:24.854878 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:40:24.854907 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:40:24.854924 systemd[1]: Reached target machines.target - Containers. Sep 12 17:40:24.854941 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:40:24.854958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:40:24.854975 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:40:24.854992 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:40:24.855008 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:40:24.855023 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:40:24.855100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:40:24.855121 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:40:24.855139 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:40:24.855151 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:40:24.855166 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:40:24.855184 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:40:24.855200 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:40:24.855216 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:40:24.855270 kernel: fuse: init (API version 7.39) Sep 12 17:40:24.855289 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:40:24.855313 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:40:24.855330 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:40:24.855346 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:40:24.855362 kernel: loop: module loaded Sep 12 17:40:24.855378 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:40:24.855395 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:40:24.855412 systemd[1]: Stopped verity-setup.service. Sep 12 17:40:24.855430 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:40:24.855487 systemd-journald[1123]: Collecting audit messages is disabled. 
Sep 12 17:40:24.855518 kernel: ACPI: bus type drm_connector registered Sep 12 17:40:24.855535 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:40:24.855551 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:40:24.855567 systemd-journald[1123]: Journal started Sep 12 17:40:24.855612 systemd-journald[1123]: Runtime Journal (/run/log/journal/4ea17ad913ba4d8ebaa03fd1d765522e) is 6.0M, max 48.4M, 42.3M free. Sep 12 17:40:24.591620 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:40:24.608705 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 17:40:24.609209 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:40:24.858430 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:40:24.860040 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:40:24.861272 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:40:24.862503 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:40:24.863737 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:40:24.865026 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:40:24.866557 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:40:24.868171 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:40:24.868591 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:40:24.870087 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:40:24.870284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:40:24.871791 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:40:24.871988 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:40:24.873402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:40:24.873610 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:40:24.875307 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:40:24.875505 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:40:24.876922 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:40:24.877106 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:40:24.878687 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:40:24.880199 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:40:24.881943 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:40:24.897423 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:40:24.908477 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:40:24.911684 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:40:24.912931 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:40:24.912973 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:40:24.915327 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Sep 12 17:40:24.917908 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:40:24.921864 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:40:24.923228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:40:24.931437 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:40:24.936351 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:40:24.939712 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:40:24.941646 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:40:24.944088 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:40:24.949596 systemd-journald[1123]: Time spent on flushing to /var/log/journal/4ea17ad913ba4d8ebaa03fd1d765522e is 17.383ms for 947 entries. Sep 12 17:40:24.949596 systemd-journald[1123]: System Journal (/var/log/journal/4ea17ad913ba4d8ebaa03fd1d765522e) is 8.0M, max 195.6M, 187.6M free. Sep 12 17:40:24.995690 systemd-journald[1123]: Received client request to flush runtime journal. Sep 12 17:40:25.047710 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:40:25.052490 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:40:25.055216 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:40:25.058450 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:40:25.061729 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:40:25.066606 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:40:25.071282 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:40:25.098646 kernel: loop0: detected capacity change from 0 to 142488 Sep 12 17:40:25.073392 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:40:25.100759 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:40:25.107127 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:40:25.117743 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:40:25.126440 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:40:25.130165 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:40:25.145597 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:40:25.155794 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:40:25.159110 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:40:25.165697 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 17:40:25.174806 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Sep 12 17:40:25.177315 kernel: loop1: detected capacity change from 0 to 140768 Sep 12 17:40:25.185590 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:40:25.232484 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Sep 12 17:40:25.232506 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Sep 12 17:40:25.234273 kernel: loop2: detected capacity change from 0 to 229808 Sep 12 17:40:25.241845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:40:25.279862 kernel: loop3: detected capacity change from 0 to 142488 Sep 12 17:40:25.299268 kernel: loop4: detected capacity change from 0 to 140768 Sep 12 17:40:25.310269 kernel: loop5: detected capacity change from 0 to 229808 Sep 12 17:40:25.316135 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 17:40:25.317034 (sd-merge)[1191]: Merged extensions into '/usr'. Sep 12 17:40:25.330301 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:40:25.330323 systemd[1]: Reloading... Sep 12 17:40:25.460293 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:40:25.505363 zram_generator::config[1220]: No configuration found. Sep 12 17:40:25.631772 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:40:25.693674 systemd[1]: Reloading finished in 362 ms. Sep 12 17:40:25.741792 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:40:25.744021 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:40:25.761586 systemd[1]: Starting ensure-sysext.service... Sep 12 17:40:25.764123 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:40:25.774137 systemd[1]: Reloading requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:40:25.774304 systemd[1]: Reloading... Sep 12 17:40:25.804008 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:40:25.804962 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:40:25.806675 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:40:25.807137 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Sep 12 17:40:25.807382 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Sep 12 17:40:25.811668 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:40:25.811689 systemd-tmpfiles[1255]: Skipping /boot Sep 12 17:40:25.837294 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:40:25.837459 systemd-tmpfiles[1255]: Skipping /boot Sep 12 17:40:25.883280 zram_generator::config[1288]: No configuration found. Sep 12 17:40:26.003831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:40:26.064980 systemd[1]: Reloading finished in 289 ms. 
Sep 12 17:40:26.086075 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:40:26.094130 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:40:26.104824 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:40:26.108339 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:40:26.111400 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:40:26.119323 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:40:26.123992 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:40:26.134582 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:40:26.139924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:40:26.140116 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:40:26.142975 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:40:26.146181 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:40:26.153398 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:40:26.154881 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:40:26.157371 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:40:26.159309 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:40:26.161101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:40:26.161577 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:40:26.164334 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:40:26.164596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:40:26.167373 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:40:26.170046 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:40:26.170627 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:40:26.185431 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Sep 12 17:40:26.189146 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:40:26.189516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:40:26.190051 augenrules[1350]: No rules Sep 12 17:40:26.197944 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:40:26.203631 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:40:26.211387 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:40:26.212739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:40:26.214620 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Sep 12 17:40:26.215877 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:40:26.217439 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:40:26.219166 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:40:26.231047 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:40:26.234537 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:40:26.237074 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:40:26.239588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:40:26.239861 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:40:26.242101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:40:26.242396 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:40:26.245008 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:40:26.245277 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:40:26.247830 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:40:26.270517 systemd[1]: Finished ensure-sysext.service. Sep 12 17:40:26.274591 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:40:26.274850 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:40:26.288384 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:40:26.296540 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:40:26.303409 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:40:26.307711 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:40:26.309128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:40:26.313467 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:40:26.319547 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:40:26.321331 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:40:26.321373 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:40:26.322047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:40:26.322283 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:40:26.331953 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:40:26.332270 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:40:26.334391 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:40:26.334659 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:40:26.348919 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Sep 12 17:40:26.350469 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:40:26.421344 systemd-resolved[1325]: Positive Trust Anchors: Sep 12 17:40:26.421362 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:40:26.421404 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:40:26.422130 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:40:26.422429 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:40:26.426393 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:40:26.432130 systemd-resolved[1325]: Defaulting to hostname 'linux'. Sep 12 17:40:26.437126 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:40:26.439091 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:40:26.452282 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1385) Sep 12 17:40:26.538292 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 17:40:26.537843 systemd-networkd[1395]: lo: Link UP Sep 12 17:40:26.537854 systemd-networkd[1395]: lo: Gained carrier Sep 12 17:40:26.541111 systemd-networkd[1395]: Enumeration completed Sep 12 17:40:26.541668 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:40:26.541674 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:40:26.541843 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:40:26.543465 systemd[1]: Reached target network.target - Network. Sep 12 17:40:26.545330 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:40:26.545455 systemd-networkd[1395]: eth0: Link UP Sep 12 17:40:26.545467 systemd-networkd[1395]: eth0: Gained carrier Sep 12 17:40:26.545488 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:40:26.554553 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:40:26.560041 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:40:26.562371 systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:40:26.563825 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:40:26.568869 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:40:26.570395 systemd-timesyncd[1397]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Sep 12 17:40:26.570773 systemd-timesyncd[1397]: Initial clock synchronization to Fri 2025-09-12 17:40:26.548297 UTC. Sep 12 17:40:26.571076 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:40:26.583335 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 17:40:26.587006 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 12 17:40:26.587467 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 17:40:26.597116 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:40:26.601317 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 12 17:40:26.628534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:40:26.673814 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:40:26.753308 kernel: kvm_amd: TSC scaling supported Sep 12 17:40:26.753588 kernel: kvm_amd: Nested Virtualization enabled Sep 12 17:40:26.753620 kernel: kvm_amd: Nested Paging enabled Sep 12 17:40:26.753661 kernel: kvm_amd: LBR virtualization supported Sep 12 17:40:26.753686 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 12 17:40:26.753706 kernel: kvm_amd: Virtual GIF supported Sep 12 17:40:26.777350 kernel: EDAC MC: Ver: 3.0.0 Sep 12 17:40:26.795153 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:40:26.822629 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:40:26.839595 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:40:26.852928 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:40:26.901741 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:40:26.903801 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:40:26.905372 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:40:26.907050 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:40:26.908768 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:40:26.910847 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:40:26.912368 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:40:26.913790 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:40:26.915139 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:40:26.915203 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:40:26.916163 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:40:26.919168 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:40:26.923974 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:40:26.936452 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:40:26.939629 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:40:26.941595 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:40:26.942788 systemd[1]: Reached target sockets.target - Socket Units. 
Sep 12 17:40:26.943762 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:40:26.944747 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:40:26.944787 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:40:26.946148 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:40:26.948994 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:40:26.952751 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:40:26.956549 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:40:26.957748 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:40:26.960568 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:40:26.963349 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:40:26.965478 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:40:26.969026 jq[1433]: false Sep 12 17:40:26.971477 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:40:26.979352 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:40:26.988308 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:40:26.990300 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:40:26.990988 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:40:26.992007 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:40:26.997440 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:40:27.002158 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:40:27.006103 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:40:27.007610 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:40:27.008103 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:40:27.008442 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:40:27.011173 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:40:27.012306 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:40:27.012415 extend-filesystems[1434]: Found loop3 Sep 12 17:40:27.014894 jq[1447]: true Sep 12 17:40:27.015799 extend-filesystems[1434]: Found loop4 Sep 12 17:40:27.017057 extend-filesystems[1434]: Found loop5 Sep 12 17:40:27.017057 extend-filesystems[1434]: Found sr0 Sep 12 17:40:27.017057 extend-filesystems[1434]: Found vda Sep 12 17:40:27.017057 extend-filesystems[1434]: Found vda1 Sep 12 17:40:27.017057 extend-filesystems[1434]: Found vda2 Sep 12 17:40:27.017057 extend-filesystems[1434]: Found vda3 Sep 12 17:40:27.016768 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 12 17:40:27.016170 dbus-daemon[1432]: [system] SELinux support is enabled Sep 12 17:40:27.022259 extend-filesystems[1434]: Found usr Sep 12 17:40:27.022259 extend-filesystems[1434]: Found vda4 Sep 12 17:40:27.022259 extend-filesystems[1434]: Found vda6 Sep 12 17:40:27.022259 extend-filesystems[1434]: Found vda7 Sep 12 17:40:27.022259 extend-filesystems[1434]: Found vda9 Sep 12 17:40:27.022259 extend-filesystems[1434]: Checking size of /dev/vda9 Sep 12 17:40:27.034763 update_engine[1446]: I20250912 17:40:27.034472 1446 main.cc:92] Flatcar Update Engine starting Sep 12 17:40:27.037174 update_engine[1446]: I20250912 17:40:27.035844 1446 update_check_scheduler.cc:74] Next update check in 10m53s Sep 12 17:40:27.037906 jq[1451]: true Sep 12 17:40:27.043026 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:40:27.048688 extend-filesystems[1434]: Resized partition /dev/vda9 Sep 12 17:40:27.063741 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:40:27.069503 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:40:27.072914 tar[1450]: linux-amd64/LICENSE Sep 12 17:40:27.072914 tar[1450]: linux-amd64/helm Sep 12 17:40:27.063824 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:40:27.064003 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:40:27.064029 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:40:27.069450 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:40:27.075292 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 17:40:27.081480 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:40:27.114277 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1374) Sep 12 17:40:27.154272 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:40:27.189035 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:40:27.189035 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:40:27.189035 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:40:27.215394 extend-filesystems[1434]: Resized filesystem in /dev/vda9 Sep 12 17:40:27.218762 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:40:27.194888 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 17:40:27.194916 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:40:27.196103 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:40:27.197743 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:40:27.201509 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:40:27.201703 systemd-logind[1445]: New seat seat0. Sep 12 17:40:27.216401 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Sep 12 17:40:27.220101 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 17:40:27.220197 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:40:27.342571 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:40:27.379881 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:40:27.389719 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:40:27.400096 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:40:27.400448 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:40:27.414597 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:40:27.442445 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:40:27.446445 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:40:27.454202 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:40:27.455971 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:40:27.680973 containerd[1456]: time="2025-09-12T17:40:27.680363543Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:40:27.794830 containerd[1456]: time="2025-09-12T17:40:27.794741235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:40:27.797403 containerd[1456]: time="2025-09-12T17:40:27.797350840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:40:27.797403 containerd[1456]: time="2025-09-12T17:40:27.797382646Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:40:27.797403 containerd[1456]: time="2025-09-12T17:40:27.797403558Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:40:27.797676 containerd[1456]: time="2025-09-12T17:40:27.797642612Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:40:27.797676 containerd[1456]: time="2025-09-12T17:40:27.797667685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:40:27.797788 containerd[1456]: time="2025-09-12T17:40:27.797762885Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:40:27.797825 containerd[1456]: time="2025-09-12T17:40:27.797785366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:40:27.798061 containerd[1456]: time="2025-09-12T17:40:27.798027424Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:40:27.798105 containerd[1456]: time="2025-09-12T17:40:27.798076840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 12 17:40:27.798134 containerd[1456]: time="2025-09-12T17:40:27.798099201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:40:27.798134 containerd[1456]: time="2025-09-12T17:40:27.798113859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:40:27.798302 containerd[1456]: time="2025-09-12T17:40:27.798276704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:40:27.798636 containerd[1456]: time="2025-09-12T17:40:27.798600984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:40:27.798810 containerd[1456]: time="2025-09-12T17:40:27.798774764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:40:27.798810 containerd[1456]: time="2025-09-12T17:40:27.798799777Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:40:27.798957 containerd[1456]: time="2025-09-12T17:40:27.798933707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:40:27.799035 containerd[1456]: time="2025-09-12T17:40:27.799014099Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:40:27.808292 containerd[1456]: time="2025-09-12T17:40:27.807674800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:40:27.808292 containerd[1456]: time="2025-09-12T17:40:27.807735502Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:40:27.808292 containerd[1456]: time="2025-09-12T17:40:27.807762847Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:40:27.808292 containerd[1456]: time="2025-09-12T17:40:27.807788799Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:40:27.808292 containerd[1456]: time="2025-09-12T17:40:27.807820486Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:40:27.808292 containerd[1456]: time="2025-09-12T17:40:27.808002921Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:40:27.808510 containerd[1456]: time="2025-09-12T17:40:27.808381679Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:40:27.808545 containerd[1456]: time="2025-09-12T17:40:27.808523694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:40:27.808571 containerd[1456]: time="2025-09-12T17:40:27.808542594Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:40:27.808571 containerd[1456]: time="2025-09-12T17:40:27.808558643Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Sep 12 17:40:27.808692 containerd[1456]: time="2025-09-12T17:40:27.808581314Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:40:27.808692 containerd[1456]: time="2025-09-12T17:40:27.808680236Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:40:27.808751 containerd[1456]: time="2025-09-12T17:40:27.808695524Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:40:27.808751 containerd[1456]: time="2025-09-12T17:40:27.808712863Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:40:27.808751 containerd[1456]: time="2025-09-12T17:40:27.808729272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:40:27.808751 containerd[1456]: time="2025-09-12T17:40:27.808745410Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:40:27.808832 containerd[1456]: time="2025-09-12T17:40:27.808760668Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:40:27.808832 containerd[1456]: time="2025-09-12T17:40:27.808774595Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:40:27.808832 containerd[1456]: time="2025-09-12T17:40:27.808810624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.808832 containerd[1456]: time="2025-09-12T17:40:27.808829254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.808904 containerd[1456]: time="2025-09-12T17:40:27.808845482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.808904 containerd[1456]: time="2025-09-12T17:40:27.808861671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.808904 containerd[1456]: time="2025-09-12T17:40:27.808878479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.808904 containerd[1456]: time="2025-09-12T17:40:27.808897719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.808978 containerd[1456]: time="2025-09-12T17:40:27.808912788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.808978 containerd[1456]: time="2025-09-12T17:40:27.808929166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.808978 containerd[1456]: time="2025-09-12T17:40:27.808945194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.808978 containerd[1456]: time="2025-09-12T17:40:27.808963394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.809049 containerd[1456]: time="2025-09-12T17:40:27.808978382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Sep 12 17:40:27.809049 containerd[1456]: time="2025-09-12T17:40:27.808994610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.809049 containerd[1456]: time="2025-09-12T17:40:27.809010399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.809049 containerd[1456]: time="2025-09-12T17:40:27.809033030Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:40:27.809165 containerd[1456]: time="2025-09-12T17:40:27.809057013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.809165 containerd[1456]: time="2025-09-12T17:40:27.809071671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.809165 containerd[1456]: time="2025-09-12T17:40:27.809093122Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:40:27.809277 containerd[1456]: time="2025-09-12T17:40:27.809174925Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:40:27.809277 containerd[1456]: time="2025-09-12T17:40:27.809197546Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:40:27.809277 containerd[1456]: time="2025-09-12T17:40:27.809212075Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:40:27.809277 containerd[1456]: time="2025-09-12T17:40:27.809234206Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:40:27.809277 containerd[1456]: time="2025-09-12T17:40:27.809265272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:40:27.809394 containerd[1456]: time="2025-09-12T17:40:27.809281401Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:40:27.809394 containerd[1456]: time="2025-09-12T17:40:27.809294377Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:40:27.809394 containerd[1456]: time="2025-09-12T17:40:27.809307084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 17:40:27.809740 containerd[1456]: time="2025-09-12T17:40:27.809651234Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:40:27.809740 containerd[1456]: time="2025-09-12T17:40:27.809734628Z" level=info msg="Connect containerd service" Sep 12 17:40:27.809959 containerd[1456]: time="2025-09-12T17:40:27.809784884Z" level=info msg="using legacy CRI server" Sep 12 17:40:27.809959 containerd[1456]: time="2025-09-12T17:40:27.809793558Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:40:27.809959 containerd[1456]: time="2025-09-12T17:40:27.809909989Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:40:27.810706 containerd[1456]: time="2025-09-12T17:40:27.810669706Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:40:27.810926 
containerd[1456]: time="2025-09-12T17:40:27.810847430Z" level=info msg="Start subscribing containerd event" Sep 12 17:40:27.810966 containerd[1456]: time="2025-09-12T17:40:27.810941219Z" level=info msg="Start recovering state" Sep 12 17:40:27.811112 containerd[1456]: time="2025-09-12T17:40:27.811084113Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:40:27.811112 containerd[1456]: time="2025-09-12T17:40:27.811101042Z" level=info msg="Start event monitor" Sep 12 17:40:27.811183 containerd[1456]: time="2025-09-12T17:40:27.811139902Z" level=info msg="Start snapshots syncer" Sep 12 17:40:27.811183 containerd[1456]: time="2025-09-12T17:40:27.811151768Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:40:27.811183 containerd[1456]: time="2025-09-12T17:40:27.811159993Z" level=info msg="Start streaming server" Sep 12 17:40:27.811183 containerd[1456]: time="2025-09-12T17:40:27.811163615Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:40:27.811375 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:40:27.812573 containerd[1456]: time="2025-09-12T17:40:27.811517690Z" level=info msg="containerd successfully booted in 0.134057s" Sep 12 17:40:27.897865 tar[1450]: linux-amd64/README.md Sep 12 17:40:27.917252 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:40:28.283493 systemd-networkd[1395]: eth0: Gained IPv6LL Sep 12 17:40:28.287427 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:40:28.289459 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:40:28.297528 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:40:28.299256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:40:28.301601 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:40:28.326656 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:40:28.328460 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:40:28.328715 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:40:28.331193 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:40:29.888010 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:40:29.891646 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:54402.service - OpenSSH per-connection server daemon (10.0.0.1:54402). Sep 12 17:40:30.017142 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 54402 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:40:30.020195 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:30.033769 systemd-logind[1445]: New session 1 of user core. Sep 12 17:40:30.035672 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:40:30.045990 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:40:30.075933 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:40:30.086637 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 12 17:40:30.092909 (systemd)[1545]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:40:30.539037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:40:30.541050 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:40:30.555332 systemd[1545]: Queued start job for default target default.target. Sep 12 17:40:30.555683 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:40:30.557407 systemd[1545]: Created slice app.slice - User Application Slice. Sep 12 17:40:30.557439 systemd[1545]: Reached target paths.target - Paths. Sep 12 17:40:30.557456 systemd[1545]: Reached target timers.target - Timers. Sep 12 17:40:30.559668 systemd[1545]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:40:30.577164 systemd[1545]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:40:30.577358 systemd[1545]: Reached target sockets.target - Sockets. Sep 12 17:40:30.577374 systemd[1545]: Reached target basic.target - Basic System. Sep 12 17:40:30.577437 systemd[1545]: Reached target default.target - Main User Target. Sep 12 17:40:30.577484 systemd[1545]: Startup finished in 451ms. Sep 12 17:40:30.578282 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:40:30.589485 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:40:30.590992 systemd[1]: Startup finished in 1.128s (kernel) + 7.974s (initrd) + 6.922s (userspace) = 16.025s. Sep 12 17:40:30.665771 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:51566.service - OpenSSH per-connection server daemon (10.0.0.1:51566). Sep 12 17:40:30.705737 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 51566 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:40:30.708133 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:30.715767 systemd-logind[1445]: New session 2 of user core. Sep 12 17:40:30.723637 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:40:30.783864 sshd[1566]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:30.795976 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:51566.service: Deactivated successfully. Sep 12 17:40:30.798255 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:40:30.799984 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:40:30.801616 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:51578.service - OpenSSH per-connection server daemon (10.0.0.1:51578). Sep 12 17:40:30.802877 systemd-logind[1445]: Removed session 2. Sep 12 17:40:30.851555 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 51578 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:40:30.853508 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:30.858785 systemd-logind[1445]: New session 3 of user core. Sep 12 17:40:30.871512 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:40:30.925165 sshd[1578]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:30.933323 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:51578.service: Deactivated successfully. Sep 12 17:40:30.935098 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:40:30.936855 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. 
Sep 12 17:40:30.941596 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:51580.service - OpenSSH per-connection server daemon (10.0.0.1:51580). Sep 12 17:40:30.942644 systemd-logind[1445]: Removed session 3. Sep 12 17:40:30.982661 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 51580 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:40:30.984994 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:30.990304 systemd-logind[1445]: New session 4 of user core. Sep 12 17:40:30.995407 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:40:31.054503 sshd[1586]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:31.138820 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:51580.service: Deactivated successfully. Sep 12 17:40:31.142417 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:40:31.144205 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:40:31.150670 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:51596.service - OpenSSH per-connection server daemon (10.0.0.1:51596). Sep 12 17:40:31.151911 systemd-logind[1445]: Removed session 4. Sep 12 17:40:31.188583 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 51596 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:40:31.191189 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:31.196416 systemd-logind[1445]: New session 5 of user core. Sep 12 17:40:31.206477 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:40:31.224004 kubelet[1556]: E0912 17:40:31.223899 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:40:31.228613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:40:31.228849 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:40:31.229302 systemd[1]: kubelet.service: Consumed 2.713s CPU time. Sep 12 17:40:31.271321 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:40:31.271731 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:40:31.298541 sudo[1597]: pam_unix(sudo:session): session closed for user root Sep 12 17:40:31.301306 sshd[1593]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:31.313525 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:51596.service: Deactivated successfully. Sep 12 17:40:31.315715 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:40:31.317390 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:40:31.331803 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:51610.service - OpenSSH per-connection server daemon (10.0.0.1:51610). Sep 12 17:40:31.333083 systemd-logind[1445]: Removed session 5. Sep 12 17:40:31.369480 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 51610 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:40:31.371981 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:31.376866 systemd-logind[1445]: New session 6 of user core. 
Sep 12 17:40:31.392602 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:40:31.452501 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:40:31.452921 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:40:31.458742 sudo[1606]: pam_unix(sudo:session): session closed for user root Sep 12 17:40:31.467509 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:40:31.467890 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:40:31.487650 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:40:31.490637 auditctl[1609]: No rules Sep 12 17:40:31.492667 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:40:31.493102 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:40:31.495607 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:40:31.534907 augenrules[1627]: No rules Sep 12 17:40:31.537017 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:40:31.539233 sudo[1605]: pam_unix(sudo:session): session closed for user root Sep 12 17:40:31.541881 sshd[1602]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:31.550623 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:51610.service: Deactivated successfully. Sep 12 17:40:31.552664 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:40:31.554641 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:40:31.556229 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:51624.service - OpenSSH per-connection server daemon (10.0.0.1:51624). Sep 12 17:40:31.557222 systemd-logind[1445]: Removed session 6. Sep 12 17:40:31.608237 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 51624 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:40:31.610690 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:31.616199 systemd-logind[1445]: New session 7 of user core. Sep 12 17:40:31.626479 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:40:31.688607 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:40:31.688998 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:40:32.428749 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:40:32.428931 (dockerd)[1657]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:40:33.385636 dockerd[1657]: time="2025-09-12T17:40:33.385535696Z" level=info msg="Starting up" Sep 12 17:40:35.358151 dockerd[1657]: time="2025-09-12T17:40:35.358054376Z" level=info msg="Loading containers: start." Sep 12 17:40:35.531305 kernel: Initializing XFRM netlink socket Sep 12 17:40:35.631459 systemd-networkd[1395]: docker0: Link UP Sep 12 17:40:35.657483 dockerd[1657]: time="2025-09-12T17:40:35.657427556Z" level=info msg="Loading containers: done." Sep 12 17:40:35.683008 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck731870327-merged.mount: Deactivated successfully. 
Sep 12 17:40:35.685440 dockerd[1657]: time="2025-09-12T17:40:35.685357547Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:40:35.685601 dockerd[1657]: time="2025-09-12T17:40:35.685519820Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:40:35.685694 dockerd[1657]: time="2025-09-12T17:40:35.685666307Z" level=info msg="Daemon has completed initialization" Sep 12 17:40:35.743379 dockerd[1657]: time="2025-09-12T17:40:35.742991862Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:40:35.743541 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:40:36.758318 containerd[1456]: time="2025-09-12T17:40:36.758236383Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 17:40:37.887018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974734049.mount: Deactivated successfully. Sep 12 17:40:40.315392 containerd[1456]: time="2025-09-12T17:40:40.312191541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:40.345383 containerd[1456]: time="2025-09-12T17:40:40.345281548Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 12 17:40:40.381501 containerd[1456]: time="2025-09-12T17:40:40.381413994Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:40.436237 containerd[1456]: time="2025-09-12T17:40:40.436149329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:40.437920 containerd[1456]: time="2025-09-12T17:40:40.437838411Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 3.679513335s" Sep 12 17:40:40.437920 containerd[1456]: time="2025-09-12T17:40:40.437909132Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 12 17:40:40.439400 containerd[1456]: time="2025-09-12T17:40:40.439342313Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 17:40:41.438888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:40:41.461821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:40:41.825126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:40:41.830105 (kubelet)[1874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:40:42.287809 kubelet[1874]: E0912 17:40:42.287746 1874 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:40:42.296530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:40:42.296783 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:40:43.344851 containerd[1456]: time="2025-09-12T17:40:43.344670641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:43.345992 containerd[1456]: time="2025-09-12T17:40:43.345884944Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 12 17:40:43.347626 containerd[1456]: time="2025-09-12T17:40:43.347582740Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:43.352015 containerd[1456]: time="2025-09-12T17:40:43.351938422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:43.353212 containerd[1456]: time="2025-09-12T17:40:43.353159845Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.913751296s" Sep 12 17:40:43.353212 containerd[1456]: time="2025-09-12T17:40:43.353206821Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 12 17:40:43.353998 containerd[1456]: time="2025-09-12T17:40:43.353954723Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 17:40:45.183157 containerd[1456]: time="2025-09-12T17:40:45.183094351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:45.184102 containerd[1456]: time="2025-09-12T17:40:45.184004142Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 12 17:40:45.185524 containerd[1456]: time="2025-09-12T17:40:45.185493275Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:45.190042 containerd[1456]: time="2025-09-12T17:40:45.189994592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 12 17:40:45.191172 containerd[1456]: time="2025-09-12T17:40:45.191113227Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.83710593s" Sep 12 17:40:45.191172 containerd[1456]: time="2025-09-12T17:40:45.191152224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 12 17:40:45.191877 containerd[1456]: time="2025-09-12T17:40:45.191817149Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 17:40:46.485075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994318719.mount: Deactivated successfully. Sep 12 17:40:47.620147 containerd[1456]: time="2025-09-12T17:40:47.620048504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:47.621099 containerd[1456]: time="2025-09-12T17:40:47.621051111Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 12 17:40:47.622418 containerd[1456]: time="2025-09-12T17:40:47.622387040Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:47.624979 containerd[1456]: time="2025-09-12T17:40:47.624940730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:47.625845 containerd[1456]: time="2025-09-12T17:40:47.625810458Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.433914252s" Sep 12 17:40:47.625921 containerd[1456]: time="2025-09-12T17:40:47.625849967Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 12 17:40:47.626423 containerd[1456]: time="2025-09-12T17:40:47.626395637Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 17:40:48.355043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3398421591.mount: Deactivated successfully. 
Sep 12 17:40:50.719841 containerd[1456]: time="2025-09-12T17:40:50.719756914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:50.760282 containerd[1456]: time="2025-09-12T17:40:50.760169849Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 12 17:40:50.786617 containerd[1456]: time="2025-09-12T17:40:50.786537038Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:50.848743 containerd[1456]: time="2025-09-12T17:40:50.848601772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:50.850176 containerd[1456]: time="2025-09-12T17:40:50.850121346Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.223692298s" Sep 12 17:40:50.850176 containerd[1456]: time="2025-09-12T17:40:50.850154448Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 12 17:40:50.850989 containerd[1456]: time="2025-09-12T17:40:50.850811800Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:40:52.146372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount949116498.mount: Deactivated successfully. 
Sep 12 17:40:52.156475 containerd[1456]: time="2025-09-12T17:40:52.155870225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:52.157531 containerd[1456]: time="2025-09-12T17:40:52.157478888Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:40:52.159107 containerd[1456]: time="2025-09-12T17:40:52.159064805Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:52.162718 containerd[1456]: time="2025-09-12T17:40:52.162638614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:52.163383 containerd[1456]: time="2025-09-12T17:40:52.163328502Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.312486024s" Sep 12 17:40:52.163467 containerd[1456]: time="2025-09-12T17:40:52.163383841Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:40:52.164294 containerd[1456]: time="2025-09-12T17:40:52.164192741Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 17:40:52.438616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:40:52.451403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:40:52.625935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:40:52.630900 (kubelet)[1962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:40:52.959535 kubelet[1962]: E0912 17:40:52.959456 1962 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:40:52.964568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:40:52.964784 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:40:53.520021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1051943873.mount: Deactivated successfully. 
Sep 12 17:40:55.923351 containerd[1456]: time="2025-09-12T17:40:55.923270062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:55.924798 containerd[1456]: time="2025-09-12T17:40:55.924750600Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 12 17:40:55.927343 containerd[1456]: time="2025-09-12T17:40:55.927299685Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:55.931615 containerd[1456]: time="2025-09-12T17:40:55.931532643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:55.933137 containerd[1456]: time="2025-09-12T17:40:55.933075304Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.768777435s" Sep 12 17:40:55.933137 containerd[1456]: time="2025-09-12T17:40:55.933128482Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 12 17:40:59.654031 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:40:59.665459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:40:59.689305 systemd[1]: Reloading requested from client PID 2055 ('systemctl') (unit session-7.scope)... Sep 12 17:40:59.689321 systemd[1]: Reloading... Sep 12 17:40:59.783288 zram_generator::config[2097]: No configuration found. Sep 12 17:41:00.248342 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:41:00.339011 systemd[1]: Reloading finished in 649 ms. Sep 12 17:41:00.394578 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:00.397697 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:41:00.397968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:00.399892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:00.582114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:00.587766 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:41:00.691672 kubelet[2144]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:41:00.691672 kubelet[2144]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 12 17:41:00.691672 kubelet[2144]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:41:00.692140 kubelet[2144]: I0912 17:41:00.691712 2144 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:41:01.650381 kubelet[2144]: I0912 17:41:01.650313 2144 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:41:01.650381 kubelet[2144]: I0912 17:41:01.650352 2144 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:41:01.651087 kubelet[2144]: I0912 17:41:01.651050 2144 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:41:01.680623 kubelet[2144]: I0912 17:41:01.680533 2144 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:41:01.681345 kubelet[2144]: E0912 17:41:01.681283 2144 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:41:01.690126 kubelet[2144]: E0912 17:41:01.690067 2144 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:41:01.690126 kubelet[2144]: I0912 17:41:01.690120 2144 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:41:01.698400 kubelet[2144]: I0912 17:41:01.698345 2144 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:41:01.698837 kubelet[2144]: I0912 17:41:01.698688 2144 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:41:01.698930 kubelet[2144]: I0912 17:41:01.698729 2144 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:41:01.699099 kubelet[2144]: I0912 17:41:01.698932 2144 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:41:01.699099 kubelet[2144]: I0912 17:41:01.698945 2144 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:41:01.699151 kubelet[2144]: I0912 17:41:01.699135 2144 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:41:01.704339 kubelet[2144]: I0912 17:41:01.704305 2144 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:41:01.704339 kubelet[2144]: I0912 17:41:01.704343 2144 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:41:01.704424 kubelet[2144]: I0912 17:41:01.704377 2144 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:41:01.704424 kubelet[2144]: I0912 17:41:01.704401 2144 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:41:01.710216 kubelet[2144]: E0912 17:41:01.709945 2144 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:41:01.710216 kubelet[2144]: E0912 17:41:01.710095 2144 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 
17:41:01.711503 kubelet[2144]: I0912 17:41:01.711481 2144 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:41:01.712116 kubelet[2144]: I0912 17:41:01.712085 2144 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:41:01.713219 kubelet[2144]: W0912 17:41:01.713192 2144 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:41:01.717353 kubelet[2144]: I0912 17:41:01.717306 2144 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:41:01.719797 kubelet[2144]: I0912 17:41:01.719564 2144 server.go:1289] "Started kubelet" Sep 12 17:41:01.719797 kubelet[2144]: I0912 17:41:01.718820 2144 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:41:01.721489 kubelet[2144]: I0912 17:41:01.721225 2144 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:41:01.721662 kubelet[2144]: I0912 17:41:01.721604 2144 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:41:01.721867 kubelet[2144]: I0912 17:41:01.721811 2144 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:41:01.722955 kubelet[2144]: I0912 17:41:01.721675 2144 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:41:01.724007 kubelet[2144]: I0912 17:41:01.723950 2144 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:41:01.724148 kubelet[2144]: E0912 17:41:01.724121 2144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:41:01.725914 kubelet[2144]: I0912 17:41:01.725324 2144 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:41:01.725914 kubelet[2144]: I0912 17:41:01.725408 2144 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:41:01.730468 kubelet[2144]: E0912 17:41:01.727869 2144 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186499cd52c591d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:41:01.717361107 +0000 UTC m=+1.125020963,LastTimestamp:2025-09-12 17:41:01.717361107 +0000 UTC m=+1.125020963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:41:01.730468 kubelet[2144]: E0912 17:41:01.729743 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="200ms" Sep 12 17:41:01.730468 kubelet[2144]: E0912 17:41:01.729877 2144 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:41:01.730468 kubelet[2144]: I0912 17:41:01.730198 2144 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:41:01.730840 kubelet[2144]: I0912 17:41:01.730801 2144 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:41:01.731109 kubelet[2144]: I0912 17:41:01.731061 2144 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:41:01.732071 kubelet[2144]: E0912 17:41:01.732040 2144 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:41:01.733069 kubelet[2144]: I0912 17:41:01.733008 2144 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:41:01.734593 kubelet[2144]: I0912 17:41:01.734522 2144 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:41:01.749922 kubelet[2144]: I0912 17:41:01.749875 2144 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:41:01.749922 kubelet[2144]: I0912 17:41:01.749911 2144 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:41:01.749922 kubelet[2144]: I0912 17:41:01.749932 2144 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:41:01.755972 kubelet[2144]: I0912 17:41:01.755917 2144 policy_none.go:49] "None policy: Start" Sep 12 17:41:01.755972 kubelet[2144]: I0912 17:41:01.755951 2144 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:41:01.755972 kubelet[2144]: I0912 17:41:01.755968 2144 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:41:01.756341 kubelet[2144]: I0912 17:41:01.756313 2144 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:41:01.756506 kubelet[2144]: I0912 17:41:01.756461 2144 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:41:01.756563 kubelet[2144]: I0912 17:41:01.756519 2144 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:41:01.756563 kubelet[2144]: I0912 17:41:01.756538 2144 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:41:01.756703 kubelet[2144]: E0912 17:41:01.756598 2144 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:41:01.757292 kubelet[2144]: E0912 17:41:01.757232 2144 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:41:01.764599 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:41:01.781121 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:41:01.785687 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 12 17:41:01.797601 kubelet[2144]: E0912 17:41:01.797546 2144 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:41:01.797949 kubelet[2144]: I0912 17:41:01.797820 2144 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:41:01.797949 kubelet[2144]: I0912 17:41:01.797837 2144 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:41:01.798165 kubelet[2144]: I0912 17:41:01.798120 2144 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:41:01.798985 kubelet[2144]: E0912 17:41:01.798945 2144 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:41:01.799046 kubelet[2144]: E0912 17:41:01.799017 2144 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:41:01.871937 systemd[1]: Created slice kubepods-burstable-pod45cc6e53c31974fae267787b4be28d8b.slice - libcontainer container kubepods-burstable-pod45cc6e53c31974fae267787b4be28d8b.slice. Sep 12 17:41:01.902751 kubelet[2144]: I0912 17:41:01.900832 2144 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:41:01.902751 kubelet[2144]: E0912 17:41:01.901316 2144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 12 17:41:01.902969 kubelet[2144]: E0912 17:41:01.902939 2144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:41:01.906922 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. 
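[Annotation] The repeated "dial tcp 10.0.0.139:6443: connect: connection refused" errors above (reflectors, lease controller, node registration, event posting) all reduce to the same condition: nothing is listening on the API server port yet because the static kube-apiserver pod is still being created. A stdlib-only Go sketch of the equivalent reachability probe, with the address taken from the log and an arbitrary 2-second timeout:

    // apiserver_probe.go — not part of the kubelet; just reproduces the TCP dial
    // that keeps failing in the log until the static kube-apiserver pod is up.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "10.0.0.139:6443" // address from the log
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            // This is the state the log shows between 17:41:01 and 17:41:03.
            fmt.Printf("apiserver not reachable yet: %v\n", err)
            return
        }
        defer conn.Close()
        fmt.Printf("apiserver reachable at %s\n", addr)
    }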
Sep 12 17:41:01.925046 kubelet[2144]: E0912 17:41:01.925005 2144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:41:01.926994 kubelet[2144]: I0912 17:41:01.926952 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45cc6e53c31974fae267787b4be28d8b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"45cc6e53c31974fae267787b4be28d8b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:01.926994 kubelet[2144]: I0912 17:41:01.926982 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45cc6e53c31974fae267787b4be28d8b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"45cc6e53c31974fae267787b4be28d8b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:01.927320 kubelet[2144]: I0912 17:41:01.927030 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:01.927320 kubelet[2144]: I0912 17:41:01.927053 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:01.927320 kubelet[2144]: I0912 17:41:01.927072 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:01.927320 kubelet[2144]: I0912 17:41:01.927093 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:01.927320 kubelet[2144]: I0912 17:41:01.927112 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:01.927462 kubelet[2144]: I0912 17:41:01.927129 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45cc6e53c31974fae267787b4be28d8b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"45cc6e53c31974fae267787b4be28d8b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:01.927462 kubelet[2144]: I0912 17:41:01.927144 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:01.928047 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. Sep 12 17:41:01.929851 kubelet[2144]: E0912 17:41:01.929815 2144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:41:01.930468 kubelet[2144]: E0912 17:41:01.930426 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="400ms" Sep 12 17:41:02.103816 kubelet[2144]: I0912 17:41:02.103776 2144 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:41:02.104215 kubelet[2144]: E0912 17:41:02.104157 2144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 12 17:41:02.203847 kubelet[2144]: E0912 17:41:02.203802 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:02.204997 containerd[1456]: time="2025-09-12T17:41:02.204932726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:45cc6e53c31974fae267787b4be28d8b,Namespace:kube-system,Attempt:0,}" Sep 12 17:41:02.226547 kubelet[2144]: E0912 17:41:02.226522 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:02.227414 containerd[1456]: time="2025-09-12T17:41:02.227373678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 12 17:41:02.230678 kubelet[2144]: E0912 17:41:02.230657 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:02.231168 containerd[1456]: time="2025-09-12T17:41:02.231123440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 12 17:41:02.331140 kubelet[2144]: E0912 17:41:02.331068 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="800ms" Sep 12 17:41:02.506412 kubelet[2144]: I0912 17:41:02.506289 2144 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:41:02.506736 kubelet[2144]: E0912 17:41:02.506702 2144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 12 
17:41:02.729045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1004626405.mount: Deactivated successfully. Sep 12 17:41:02.740224 containerd[1456]: time="2025-09-12T17:41:02.740156711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:02.741562 containerd[1456]: time="2025-09-12T17:41:02.741521247Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:02.744215 containerd[1456]: time="2025-09-12T17:41:02.744136541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:41:02.746485 containerd[1456]: time="2025-09-12T17:41:02.746442389Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:02.747571 containerd[1456]: time="2025-09-12T17:41:02.747495516Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:41:02.748612 containerd[1456]: time="2025-09-12T17:41:02.748566753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:41:02.749636 containerd[1456]: time="2025-09-12T17:41:02.749587784Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:02.753570 containerd[1456]: time="2025-09-12T17:41:02.753529969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:41:02.754341 containerd[1456]: time="2025-09-12T17:41:02.754294986Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.832333ms" Sep 12 17:41:02.755774 containerd[1456]: time="2025-09-12T17:41:02.755736866Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 550.6953ms" Sep 12 17:41:02.758811 containerd[1456]: time="2025-09-12T17:41:02.758503052Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 527.284216ms" Sep 12 17:41:02.849138 kubelet[2144]: E0912 17:41:02.849056 2144 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:41:03.000462 kubelet[2144]: E0912 17:41:03.000396 2144 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:41:03.011049 containerd[1456]: time="2025-09-12T17:41:03.010509555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:03.011049 containerd[1456]: time="2025-09-12T17:41:03.010650610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:03.011049 containerd[1456]: time="2025-09-12T17:41:03.010664214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:03.011049 containerd[1456]: time="2025-09-12T17:41:03.010342524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:03.011049 containerd[1456]: time="2025-09-12T17:41:03.010424206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:03.011049 containerd[1456]: time="2025-09-12T17:41:03.010443659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:03.011049 containerd[1456]: time="2025-09-12T17:41:03.010614067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:03.011049 containerd[1456]: time="2025-09-12T17:41:03.010823582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:03.016310 containerd[1456]: time="2025-09-12T17:41:03.016170290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:03.016310 containerd[1456]: time="2025-09-12T17:41:03.016279911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:03.016310 containerd[1456]: time="2025-09-12T17:41:03.016296159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:03.019263 containerd[1456]: time="2025-09-12T17:41:03.016379735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:03.039534 systemd[1]: Started cri-containerd-a4e6eeab0f578545a5f79c9a0e72303224983e50564b30b5b74db71578e6e8a4.scope - libcontainer container a4e6eeab0f578545a5f79c9a0e72303224983e50564b30b5b74db71578e6e8a4. 
Sep 12 17:41:03.046363 systemd[1]: Started cri-containerd-b2c69e2fd2709bf3b7e01d9830e1a8a828fb550a925216da8b16dc153b0cd316.scope - libcontainer container b2c69e2fd2709bf3b7e01d9830e1a8a828fb550a925216da8b16dc153b0cd316. Sep 12 17:41:03.049845 systemd[1]: Started cri-containerd-dcfe3718eb6b393aacc8485e82ec32cc98ff3a768e66d35243a64f0a62e90367.scope - libcontainer container dcfe3718eb6b393aacc8485e82ec32cc98ff3a768e66d35243a64f0a62e90367. Sep 12 17:41:03.092577 containerd[1456]: time="2025-09-12T17:41:03.092532671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:45cc6e53c31974fae267787b4be28d8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4e6eeab0f578545a5f79c9a0e72303224983e50564b30b5b74db71578e6e8a4\"" Sep 12 17:41:03.094865 kubelet[2144]: E0912 17:41:03.094659 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:03.100640 containerd[1456]: time="2025-09-12T17:41:03.100453044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2c69e2fd2709bf3b7e01d9830e1a8a828fb550a925216da8b16dc153b0cd316\"" Sep 12 17:41:03.101157 kubelet[2144]: E0912 17:41:03.101030 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:03.102824 containerd[1456]: time="2025-09-12T17:41:03.102797059Z" level=info msg="CreateContainer within sandbox \"a4e6eeab0f578545a5f79c9a0e72303224983e50564b30b5b74db71578e6e8a4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:41:03.103446 containerd[1456]: time="2025-09-12T17:41:03.102785539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcfe3718eb6b393aacc8485e82ec32cc98ff3a768e66d35243a64f0a62e90367\"" Sep 12 17:41:03.104535 kubelet[2144]: E0912 17:41:03.104504 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:03.106542 containerd[1456]: time="2025-09-12T17:41:03.106498881Z" level=info msg="CreateContainer within sandbox \"b2c69e2fd2709bf3b7e01d9830e1a8a828fb550a925216da8b16dc153b0cd316\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:41:03.121486 containerd[1456]: time="2025-09-12T17:41:03.121447532Z" level=info msg="CreateContainer within sandbox \"dcfe3718eb6b393aacc8485e82ec32cc98ff3a768e66d35243a64f0a62e90367\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:41:03.124922 containerd[1456]: time="2025-09-12T17:41:03.124873473Z" level=info msg="CreateContainer within sandbox \"a4e6eeab0f578545a5f79c9a0e72303224983e50564b30b5b74db71578e6e8a4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5d514ed784e827c269dfdd39aa165cd6fa2f18acba0dc64f9dc48f0a69a4d800\"" Sep 12 17:41:03.125580 containerd[1456]: time="2025-09-12T17:41:03.125550021Z" level=info msg="StartContainer for \"5d514ed784e827c269dfdd39aa165cd6fa2f18acba0dc64f9dc48f0a69a4d800\"" Sep 12 17:41:03.132207 kubelet[2144]: E0912 17:41:03.132140 2144 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="1.6s" Sep 12 17:41:03.138947 containerd[1456]: time="2025-09-12T17:41:03.138890952Z" level=info msg="CreateContainer within sandbox \"b2c69e2fd2709bf3b7e01d9830e1a8a828fb550a925216da8b16dc153b0cd316\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e50010bde6332207b1725c03c3044b5e9ad82b2b79e4efba8f1b2df763cb369d\"" Sep 12 17:41:03.140674 containerd[1456]: time="2025-09-12T17:41:03.139480369Z" level=info msg="StartContainer for \"e50010bde6332207b1725c03c3044b5e9ad82b2b79e4efba8f1b2df763cb369d\"" Sep 12 17:41:03.150021 containerd[1456]: time="2025-09-12T17:41:03.149957749Z" level=info msg="CreateContainer within sandbox \"dcfe3718eb6b393aacc8485e82ec32cc98ff3a768e66d35243a64f0a62e90367\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"798a9e9ffb900067b9b6ae88e46417d4311f9efc35b5b4962b908666001806c3\"" Sep 12 17:41:03.151121 containerd[1456]: time="2025-09-12T17:41:03.151031710Z" level=info msg="StartContainer for \"798a9e9ffb900067b9b6ae88e46417d4311f9efc35b5b4962b908666001806c3\"" Sep 12 17:41:03.158498 systemd[1]: Started cri-containerd-5d514ed784e827c269dfdd39aa165cd6fa2f18acba0dc64f9dc48f0a69a4d800.scope - libcontainer container 5d514ed784e827c269dfdd39aa165cd6fa2f18acba0dc64f9dc48f0a69a4d800. Sep 12 17:41:03.162533 kubelet[2144]: E0912 17:41:03.162478 2144 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:41:03.170438 systemd[1]: Started cri-containerd-e50010bde6332207b1725c03c3044b5e9ad82b2b79e4efba8f1b2df763cb369d.scope - libcontainer container e50010bde6332207b1725c03c3044b5e9ad82b2b79e4efba8f1b2df763cb369d. Sep 12 17:41:03.192551 systemd[1]: Started cri-containerd-798a9e9ffb900067b9b6ae88e46417d4311f9efc35b5b4962b908666001806c3.scope - libcontainer container 798a9e9ffb900067b9b6ae88e46417d4311f9efc35b5b4962b908666001806c3. 
Sep 12 17:41:03.215214 containerd[1456]: time="2025-09-12T17:41:03.215156123Z" level=info msg="StartContainer for \"5d514ed784e827c269dfdd39aa165cd6fa2f18acba0dc64f9dc48f0a69a4d800\" returns successfully" Sep 12 17:41:03.242195 containerd[1456]: time="2025-09-12T17:41:03.242116298Z" level=info msg="StartContainer for \"e50010bde6332207b1725c03c3044b5e9ad82b2b79e4efba8f1b2df763cb369d\" returns successfully" Sep 12 17:41:03.242344 containerd[1456]: time="2025-09-12T17:41:03.242223275Z" level=info msg="StartContainer for \"798a9e9ffb900067b9b6ae88e46417d4311f9efc35b5b4962b908666001806c3\" returns successfully" Sep 12 17:41:03.310278 kubelet[2144]: I0912 17:41:03.309682 2144 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:41:03.765324 kubelet[2144]: E0912 17:41:03.765272 2144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:41:03.765324 kubelet[2144]: E0912 17:41:03.765305 2144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:41:03.765548 kubelet[2144]: E0912 17:41:03.765418 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:03.765548 kubelet[2144]: E0912 17:41:03.765418 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:03.767293 kubelet[2144]: E0912 17:41:03.767177 2144 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:41:03.767293 kubelet[2144]: E0912 17:41:03.767287 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:04.652210 kubelet[2144]: I0912 17:41:04.652147 2144 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:41:04.652210 kubelet[2144]: E0912 17:41:04.652214 2144 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 17:41:04.707267 kubelet[2144]: I0912 17:41:04.707182 2144 apiserver.go:52] "Watching apiserver" Sep 12 17:41:04.725363 kubelet[2144]: I0912 17:41:04.725301 2144 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:04.725529 kubelet[2144]: I0912 17:41:04.725427 2144 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:41:04.731406 kubelet[2144]: E0912 17:41:04.731367 2144 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:04.731406 kubelet[2144]: I0912 17:41:04.731388 2144 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:04.732870 kubelet[2144]: E0912 17:41:04.732845 2144 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:04.732870 kubelet[2144]: I0912 17:41:04.732862 2144 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:04.735003 kubelet[2144]: E0912 17:41:04.734959 2144 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:04.768016 kubelet[2144]: I0912 17:41:04.767981 2144 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:04.770344 kubelet[2144]: I0912 17:41:04.768333 2144 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:04.770601 kubelet[2144]: E0912 17:41:04.770565 2144 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:04.770845 kubelet[2144]: E0912 17:41:04.770820 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:04.770961 kubelet[2144]: E0912 17:41:04.770928 2144 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:04.771163 kubelet[2144]: E0912 17:41:04.771149 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:06.620793 systemd[1]: Reloading requested from client PID 2435 ('systemctl') (unit session-7.scope)... Sep 12 17:41:06.620812 systemd[1]: Reloading... Sep 12 17:41:06.707463 zram_generator::config[2475]: No configuration found. Sep 12 17:41:06.838562 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:41:06.933632 systemd[1]: Reloading finished in 312 ms. Sep 12 17:41:06.989220 kubelet[2144]: I0912 17:41:06.989076 2144 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:41:06.989139 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:07.001018 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:41:07.001431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:07.001498 systemd[1]: kubelet.service: Consumed 1.614s CPU time, 133.8M memory peak, 0B memory swap peak. Sep 12 17:41:07.013878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:41:07.210977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:41:07.218329 (kubelet)[2519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:41:07.266463 kubelet[2519]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:41:07.266463 kubelet[2519]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:41:07.266463 kubelet[2519]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:41:07.266463 kubelet[2519]: I0912 17:41:07.266313 2519 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:41:07.273211 kubelet[2519]: I0912 17:41:07.273145 2519 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:41:07.273211 kubelet[2519]: I0912 17:41:07.273174 2519 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:41:07.273448 kubelet[2519]: I0912 17:41:07.273422 2519 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:41:07.274628 kubelet[2519]: I0912 17:41:07.274603 2519 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 17:41:07.278538 kubelet[2519]: I0912 17:41:07.278492 2519 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:41:07.281624 kubelet[2519]: E0912 17:41:07.281592 2519 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:41:07.281624 kubelet[2519]: I0912 17:41:07.281617 2519 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:41:07.288718 kubelet[2519]: I0912 17:41:07.288679 2519 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:41:07.288974 kubelet[2519]: I0912 17:41:07.288945 2519 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:41:07.289114 kubelet[2519]: I0912 17:41:07.288968 2519 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:41:07.289216 kubelet[2519]: I0912 17:41:07.289126 2519 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:41:07.289216 kubelet[2519]: I0912 17:41:07.289136 2519 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:41:07.289216 kubelet[2519]: I0912 17:41:07.289182 2519 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:41:07.289412 kubelet[2519]: I0912 17:41:07.289397 2519 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:41:07.289444 kubelet[2519]: I0912 17:41:07.289413 2519 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:41:07.289444 kubelet[2519]: I0912 17:41:07.289434 2519 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:41:07.289489 kubelet[2519]: I0912 17:41:07.289449 2519 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:41:07.290817 kubelet[2519]: I0912 17:41:07.290795 2519 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:41:07.293264 kubelet[2519]: I0912 17:41:07.291262 2519 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:41:07.298876 kubelet[2519]: I0912 17:41:07.298852 2519 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:41:07.299061 kubelet[2519]: I0912 17:41:07.298916 2519 server.go:1289] "Started kubelet" Sep 12 17:41:07.299344 kubelet[2519]: I0912 17:41:07.299235 2519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 
17:41:07.299963 kubelet[2519]: I0912 17:41:07.299621 2519 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:41:07.299963 kubelet[2519]: I0912 17:41:07.299730 2519 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:41:07.300594 kubelet[2519]: I0912 17:41:07.300568 2519 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:41:07.300699 kubelet[2519]: I0912 17:41:07.300685 2519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:41:07.302479 kubelet[2519]: I0912 17:41:07.302290 2519 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:41:07.305691 kubelet[2519]: I0912 17:41:07.305658 2519 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:41:07.305854 kubelet[2519]: I0912 17:41:07.305835 2519 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:41:07.306619 kubelet[2519]: I0912 17:41:07.306469 2519 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:41:07.306619 kubelet[2519]: I0912 17:41:07.306587 2519 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:41:07.307485 kubelet[2519]: I0912 17:41:07.306905 2519 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:41:07.307924 kubelet[2519]: E0912 17:41:07.307891 2519 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:41:07.309323 kubelet[2519]: I0912 17:41:07.308823 2519 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:41:07.319840 kubelet[2519]: I0912 17:41:07.319788 2519 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:41:07.321596 kubelet[2519]: I0912 17:41:07.321556 2519 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:41:07.321649 kubelet[2519]: I0912 17:41:07.321597 2519 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:41:07.321649 kubelet[2519]: I0912 17:41:07.321636 2519 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
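[Annotation] The dynamic_serving_content controller above watches the kubelet's serving certificate pair at /var/lib/kubelet/pki/kubelet.crt and kubelet.key. Purely as an illustration of what that pair is, the sketch below loads and inspects it with the standard library; the paths are the ones named in the log, everything else is generic TLS handling rather than kubelet code.

    // serving_cert.go — load and inspect the kubelet serving cert pair named in the log.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
    )

    func main() {
        pair, err := tls.LoadX509KeyPair(
            "/var/lib/kubelet/pki/kubelet.crt",
            "/var/lib/kubelet/pki/kubelet.key",
        )
        if err != nil {
            fmt.Println("could not load serving cert pair:", err)
            return
        }

        leaf, err := x509.ParseCertificate(pair.Certificate[0])
        if err != nil {
            fmt.Println("could not parse leaf certificate:", err)
            return
        }
        fmt.Printf("kubelet serving cert: subject=%s notAfter=%s\n", leaf.Subject, leaf.NotAfter)
    }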
Sep 12 17:41:07.321649 kubelet[2519]: I0912 17:41:07.321647 2519 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:41:07.321761 kubelet[2519]: E0912 17:41:07.321720 2519 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:41:07.355281 kubelet[2519]: I0912 17:41:07.355113 2519 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:41:07.355281 kubelet[2519]: I0912 17:41:07.355135 2519 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:41:07.355281 kubelet[2519]: I0912 17:41:07.355156 2519 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:41:07.355540 kubelet[2519]: I0912 17:41:07.355334 2519 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:41:07.355540 kubelet[2519]: I0912 17:41:07.355347 2519 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:41:07.355540 kubelet[2519]: I0912 17:41:07.355366 2519 policy_none.go:49] "None policy: Start" Sep 12 17:41:07.355540 kubelet[2519]: I0912 17:41:07.355375 2519 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:41:07.355540 kubelet[2519]: I0912 17:41:07.355386 2519 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:41:07.355540 kubelet[2519]: I0912 17:41:07.355484 2519 state_mem.go:75] "Updated machine memory state" Sep 12 17:41:07.360489 kubelet[2519]: E0912 17:41:07.360293 2519 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:41:07.360618 kubelet[2519]: I0912 17:41:07.360577 2519 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:41:07.360651 kubelet[2519]: I0912 17:41:07.360608 2519 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:41:07.361366 kubelet[2519]: I0912 17:41:07.361272 2519 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:41:07.362020 kubelet[2519]: E0912 17:41:07.361996 2519 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:41:07.423546 kubelet[2519]: I0912 17:41:07.423467 2519 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:07.423744 kubelet[2519]: I0912 17:41:07.423681 2519 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:07.423744 kubelet[2519]: I0912 17:41:07.423686 2519 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:07.508856 kubelet[2519]: I0912 17:41:07.508466 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:07.508856 kubelet[2519]: I0912 17:41:07.508563 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:07.508856 kubelet[2519]: I0912 17:41:07.508625 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:07.508856 kubelet[2519]: I0912 17:41:07.508659 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:07.508856 kubelet[2519]: I0912 17:41:07.508688 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45cc6e53c31974fae267787b4be28d8b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"45cc6e53c31974fae267787b4be28d8b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:07.509183 kubelet[2519]: I0912 17:41:07.508751 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45cc6e53c31974fae267787b4be28d8b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"45cc6e53c31974fae267787b4be28d8b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:07.509183 kubelet[2519]: I0912 17:41:07.508840 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45cc6e53c31974fae267787b4be28d8b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"45cc6e53c31974fae267787b4be28d8b\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:07.509183 kubelet[2519]: I0912 17:41:07.508886 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:07.509183 kubelet[2519]: I0912 17:41:07.508938 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:07.733762 kubelet[2519]: E0912 17:41:07.733683 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:07.734845 kubelet[2519]: E0912 17:41:07.734582 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:07.734845 kubelet[2519]: E0912 17:41:07.734633 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:08.214518 kubelet[2519]: I0912 17:41:08.214432 2519 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:41:08.289976 kubelet[2519]: I0912 17:41:08.289888 2519 apiserver.go:52] "Watching apiserver" Sep 12 17:41:08.299271 kubelet[2519]: I0912 17:41:08.299010 2519 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 17:41:08.299521 kubelet[2519]: I0912 17:41:08.299300 2519 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:41:08.306158 kubelet[2519]: I0912 17:41:08.306123 2519 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:41:08.336831 kubelet[2519]: I0912 17:41:08.336748 2519 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:08.339397 kubelet[2519]: I0912 17:41:08.337317 2519 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:08.369452 kubelet[2519]: I0912 17:41:08.369404 2519 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:08.532112 kubelet[2519]: E0912 17:41:08.531917 2519 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:41:08.532288 kubelet[2519]: E0912 17:41:08.532149 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:08.532565 kubelet[2519]: E0912 17:41:08.532469 2519 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:41:08.532565 kubelet[2519]: E0912 17:41:08.532475 2519 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 17:41:08.532760 kubelet[2519]: E0912 17:41:08.532625 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:08.532760 kubelet[2519]: E0912 17:41:08.532693 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:08.960659 kubelet[2519]: I0912 17:41:08.960372 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.960343395 podStartE2EDuration="1.960343395s" podCreationTimestamp="2025-09-12 17:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:41:08.793736963 +0000 UTC m=+1.569934593" watchObservedRunningTime="2025-09-12 17:41:08.960343395 +0000 UTC m=+1.736541015" Sep 12 17:41:09.070980 kubelet[2519]: I0912 17:41:09.069923 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.069887941 podStartE2EDuration="2.069887941s" podCreationTimestamp="2025-09-12 17:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:41:08.960842072 +0000 UTC m=+1.737039712" watchObservedRunningTime="2025-09-12 17:41:09.069887941 +0000 UTC m=+1.846085561" Sep 12 17:41:09.080780 kubelet[2519]: I0912 17:41:09.080686 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.080666423 podStartE2EDuration="2.080666423s" podCreationTimestamp="2025-09-12 17:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:41:09.070650963 +0000 UTC m=+1.846848583" watchObservedRunningTime="2025-09-12 17:41:09.080666423 +0000 UTC m=+1.856864043" Sep 12 17:41:09.338885 kubelet[2519]: E0912 17:41:09.338392 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:09.338885 kubelet[2519]: E0912 17:41:09.338402 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:09.338885 kubelet[2519]: E0912 17:41:09.338676 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:10.339500 kubelet[2519]: E0912 17:41:10.339448 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:11.748224 kubelet[2519]: E0912 17:41:11.748156 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:11.773410 kubelet[2519]: I0912 17:41:11.773367 2519 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:41:11.773797 containerd[1456]: time="2025-09-12T17:41:11.773737372Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
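[Annotation] The pod_startup_latency_tracker entries above derive podStartSLOduration by subtracting podCreationTimestamp from observedRunningTime. The arithmetic for the kube-apiserver-localhost entry, with both timestamps copied from the log:

    // startup_latency.go — podStartSLOduration = observedRunningTime - podCreationTimestamp.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST"

        created, err := time.Parse(layout, "2025-09-12 17:41:07 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-09-12 17:41:08.960343395 +0000 UTC")
        if err != nil {
            panic(err)
        }

        fmt.Printf("podStartSLOduration = %v\n", running.Sub(created)) // 1.960343395s, as logged
    }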
Sep 12 17:41:11.774224 kubelet[2519]: I0912 17:41:11.773929 2519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:41:12.570268 update_engine[1446]: I20250912 17:41:12.570106 1446 update_attempter.cc:509] Updating boot flags... Sep 12 17:41:12.607312 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2579) Sep 12 17:41:12.654284 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2582) Sep 12 17:41:12.690570 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2582) Sep 12 17:41:12.750602 systemd[1]: Created slice kubepods-besteffort-pod97610917_1dac_4628_b38d_f771a47dbe5a.slice - libcontainer container kubepods-besteffort-pod97610917_1dac_4628_b38d_f771a47dbe5a.slice. Sep 12 17:41:12.842123 kubelet[2519]: I0912 17:41:12.841957 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97610917-1dac-4628-b38d-f771a47dbe5a-xtables-lock\") pod \"kube-proxy-p8wdk\" (UID: \"97610917-1dac-4628-b38d-f771a47dbe5a\") " pod="kube-system/kube-proxy-p8wdk" Sep 12 17:41:12.842123 kubelet[2519]: I0912 17:41:12.842008 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97610917-1dac-4628-b38d-f771a47dbe5a-lib-modules\") pod \"kube-proxy-p8wdk\" (UID: \"97610917-1dac-4628-b38d-f771a47dbe5a\") " pod="kube-system/kube-proxy-p8wdk" Sep 12 17:41:12.842123 kubelet[2519]: I0912 17:41:12.842031 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rs8r\" (UniqueName: \"kubernetes.io/projected/97610917-1dac-4628-b38d-f771a47dbe5a-kube-api-access-8rs8r\") pod \"kube-proxy-p8wdk\" (UID: \"97610917-1dac-4628-b38d-f771a47dbe5a\") " pod="kube-system/kube-proxy-p8wdk" Sep 12 17:41:12.842123 kubelet[2519]: I0912 17:41:12.842058 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/97610917-1dac-4628-b38d-f771a47dbe5a-kube-proxy\") pod \"kube-proxy-p8wdk\" (UID: \"97610917-1dac-4628-b38d-f771a47dbe5a\") " pod="kube-system/kube-proxy-p8wdk" Sep 12 17:41:12.965286 systemd[1]: Created slice kubepods-besteffort-poda08f1697_d9dd_4efa_a15c_7463b482e3df.slice - libcontainer container kubepods-besteffort-poda08f1697_d9dd_4efa_a15c_7463b482e3df.slice. 
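[Annotation] At this point the kubelet has received the node's pod CIDR (192.168.0.0/24) and pushed it to the runtime through CRI. A stdlib-only sketch that parses that CIDR and tests a made-up pod address for membership; only the CIDR itself comes from the log.

    // podcidr.go — parse the pod CIDR from the log and check a hypothetical pod IP.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, cidr, err := net.ParseCIDR("192.168.0.0/24") // value from the log
        if err != nil {
            panic(err)
        }

        podIP := net.ParseIP("192.168.0.17") // illustrative pod address, not from the log
        fmt.Printf("pod CIDR %s contains %s: %v\n", cidr, podIP, cidr.Contains(podIP))
    }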
Sep 12 17:41:13.044161 kubelet[2519]: I0912 17:41:13.044093 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a08f1697-d9dd-4efa-a15c-7463b482e3df-var-lib-calico\") pod \"tigera-operator-755d956888-jlqbc\" (UID: \"a08f1697-d9dd-4efa-a15c-7463b482e3df\") " pod="tigera-operator/tigera-operator-755d956888-jlqbc" Sep 12 17:41:13.044161 kubelet[2519]: I0912 17:41:13.044149 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9p6x\" (UniqueName: \"kubernetes.io/projected/a08f1697-d9dd-4efa-a15c-7463b482e3df-kube-api-access-d9p6x\") pod \"tigera-operator-755d956888-jlqbc\" (UID: \"a08f1697-d9dd-4efa-a15c-7463b482e3df\") " pod="tigera-operator/tigera-operator-755d956888-jlqbc" Sep 12 17:41:13.064578 kubelet[2519]: E0912 17:41:13.064440 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:13.065431 containerd[1456]: time="2025-09-12T17:41:13.065386358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p8wdk,Uid:97610917-1dac-4628-b38d-f771a47dbe5a,Namespace:kube-system,Attempt:0,}" Sep 12 17:41:13.098427 containerd[1456]: time="2025-09-12T17:41:13.098142014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:13.098427 containerd[1456]: time="2025-09-12T17:41:13.098231956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:13.098427 containerd[1456]: time="2025-09-12T17:41:13.098271808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:13.098896 containerd[1456]: time="2025-09-12T17:41:13.098412331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:13.127608 systemd[1]: Started cri-containerd-d58f692c55b84cc4d3c645971d11b6737aa231f6a0baa77c5fc75c2dc741ce5b.scope - libcontainer container d58f692c55b84cc4d3c645971d11b6737aa231f6a0baa77c5fc75c2dc741ce5b. 
Sep 12 17:41:13.158896 containerd[1456]: time="2025-09-12T17:41:13.158802405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p8wdk,Uid:97610917-1dac-4628-b38d-f771a47dbe5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d58f692c55b84cc4d3c645971d11b6737aa231f6a0baa77c5fc75c2dc741ce5b\"" Sep 12 17:41:13.160069 kubelet[2519]: E0912 17:41:13.160028 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:13.167197 containerd[1456]: time="2025-09-12T17:41:13.167135972Z" level=info msg="CreateContainer within sandbox \"d58f692c55b84cc4d3c645971d11b6737aa231f6a0baa77c5fc75c2dc741ce5b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:41:13.187922 containerd[1456]: time="2025-09-12T17:41:13.187858560Z" level=info msg="CreateContainer within sandbox \"d58f692c55b84cc4d3c645971d11b6737aa231f6a0baa77c5fc75c2dc741ce5b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5bfaed47bec9cb5d83cb0400778343d34686056c24d7f3207253401b3eae3529\"" Sep 12 17:41:13.188730 containerd[1456]: time="2025-09-12T17:41:13.188678861Z" level=info msg="StartContainer for \"5bfaed47bec9cb5d83cb0400778343d34686056c24d7f3207253401b3eae3529\"" Sep 12 17:41:13.229648 systemd[1]: Started cri-containerd-5bfaed47bec9cb5d83cb0400778343d34686056c24d7f3207253401b3eae3529.scope - libcontainer container 5bfaed47bec9cb5d83cb0400778343d34686056c24d7f3207253401b3eae3529. Sep 12 17:41:13.298391 containerd[1456]: time="2025-09-12T17:41:13.298338969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-jlqbc,Uid:a08f1697-d9dd-4efa-a15c-7463b482e3df,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:41:13.384554 containerd[1456]: time="2025-09-12T17:41:13.384402639Z" level=info msg="StartContainer for \"5bfaed47bec9cb5d83cb0400778343d34686056c24d7f3207253401b3eae3529\" returns successfully" Sep 12 17:41:13.523158 containerd[1456]: time="2025-09-12T17:41:13.522985110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:13.523158 containerd[1456]: time="2025-09-12T17:41:13.523070114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:13.523158 containerd[1456]: time="2025-09-12T17:41:13.523086022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:13.523450 containerd[1456]: time="2025-09-12T17:41:13.523280033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:13.545528 systemd[1]: Started cri-containerd-dd09792c91b90d8734a16ec0e15e1fad92d1a3ddc48f51cec2680dcfce0b5c8d.scope - libcontainer container dd09792c91b90d8734a16ec0e15e1fad92d1a3ddc48f51cec2680dcfce0b5c8d. 
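The containerd messages above trace the usual CRI sequence for kube-proxy-p8wdk: RunPodSandbox returns a sandbox id, CreateContainer is issued inside that sandbox and returns a container id, and StartContainer runs it. When reading a long capture like this one, a small helper that pairs names with the ids they return can save scrolling; the rough sketch below only targets the two message shapes visible in this log, and the file name in the usage comment is hypothetical.

import re

# "... RunPodSandbox for &PodSandboxMetadata{Name:<pod>,...} returns sandbox id \"<hex>\""
SANDBOX_RE = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),[^"]*returns sandbox id \\"([0-9a-f]+)\\"')
# "... for &ContainerMetadata{Name:<container>,...} returns container id \"<hex>\""
CONTAINER_RE = re.compile(
    r'for &ContainerMetadata\{Name:([^,]+),[^"]*returns container id \\"([0-9a-f]+)\\"')

def index_ids(journal_text: str):
    """Return (pod name -> sandbox id, container name -> container id)."""
    return dict(SANDBOX_RE.findall(journal_text)), dict(CONTAINER_RE.findall(journal_text))

# Usage (hypothetical file name):
#   sandboxes, containers = index_ids(open("node.journal").read())
#   -> ({'kube-proxy-p8wdk': 'd58f69...'}, {'kube-proxy': '5bfaed...'})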
Sep 12 17:41:13.590890 containerd[1456]: time="2025-09-12T17:41:13.590823643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-jlqbc,Uid:a08f1697-d9dd-4efa-a15c-7463b482e3df,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dd09792c91b90d8734a16ec0e15e1fad92d1a3ddc48f51cec2680dcfce0b5c8d\"" Sep 12 17:41:13.593186 containerd[1456]: time="2025-09-12T17:41:13.592898720Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:41:14.228275 kubelet[2519]: E0912 17:41:14.228187 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:14.390494 kubelet[2519]: E0912 17:41:14.390431 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:14.390494 kubelet[2519]: E0912 17:41:14.390463 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:15.402606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1189548933.mount: Deactivated successfully. Sep 12 17:41:15.775371 containerd[1456]: time="2025-09-12T17:41:15.775269797Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:15.776161 containerd[1456]: time="2025-09-12T17:41:15.776105783Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:41:15.777407 containerd[1456]: time="2025-09-12T17:41:15.777359639Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:15.780217 containerd[1456]: time="2025-09-12T17:41:15.780164036Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:15.780987 containerd[1456]: time="2025-09-12T17:41:15.780930957Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.187964905s" Sep 12 17:41:15.780987 containerd[1456]: time="2025-09-12T17:41:15.780981669Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:41:15.786833 containerd[1456]: time="2025-09-12T17:41:15.786765831Z" level=info msg="CreateContainer within sandbox \"dd09792c91b90d8734a16ec0e15e1fad92d1a3ddc48f51cec2680dcfce0b5c8d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:41:15.801993 containerd[1456]: time="2025-09-12T17:41:15.801942052Z" level=info msg="CreateContainer within sandbox \"dd09792c91b90d8734a16ec0e15e1fad92d1a3ddc48f51cec2680dcfce0b5c8d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ae99c79673a9ff9be4769a19b08d9a52fc31bd344743db11867b1df29801d62c\"" Sep 12 17:41:15.802732 
containerd[1456]: time="2025-09-12T17:41:15.802666436Z" level=info msg="StartContainer for \"ae99c79673a9ff9be4769a19b08d9a52fc31bd344743db11867b1df29801d62c\"" Sep 12 17:41:15.847587 systemd[1]: Started cri-containerd-ae99c79673a9ff9be4769a19b08d9a52fc31bd344743db11867b1df29801d62c.scope - libcontainer container ae99c79673a9ff9be4769a19b08d9a52fc31bd344743db11867b1df29801d62c. Sep 12 17:41:15.880946 containerd[1456]: time="2025-09-12T17:41:15.880895026Z" level=info msg="StartContainer for \"ae99c79673a9ff9be4769a19b08d9a52fc31bd344743db11867b1df29801d62c\" returns successfully" Sep 12 17:41:16.404987 kubelet[2519]: I0912 17:41:16.404407 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p8wdk" podStartSLOduration=4.404388847 podStartE2EDuration="4.404388847s" podCreationTimestamp="2025-09-12 17:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:41:14.412952451 +0000 UTC m=+7.189150091" watchObservedRunningTime="2025-09-12 17:41:16.404388847 +0000 UTC m=+9.180586467" Sep 12 17:41:16.404987 kubelet[2519]: I0912 17:41:16.404544 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-jlqbc" podStartSLOduration=2.2148998 podStartE2EDuration="4.404535503s" podCreationTimestamp="2025-09-12 17:41:12 +0000 UTC" firstStartedPulling="2025-09-12 17:41:13.59231245 +0000 UTC m=+6.368510070" lastFinishedPulling="2025-09-12 17:41:15.781948153 +0000 UTC m=+8.558145773" observedRunningTime="2025-09-12 17:41:16.404156484 +0000 UTC m=+9.180354114" watchObservedRunningTime="2025-09-12 17:41:16.404535503 +0000 UTC m=+9.180733143" Sep 12 17:41:18.239535 systemd[1]: cri-containerd-ae99c79673a9ff9be4769a19b08d9a52fc31bd344743db11867b1df29801d62c.scope: Deactivated successfully. Sep 12 17:41:18.283775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae99c79673a9ff9be4769a19b08d9a52fc31bd344743db11867b1df29801d62c-rootfs.mount: Deactivated successfully. 
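The two pod_startup_latency_tracker entries above are internally consistent: for the tigera-operator pod, podStartE2EDuration (pod creation to observed running) minus the image-pull window (firstStartedPulling to lastFinishedPulling) reproduces the reported podStartSLOduration, i.e. the SLO figure appears to exclude image-pull time. A short check of that arithmetic, with the log's timestamps rounded to microseconds so datetime.fromisoformat accepts them:

from datetime import datetime

# Values copied from the tigera-operator startup-latency entry above.
first_pull = datetime.fromisoformat("2025-09-12 17:41:13.592312")
last_pull  = datetime.fromisoformat("2025-09-12 17:41:15.781948")
e2e_seconds = 4.404536  # podStartE2EDuration, rounded

pull_seconds = (last_pull - first_pull).total_seconds()
print(f"image pull window : {pull_seconds:.6f}s")               # ~2.189636s
print(f"E2E minus pull    : {e2e_seconds - pull_seconds:.6f}s")  # ~2.214900s ~= podStartSLOduration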
Sep 12 17:41:18.572702 containerd[1456]: time="2025-09-12T17:41:18.572454457Z" level=info msg="shim disconnected" id=ae99c79673a9ff9be4769a19b08d9a52fc31bd344743db11867b1df29801d62c namespace=k8s.io Sep 12 17:41:18.572702 containerd[1456]: time="2025-09-12T17:41:18.572553949Z" level=warning msg="cleaning up after shim disconnected" id=ae99c79673a9ff9be4769a19b08d9a52fc31bd344743db11867b1df29801d62c namespace=k8s.io Sep 12 17:41:18.572702 containerd[1456]: time="2025-09-12T17:41:18.572564237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:41:18.807923 kubelet[2519]: E0912 17:41:18.807798 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:19.403342 kubelet[2519]: I0912 17:41:19.403293 2519 scope.go:117] "RemoveContainer" containerID="ae99c79673a9ff9be4769a19b08d9a52fc31bd344743db11867b1df29801d62c" Sep 12 17:41:19.405431 containerd[1456]: time="2025-09-12T17:41:19.405390160Z" level=info msg="CreateContainer within sandbox \"dd09792c91b90d8734a16ec0e15e1fad92d1a3ddc48f51cec2680dcfce0b5c8d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 12 17:41:19.748635 containerd[1456]: time="2025-09-12T17:41:19.748562058Z" level=info msg="CreateContainer within sandbox \"dd09792c91b90d8734a16ec0e15e1fad92d1a3ddc48f51cec2680dcfce0b5c8d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"de02314006e1fe0e7e325943441d1df836ff263df19c8b885bce24c676b4c370\"" Sep 12 17:41:19.749347 containerd[1456]: time="2025-09-12T17:41:19.749298064Z" level=info msg="StartContainer for \"de02314006e1fe0e7e325943441d1df836ff263df19c8b885bce24c676b4c370\"" Sep 12 17:41:19.776342 systemd[1]: run-containerd-runc-k8s.io-de02314006e1fe0e7e325943441d1df836ff263df19c8b885bce24c676b4c370-runc.i5t44P.mount: Deactivated successfully. Sep 12 17:41:19.790582 systemd[1]: Started cri-containerd-de02314006e1fe0e7e325943441d1df836ff263df19c8b885bce24c676b4c370.scope - libcontainer container de02314006e1fe0e7e325943441d1df836ff263df19c8b885bce24c676b4c370. Sep 12 17:41:19.820153 containerd[1456]: time="2025-09-12T17:41:19.820111234Z" level=info msg="StartContainer for \"de02314006e1fe0e7e325943441d1df836ff263df19c8b885bce24c676b4c370\" returns successfully" Sep 12 17:41:21.644823 sudo[1638]: pam_unix(sudo:session): session closed for user root Sep 12 17:41:21.659053 sshd[1635]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:21.664081 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:51624.service: Deactivated successfully. Sep 12 17:41:21.666387 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:41:21.666606 systemd[1]: session-7.scope: Consumed 7.463s CPU time, 163.2M memory peak, 0B memory swap peak. Sep 12 17:41:21.667225 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:41:21.668500 systemd-logind[1445]: Removed session 7. 
Sep 12 17:41:21.753294 kubelet[2519]: E0912 17:41:21.753171 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:22.410363 kubelet[2519]: E0912 17:41:22.410310 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:25.779389 systemd[1]: Created slice kubepods-besteffort-pod9a7ab2e4_4adf_4aa8_99c0_fc337f807eb5.slice - libcontainer container kubepods-besteffort-pod9a7ab2e4_4adf_4aa8_99c0_fc337f807eb5.slice. Sep 12 17:41:25.818595 kubelet[2519]: I0912 17:41:25.818537 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw9qn\" (UniqueName: \"kubernetes.io/projected/9a7ab2e4-4adf-4aa8-99c0-fc337f807eb5-kube-api-access-nw9qn\") pod \"calico-typha-99ffdccfb-plsqd\" (UID: \"9a7ab2e4-4adf-4aa8-99c0-fc337f807eb5\") " pod="calico-system/calico-typha-99ffdccfb-plsqd" Sep 12 17:41:25.818595 kubelet[2519]: I0912 17:41:25.818586 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a7ab2e4-4adf-4aa8-99c0-fc337f807eb5-tigera-ca-bundle\") pod \"calico-typha-99ffdccfb-plsqd\" (UID: \"9a7ab2e4-4adf-4aa8-99c0-fc337f807eb5\") " pod="calico-system/calico-typha-99ffdccfb-plsqd" Sep 12 17:41:25.818595 kubelet[2519]: I0912 17:41:25.818602 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9a7ab2e4-4adf-4aa8-99c0-fc337f807eb5-typha-certs\") pod \"calico-typha-99ffdccfb-plsqd\" (UID: \"9a7ab2e4-4adf-4aa8-99c0-fc337f807eb5\") " pod="calico-system/calico-typha-99ffdccfb-plsqd" Sep 12 17:41:26.060569 systemd[1]: Created slice kubepods-besteffort-pod7c59acce_b675_499c_8d37_a781a0bc8f04.slice - libcontainer container kubepods-besteffort-pod7c59acce_b675_499c_8d37_a781a0bc8f04.slice. Sep 12 17:41:26.084289 kubelet[2519]: E0912 17:41:26.082756 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:26.084452 containerd[1456]: time="2025-09-12T17:41:26.083171648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-99ffdccfb-plsqd,Uid:9a7ab2e4-4adf-4aa8-99c0-fc337f807eb5,Namespace:calico-system,Attempt:0,}" Sep 12 17:41:26.112752 containerd[1456]: time="2025-09-12T17:41:26.112649907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:26.112752 containerd[1456]: time="2025-09-12T17:41:26.112720517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:26.112752 containerd[1456]: time="2025-09-12T17:41:26.112733801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:26.113005 containerd[1456]: time="2025-09-12T17:41:26.112848403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:26.121642 kubelet[2519]: I0912 17:41:26.121129 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c59acce-b675-499c-8d37-a781a0bc8f04-tigera-ca-bundle\") pod \"calico-node-wn9x2\" (UID: \"7c59acce-b675-499c-8d37-a781a0bc8f04\") " pod="calico-system/calico-node-wn9x2" Sep 12 17:41:26.121642 kubelet[2519]: I0912 17:41:26.121180 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7c59acce-b675-499c-8d37-a781a0bc8f04-flexvol-driver-host\") pod \"calico-node-wn9x2\" (UID: \"7c59acce-b675-499c-8d37-a781a0bc8f04\") " pod="calico-system/calico-node-wn9x2" Sep 12 17:41:26.121642 kubelet[2519]: I0912 17:41:26.121199 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7c59acce-b675-499c-8d37-a781a0bc8f04-var-run-calico\") pod \"calico-node-wn9x2\" (UID: \"7c59acce-b675-499c-8d37-a781a0bc8f04\") " pod="calico-system/calico-node-wn9x2" Sep 12 17:41:26.121642 kubelet[2519]: I0912 17:41:26.121217 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt7hc\" (UniqueName: \"kubernetes.io/projected/7c59acce-b675-499c-8d37-a781a0bc8f04-kube-api-access-dt7hc\") pod \"calico-node-wn9x2\" (UID: \"7c59acce-b675-499c-8d37-a781a0bc8f04\") " pod="calico-system/calico-node-wn9x2" Sep 12 17:41:26.123387 kubelet[2519]: I0912 17:41:26.123110 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c59acce-b675-499c-8d37-a781a0bc8f04-lib-modules\") pod \"calico-node-wn9x2\" (UID: \"7c59acce-b675-499c-8d37-a781a0bc8f04\") " pod="calico-system/calico-node-wn9x2" Sep 12 17:41:26.123387 kubelet[2519]: I0912 17:41:26.123148 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7c59acce-b675-499c-8d37-a781a0bc8f04-policysync\") pod \"calico-node-wn9x2\" (UID: \"7c59acce-b675-499c-8d37-a781a0bc8f04\") " pod="calico-system/calico-node-wn9x2" Sep 12 17:41:26.123387 kubelet[2519]: I0912 17:41:26.123197 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7c59acce-b675-499c-8d37-a781a0bc8f04-cni-bin-dir\") pod \"calico-node-wn9x2\" (UID: \"7c59acce-b675-499c-8d37-a781a0bc8f04\") " pod="calico-system/calico-node-wn9x2" Sep 12 17:41:26.123387 kubelet[2519]: I0912 17:41:26.123210 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c59acce-b675-499c-8d37-a781a0bc8f04-xtables-lock\") pod \"calico-node-wn9x2\" (UID: \"7c59acce-b675-499c-8d37-a781a0bc8f04\") " pod="calico-system/calico-node-wn9x2" Sep 12 17:41:26.123387 kubelet[2519]: I0912 17:41:26.123229 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7c59acce-b675-499c-8d37-a781a0bc8f04-cni-log-dir\") pod \"calico-node-wn9x2\" (UID: \"7c59acce-b675-499c-8d37-a781a0bc8f04\") " pod="calico-system/calico-node-wn9x2" Sep 12 
17:41:26.123567 kubelet[2519]: I0912 17:41:26.123264 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7c59acce-b675-499c-8d37-a781a0bc8f04-node-certs\") pod \"calico-node-wn9x2\" (UID: \"7c59acce-b675-499c-8d37-a781a0bc8f04\") " pod="calico-system/calico-node-wn9x2" Sep 12 17:41:26.123567 kubelet[2519]: I0912 17:41:26.123300 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7c59acce-b675-499c-8d37-a781a0bc8f04-var-lib-calico\") pod \"calico-node-wn9x2\" (UID: \"7c59acce-b675-499c-8d37-a781a0bc8f04\") " pod="calico-system/calico-node-wn9x2" Sep 12 17:41:26.123567 kubelet[2519]: I0912 17:41:26.123330 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7c59acce-b675-499c-8d37-a781a0bc8f04-cni-net-dir\") pod \"calico-node-wn9x2\" (UID: \"7c59acce-b675-499c-8d37-a781a0bc8f04\") " pod="calico-system/calico-node-wn9x2" Sep 12 17:41:26.136426 systemd[1]: Started cri-containerd-259d0368a4afcc6a4e3cbd185fc4a69d77aaea792a3adef96c85fe2e2fdee7df.scope - libcontainer container 259d0368a4afcc6a4e3cbd185fc4a69d77aaea792a3adef96c85fe2e2fdee7df. Sep 12 17:41:26.176120 containerd[1456]: time="2025-09-12T17:41:26.176064649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-99ffdccfb-plsqd,Uid:9a7ab2e4-4adf-4aa8-99c0-fc337f807eb5,Namespace:calico-system,Attempt:0,} returns sandbox id \"259d0368a4afcc6a4e3cbd185fc4a69d77aaea792a3adef96c85fe2e2fdee7df\"" Sep 12 17:41:26.180442 kubelet[2519]: E0912 17:41:26.180408 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:26.184187 containerd[1456]: time="2025-09-12T17:41:26.184138355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:41:26.226426 kubelet[2519]: E0912 17:41:26.226205 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.226426 kubelet[2519]: W0912 17:41:26.226234 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.226426 kubelet[2519]: E0912 17:41:26.226289 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.226788 kubelet[2519]: E0912 17:41:26.226769 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.226925 kubelet[2519]: W0912 17:41:26.226862 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.226925 kubelet[2519]: E0912 17:41:26.226886 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.227208 kubelet[2519]: E0912 17:41:26.227189 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.227208 kubelet[2519]: W0912 17:41:26.227206 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.227357 kubelet[2519]: E0912 17:41:26.227220 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.229483 kubelet[2519]: E0912 17:41:26.228948 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.229483 kubelet[2519]: W0912 17:41:26.228969 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.229483 kubelet[2519]: E0912 17:41:26.228982 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.229483 kubelet[2519]: E0912 17:41:26.229207 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.229483 kubelet[2519]: W0912 17:41:26.229216 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.229483 kubelet[2519]: E0912 17:41:26.229227 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.229664 kubelet[2519]: E0912 17:41:26.229504 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.229664 kubelet[2519]: W0912 17:41:26.229541 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.229664 kubelet[2519]: E0912 17:41:26.229553 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.230142 kubelet[2519]: E0912 17:41:26.229845 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.230142 kubelet[2519]: W0912 17:41:26.229875 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.230142 kubelet[2519]: E0912 17:41:26.229889 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.230464 kubelet[2519]: E0912 17:41:26.230442 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.230464 kubelet[2519]: W0912 17:41:26.230460 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.230590 kubelet[2519]: E0912 17:41:26.230505 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.230861 kubelet[2519]: E0912 17:41:26.230819 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.230861 kubelet[2519]: W0912 17:41:26.230836 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.230925 kubelet[2519]: E0912 17:41:26.230864 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.231786 kubelet[2519]: E0912 17:41:26.231175 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.231786 kubelet[2519]: W0912 17:41:26.231191 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.231786 kubelet[2519]: E0912 17:41:26.231203 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.231786 kubelet[2519]: E0912 17:41:26.231519 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.231786 kubelet[2519]: W0912 17:41:26.231531 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.231786 kubelet[2519]: E0912 17:41:26.231545 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.232189 kubelet[2519]: E0912 17:41:26.231850 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.232189 kubelet[2519]: W0912 17:41:26.231861 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.232189 kubelet[2519]: E0912 17:41:26.231872 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.232189 kubelet[2519]: E0912 17:41:26.232177 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.232189 kubelet[2519]: W0912 17:41:26.232188 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.232324 kubelet[2519]: E0912 17:41:26.232202 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.232771 kubelet[2519]: E0912 17:41:26.232490 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.232771 kubelet[2519]: W0912 17:41:26.232506 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.232771 kubelet[2519]: E0912 17:41:26.232517 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.232937 kubelet[2519]: E0912 17:41:26.232916 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.232976 kubelet[2519]: W0912 17:41:26.232938 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.232976 kubelet[2519]: E0912 17:41:26.232952 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.233392 kubelet[2519]: E0912 17:41:26.233377 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.233436 kubelet[2519]: W0912 17:41:26.233392 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.233436 kubelet[2519]: E0912 17:41:26.233404 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.233718 kubelet[2519]: E0912 17:41:26.233704 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.233718 kubelet[2519]: W0912 17:41:26.233717 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.233718 kubelet[2519]: E0912 17:41:26.233729 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.234004 kubelet[2519]: E0912 17:41:26.233981 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.234004 kubelet[2519]: W0912 17:41:26.233994 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.234066 kubelet[2519]: E0912 17:41:26.234005 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.234320 kubelet[2519]: E0912 17:41:26.234297 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.234320 kubelet[2519]: W0912 17:41:26.234311 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.234441 kubelet[2519]: E0912 17:41:26.234323 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.234689 kubelet[2519]: E0912 17:41:26.234665 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.234689 kubelet[2519]: W0912 17:41:26.234685 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.234830 kubelet[2519]: E0912 17:41:26.234696 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.235134 kubelet[2519]: E0912 17:41:26.235113 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.235134 kubelet[2519]: W0912 17:41:26.235130 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.235201 kubelet[2519]: E0912 17:41:26.235142 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.235926 kubelet[2519]: E0912 17:41:26.235899 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.235968 kubelet[2519]: W0912 17:41:26.235914 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.235968 kubelet[2519]: E0912 17:41:26.235941 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.237531 kubelet[2519]: E0912 17:41:26.237505 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.237596 kubelet[2519]: W0912 17:41:26.237530 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.237596 kubelet[2519]: E0912 17:41:26.237557 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.314860 kubelet[2519]: E0912 17:41:26.314732 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lplbc" podUID="7bc16a13-1355-4530-b199-c12f8c96fcdd" Sep 12 17:41:26.320780 kubelet[2519]: E0912 17:41:26.320202 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.320780 kubelet[2519]: W0912 17:41:26.320233 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.320780 kubelet[2519]: E0912 17:41:26.320279 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.325428 kubelet[2519]: E0912 17:41:26.325229 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.325428 kubelet[2519]: W0912 17:41:26.325268 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.325428 kubelet[2519]: E0912 17:41:26.325290 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.327486 kubelet[2519]: E0912 17:41:26.327463 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.327486 kubelet[2519]: W0912 17:41:26.327482 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.327589 kubelet[2519]: E0912 17:41:26.327495 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.327809 kubelet[2519]: E0912 17:41:26.327790 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.327809 kubelet[2519]: W0912 17:41:26.327806 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.327860 kubelet[2519]: E0912 17:41:26.327820 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.328064 kubelet[2519]: E0912 17:41:26.328048 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.328064 kubelet[2519]: W0912 17:41:26.328060 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.328126 kubelet[2519]: E0912 17:41:26.328069 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.328389 kubelet[2519]: E0912 17:41:26.328371 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.328389 kubelet[2519]: W0912 17:41:26.328385 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.328450 kubelet[2519]: E0912 17:41:26.328395 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.328635 kubelet[2519]: E0912 17:41:26.328619 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.328635 kubelet[2519]: W0912 17:41:26.328632 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.328693 kubelet[2519]: E0912 17:41:26.328642 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.330179 kubelet[2519]: E0912 17:41:26.328837 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.330179 kubelet[2519]: W0912 17:41:26.328850 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.330179 kubelet[2519]: E0912 17:41:26.328859 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.330179 kubelet[2519]: E0912 17:41:26.329288 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.330179 kubelet[2519]: W0912 17:41:26.329298 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.330179 kubelet[2519]: E0912 17:41:26.329307 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.331096 kubelet[2519]: E0912 17:41:26.331078 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.331096 kubelet[2519]: W0912 17:41:26.331093 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.331173 kubelet[2519]: E0912 17:41:26.331104 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.331382 kubelet[2519]: E0912 17:41:26.331366 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.331382 kubelet[2519]: W0912 17:41:26.331380 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.331459 kubelet[2519]: E0912 17:41:26.331390 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.331646 kubelet[2519]: E0912 17:41:26.331621 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.331646 kubelet[2519]: W0912 17:41:26.331633 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.331646 kubelet[2519]: E0912 17:41:26.331643 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.331901 kubelet[2519]: E0912 17:41:26.331878 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.331901 kubelet[2519]: W0912 17:41:26.331891 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.331901 kubelet[2519]: E0912 17:41:26.331900 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.332145 kubelet[2519]: E0912 17:41:26.332121 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.332145 kubelet[2519]: W0912 17:41:26.332134 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.332145 kubelet[2519]: E0912 17:41:26.332145 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.332422 kubelet[2519]: E0912 17:41:26.332406 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.332422 kubelet[2519]: W0912 17:41:26.332419 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.332503 kubelet[2519]: E0912 17:41:26.332429 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.332662 kubelet[2519]: E0912 17:41:26.332646 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.332662 kubelet[2519]: W0912 17:41:26.332659 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.332726 kubelet[2519]: E0912 17:41:26.332669 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.332980 kubelet[2519]: E0912 17:41:26.332961 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.332980 kubelet[2519]: W0912 17:41:26.332975 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.333141 kubelet[2519]: E0912 17:41:26.332984 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.336055 kubelet[2519]: E0912 17:41:26.335488 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.336055 kubelet[2519]: W0912 17:41:26.335506 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.336055 kubelet[2519]: E0912 17:41:26.335520 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.336055 kubelet[2519]: E0912 17:41:26.335850 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.336055 kubelet[2519]: W0912 17:41:26.335859 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.336055 kubelet[2519]: E0912 17:41:26.335870 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.336765 kubelet[2519]: E0912 17:41:26.336749 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.336765 kubelet[2519]: W0912 17:41:26.336762 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.336819 kubelet[2519]: E0912 17:41:26.336771 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.337598 kubelet[2519]: E0912 17:41:26.337283 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.337598 kubelet[2519]: W0912 17:41:26.337296 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.337598 kubelet[2519]: E0912 17:41:26.337306 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.337598 kubelet[2519]: I0912 17:41:26.337328 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7bc16a13-1355-4530-b199-c12f8c96fcdd-socket-dir\") pod \"csi-node-driver-lplbc\" (UID: \"7bc16a13-1355-4530-b199-c12f8c96fcdd\") " pod="calico-system/csi-node-driver-lplbc" Sep 12 17:41:26.338430 kubelet[2519]: E0912 17:41:26.337769 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.338547 kubelet[2519]: W0912 17:41:26.338530 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.338654 kubelet[2519]: E0912 17:41:26.338628 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.338915 kubelet[2519]: I0912 17:41:26.338812 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7bc16a13-1355-4530-b199-c12f8c96fcdd-kubelet-dir\") pod \"csi-node-driver-lplbc\" (UID: \"7bc16a13-1355-4530-b199-c12f8c96fcdd\") " pod="calico-system/csi-node-driver-lplbc" Sep 12 17:41:26.339188 kubelet[2519]: E0912 17:41:26.339080 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.339188 kubelet[2519]: W0912 17:41:26.339093 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.339188 kubelet[2519]: E0912 17:41:26.339103 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.339188 kubelet[2519]: I0912 17:41:26.339156 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pn49\" (UniqueName: \"kubernetes.io/projected/7bc16a13-1355-4530-b199-c12f8c96fcdd-kube-api-access-9pn49\") pod \"csi-node-driver-lplbc\" (UID: \"7bc16a13-1355-4530-b199-c12f8c96fcdd\") " pod="calico-system/csi-node-driver-lplbc" Sep 12 17:41:26.339786 kubelet[2519]: E0912 17:41:26.339767 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.339786 kubelet[2519]: W0912 17:41:26.339782 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.339845 kubelet[2519]: E0912 17:41:26.339793 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.340550 kubelet[2519]: E0912 17:41:26.340530 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.340550 kubelet[2519]: W0912 17:41:26.340544 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.340643 kubelet[2519]: E0912 17:41:26.340554 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.340913 kubelet[2519]: E0912 17:41:26.340786 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.340913 kubelet[2519]: W0912 17:41:26.340798 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.340913 kubelet[2519]: E0912 17:41:26.340807 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.340995 kubelet[2519]: I0912 17:41:26.340938 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7bc16a13-1355-4530-b199-c12f8c96fcdd-registration-dir\") pod \"csi-node-driver-lplbc\" (UID: \"7bc16a13-1355-4530-b199-c12f8c96fcdd\") " pod="calico-system/csi-node-driver-lplbc" Sep 12 17:41:26.342538 kubelet[2519]: E0912 17:41:26.342519 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.342538 kubelet[2519]: W0912 17:41:26.342535 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.342638 kubelet[2519]: E0912 17:41:26.342546 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.342838 kubelet[2519]: E0912 17:41:26.342824 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.342964 kubelet[2519]: W0912 17:41:26.342889 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.342964 kubelet[2519]: E0912 17:41:26.342904 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.343209 kubelet[2519]: E0912 17:41:26.343197 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.343502 kubelet[2519]: W0912 17:41:26.343327 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.343502 kubelet[2519]: E0912 17:41:26.343350 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.343894 kubelet[2519]: E0912 17:41:26.343881 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.343973 kubelet[2519]: W0912 17:41:26.343959 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.344051 kubelet[2519]: E0912 17:41:26.344037 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.345652 kubelet[2519]: E0912 17:41:26.345638 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.345744 kubelet[2519]: W0912 17:41:26.345717 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.345744 kubelet[2519]: E0912 17:41:26.345732 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.345866 kubelet[2519]: I0912 17:41:26.345824 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7bc16a13-1355-4530-b199-c12f8c96fcdd-varrun\") pod \"csi-node-driver-lplbc\" (UID: \"7bc16a13-1355-4530-b199-c12f8c96fcdd\") " pod="calico-system/csi-node-driver-lplbc" Sep 12 17:41:26.346214 kubelet[2519]: E0912 17:41:26.346100 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.346214 kubelet[2519]: W0912 17:41:26.346112 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.346214 kubelet[2519]: E0912 17:41:26.346124 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.354732 kubelet[2519]: E0912 17:41:26.354645 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.354945 kubelet[2519]: W0912 17:41:26.354898 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.354945 kubelet[2519]: E0912 17:41:26.354927 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.355417 kubelet[2519]: E0912 17:41:26.355353 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.355417 kubelet[2519]: W0912 17:41:26.355389 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.355417 kubelet[2519]: E0912 17:41:26.355401 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.356723 kubelet[2519]: E0912 17:41:26.356706 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.357057 kubelet[2519]: W0912 17:41:26.356987 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.357057 kubelet[2519]: E0912 17:41:26.357003 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.364128 containerd[1456]: time="2025-09-12T17:41:26.364072399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wn9x2,Uid:7c59acce-b675-499c-8d37-a781a0bc8f04,Namespace:calico-system,Attempt:0,}" Sep 12 17:41:26.397055 containerd[1456]: time="2025-09-12T17:41:26.396906038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:26.397055 containerd[1456]: time="2025-09-12T17:41:26.397027391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:26.397055 containerd[1456]: time="2025-09-12T17:41:26.397045776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:26.397264 containerd[1456]: time="2025-09-12T17:41:26.397190212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:26.423435 systemd[1]: Started cri-containerd-f49a011f5d70ec08b1bb67ee734a307276d1d0a1fa864132646ef4f6ecf447c2.scope - libcontainer container f49a011f5d70ec08b1bb67ee734a307276d1d0a1fa864132646ef4f6ecf447c2. Sep 12 17:41:26.447070 kubelet[2519]: E0912 17:41:26.446863 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.447070 kubelet[2519]: W0912 17:41:26.446893 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.447070 kubelet[2519]: E0912 17:41:26.446921 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.447498 kubelet[2519]: E0912 17:41:26.447415 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.447498 kubelet[2519]: W0912 17:41:26.447432 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.447498 kubelet[2519]: E0912 17:41:26.447444 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.448662 kubelet[2519]: E0912 17:41:26.448595 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.448662 kubelet[2519]: W0912 17:41:26.448635 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.448766 kubelet[2519]: E0912 17:41:26.448668 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.450566 kubelet[2519]: E0912 17:41:26.450531 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.450566 kubelet[2519]: W0912 17:41:26.450555 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.450566 kubelet[2519]: E0912 17:41:26.450590 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.450993 kubelet[2519]: E0912 17:41:26.450951 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.450993 kubelet[2519]: W0912 17:41:26.450971 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.450993 kubelet[2519]: E0912 17:41:26.450983 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.451459 kubelet[2519]: E0912 17:41:26.451429 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.451459 kubelet[2519]: W0912 17:41:26.451449 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.451459 kubelet[2519]: E0912 17:41:26.451462 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.454406 kubelet[2519]: E0912 17:41:26.454363 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.454406 kubelet[2519]: W0912 17:41:26.454393 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.454518 kubelet[2519]: E0912 17:41:26.454443 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.455395 kubelet[2519]: E0912 17:41:26.454828 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.455395 kubelet[2519]: W0912 17:41:26.454841 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.455395 kubelet[2519]: E0912 17:41:26.454851 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.456715 kubelet[2519]: E0912 17:41:26.456673 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.456715 kubelet[2519]: W0912 17:41:26.456690 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.456715 kubelet[2519]: E0912 17:41:26.456701 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.457079 kubelet[2519]: E0912 17:41:26.457055 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.457079 kubelet[2519]: W0912 17:41:26.457073 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.457160 kubelet[2519]: E0912 17:41:26.457091 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.457530 kubelet[2519]: E0912 17:41:26.457507 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.457530 kubelet[2519]: W0912 17:41:26.457521 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.457530 kubelet[2519]: E0912 17:41:26.457533 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.458957 kubelet[2519]: E0912 17:41:26.458835 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.458957 kubelet[2519]: W0912 17:41:26.458863 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.458957 kubelet[2519]: E0912 17:41:26.458876 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.464024 kubelet[2519]: E0912 17:41:26.462520 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.464024 kubelet[2519]: W0912 17:41:26.462547 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.464024 kubelet[2519]: E0912 17:41:26.462583 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.464024 kubelet[2519]: E0912 17:41:26.463847 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.464024 kubelet[2519]: W0912 17:41:26.463862 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.464024 kubelet[2519]: E0912 17:41:26.463875 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.464468 kubelet[2519]: E0912 17:41:26.464212 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.464468 kubelet[2519]: W0912 17:41:26.464223 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.464468 kubelet[2519]: E0912 17:41:26.464288 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.464690 kubelet[2519]: E0912 17:41:26.464667 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.464690 kubelet[2519]: W0912 17:41:26.464682 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.464800 kubelet[2519]: E0912 17:41:26.464695 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.465092 kubelet[2519]: E0912 17:41:26.465068 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.465092 kubelet[2519]: W0912 17:41:26.465086 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.465185 kubelet[2519]: E0912 17:41:26.465099 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.465435 kubelet[2519]: E0912 17:41:26.465409 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.465435 kubelet[2519]: W0912 17:41:26.465428 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.465512 kubelet[2519]: E0912 17:41:26.465441 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.467350 kubelet[2519]: E0912 17:41:26.467332 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.467350 kubelet[2519]: W0912 17:41:26.467346 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.467447 kubelet[2519]: E0912 17:41:26.467357 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.467783 kubelet[2519]: E0912 17:41:26.467760 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.467783 kubelet[2519]: W0912 17:41:26.467773 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.467783 kubelet[2519]: E0912 17:41:26.467783 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.469204 kubelet[2519]: E0912 17:41:26.469173 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.469204 kubelet[2519]: W0912 17:41:26.469187 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.469204 kubelet[2519]: E0912 17:41:26.469198 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.469540 kubelet[2519]: E0912 17:41:26.469517 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.469540 kubelet[2519]: W0912 17:41:26.469529 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.469540 kubelet[2519]: E0912 17:41:26.469540 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:26.469973 kubelet[2519]: E0912 17:41:26.469954 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.469973 kubelet[2519]: W0912 17:41:26.469966 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.469973 kubelet[2519]: E0912 17:41:26.469976 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.470479 kubelet[2519]: E0912 17:41:26.470454 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.470479 kubelet[2519]: W0912 17:41:26.470474 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.470564 kubelet[2519]: E0912 17:41:26.470493 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.470993 containerd[1456]: time="2025-09-12T17:41:26.470954229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wn9x2,Uid:7c59acce-b675-499c-8d37-a781a0bc8f04,Namespace:calico-system,Attempt:0,} returns sandbox id \"f49a011f5d70ec08b1bb67ee734a307276d1d0a1fa864132646ef4f6ecf447c2\"" Sep 12 17:41:26.471860 kubelet[2519]: E0912 17:41:26.471837 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.471860 kubelet[2519]: W0912 17:41:26.471853 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.471860 kubelet[2519]: E0912 17:41:26.471864 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:26.474585 kubelet[2519]: E0912 17:41:26.473287 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:26.474585 kubelet[2519]: W0912 17:41:26.473310 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:26.474585 kubelet[2519]: E0912 17:41:26.473325 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:27.966713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1140286316.mount: Deactivated successfully. 
Sep 12 17:41:28.326101 kubelet[2519]: E0912 17:41:28.325778 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lplbc" podUID="7bc16a13-1355-4530-b199-c12f8c96fcdd" Sep 12 17:41:28.404921 containerd[1456]: time="2025-09-12T17:41:28.404838042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:28.405829 containerd[1456]: time="2025-09-12T17:41:28.405719952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 17:41:28.406975 containerd[1456]: time="2025-09-12T17:41:28.406928376Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:28.409783 containerd[1456]: time="2025-09-12T17:41:28.409710820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:28.410769 containerd[1456]: time="2025-09-12T17:41:28.410705759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.226523414s" Sep 12 17:41:28.410769 containerd[1456]: time="2025-09-12T17:41:28.410761452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:41:28.412923 containerd[1456]: time="2025-09-12T17:41:28.412876061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:41:28.429382 containerd[1456]: time="2025-09-12T17:41:28.429337052Z" level=info msg="CreateContainer within sandbox \"259d0368a4afcc6a4e3cbd185fc4a69d77aaea792a3adef96c85fe2e2fdee7df\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:41:28.530935 containerd[1456]: time="2025-09-12T17:41:28.530877616Z" level=info msg="CreateContainer within sandbox \"259d0368a4afcc6a4e3cbd185fc4a69d77aaea792a3adef96c85fe2e2fdee7df\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"75d056da341dea30aac9c8d3afab99a7277a62bbb10d458807f993cb4d7afcf6\"" Sep 12 17:41:28.531624 containerd[1456]: time="2025-09-12T17:41:28.531593559Z" level=info msg="StartContainer for \"75d056da341dea30aac9c8d3afab99a7277a62bbb10d458807f993cb4d7afcf6\"" Sep 12 17:41:28.558415 systemd[1]: Started cri-containerd-75d056da341dea30aac9c8d3afab99a7277a62bbb10d458807f993cb4d7afcf6.scope - libcontainer container 75d056da341dea30aac9c8d3afab99a7277a62bbb10d458807f993cb4d7afcf6. 
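[Editor's note] The containerd lines above record a pull-by-tag being resolved to content-addressed identifiers: ghcr.io/flatcar/calico/typha:v3.30.3 is fetched, pinned to image id sha256:1d7bb7b0cce2... and repo digest sha256:f4a3d61ffda9..., and the pull completes in roughly 2.23 s, after which the calico-typha container is created in the existing sandbox and started. When skimming longer logs it can help to pull those fields out programmatically; the snippet below is a small sketch whose regex mirrors the exact "Pulled image ..." message wording shown here (the parse_pulled helper is hypothetical, and other containerd releases may phrase the message differently).

    #!/usr/bin/env python3
    # Sketch: extract image reference, repo digest and pull duration from a
    # containerd "Pulled image ..." journal line like the one above.
    import re

    PULLED_RE = re.compile(
        r'Pulled image "(?P<ref>[^"]+)" with image id "(?P<image_id>[^"]+)"'
        r'.*repo digest "(?P<digest>[^"]+)".* in (?P<duration>[0-9.]+\w*)'
    )

    def parse_pulled(line):
        # journald shows the inner quotes escaped as \", so undo that first.
        match = PULLED_RE.search(line.replace('\\"', '"'))
        return match.groupdict() if match else None

    if __name__ == "__main__":
        # Digests abbreviated here for readability; the log above has the full values.
        sample = (
            'msg="Pulled image \\"ghcr.io/flatcar/calico/typha:v3.30.3\\" '
            'with image id \\"sha256:1d7bb7b0cce2...\\", '
            'repo tag \\"ghcr.io/flatcar/calico/typha:v3.30.3\\", '
            'repo digest \\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61f...\\", '
            'size \\"35237243\\" in 2.226523414s"'
        )
        print(parse_pulled(sample))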
Sep 12 17:41:28.737907 containerd[1456]: time="2025-09-12T17:41:28.737564981Z" level=info msg="StartContainer for \"75d056da341dea30aac9c8d3afab99a7277a62bbb10d458807f993cb4d7afcf6\" returns successfully" Sep 12 17:41:29.434997 kubelet[2519]: E0912 17:41:29.434948 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:29.458836 kubelet[2519]: E0912 17:41:29.458787 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.458836 kubelet[2519]: W0912 17:41:29.458813 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.458836 kubelet[2519]: E0912 17:41:29.458838 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.459188 kubelet[2519]: E0912 17:41:29.459143 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.459188 kubelet[2519]: W0912 17:41:29.459178 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.459306 kubelet[2519]: E0912 17:41:29.459207 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.459640 kubelet[2519]: E0912 17:41:29.459620 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.459640 kubelet[2519]: W0912 17:41:29.459633 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.459640 kubelet[2519]: E0912 17:41:29.459641 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.459947 kubelet[2519]: E0912 17:41:29.459921 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.459947 kubelet[2519]: W0912 17:41:29.459932 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.459947 kubelet[2519]: E0912 17:41:29.459942 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:29.460258 kubelet[2519]: E0912 17:41:29.460219 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.460311 kubelet[2519]: W0912 17:41:29.460231 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.460311 kubelet[2519]: E0912 17:41:29.460272 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.460542 kubelet[2519]: E0912 17:41:29.460525 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.460542 kubelet[2519]: W0912 17:41:29.460536 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.460611 kubelet[2519]: E0912 17:41:29.460544 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.460756 kubelet[2519]: E0912 17:41:29.460740 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.460756 kubelet[2519]: W0912 17:41:29.460751 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.460832 kubelet[2519]: E0912 17:41:29.460759 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.460960 kubelet[2519]: E0912 17:41:29.460944 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.460960 kubelet[2519]: W0912 17:41:29.460955 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.461026 kubelet[2519]: E0912 17:41:29.460963 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.461168 kubelet[2519]: E0912 17:41:29.461152 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.461168 kubelet[2519]: W0912 17:41:29.461162 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.461268 kubelet[2519]: E0912 17:41:29.461171 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:29.461421 kubelet[2519]: E0912 17:41:29.461396 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.461421 kubelet[2519]: W0912 17:41:29.461407 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.461488 kubelet[2519]: E0912 17:41:29.461432 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.461672 kubelet[2519]: E0912 17:41:29.461653 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.461672 kubelet[2519]: W0912 17:41:29.461668 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.461743 kubelet[2519]: E0912 17:41:29.461677 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.461896 kubelet[2519]: E0912 17:41:29.461878 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.461896 kubelet[2519]: W0912 17:41:29.461889 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.461896 kubelet[2519]: E0912 17:41:29.461897 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.462115 kubelet[2519]: E0912 17:41:29.462100 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.462115 kubelet[2519]: W0912 17:41:29.462111 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.462189 kubelet[2519]: E0912 17:41:29.462119 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.462332 kubelet[2519]: E0912 17:41:29.462313 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.462332 kubelet[2519]: W0912 17:41:29.462326 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.462332 kubelet[2519]: E0912 17:41:29.462335 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:29.462644 kubelet[2519]: E0912 17:41:29.462620 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.462644 kubelet[2519]: W0912 17:41:29.462639 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.462741 kubelet[2519]: E0912 17:41:29.462654 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.477095 kubelet[2519]: E0912 17:41:29.477061 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.477095 kubelet[2519]: W0912 17:41:29.477083 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.477095 kubelet[2519]: E0912 17:41:29.477102 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.477579 kubelet[2519]: E0912 17:41:29.477543 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.477579 kubelet[2519]: W0912 17:41:29.477574 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.477683 kubelet[2519]: E0912 17:41:29.477605 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.478028 kubelet[2519]: E0912 17:41:29.477980 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.478028 kubelet[2519]: W0912 17:41:29.477994 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.478028 kubelet[2519]: E0912 17:41:29.478003 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.478603 kubelet[2519]: E0912 17:41:29.478562 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.478603 kubelet[2519]: W0912 17:41:29.478598 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.478759 kubelet[2519]: E0912 17:41:29.478629 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:29.479000 kubelet[2519]: E0912 17:41:29.478955 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.479000 kubelet[2519]: W0912 17:41:29.478974 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.479000 kubelet[2519]: E0912 17:41:29.478986 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.479497 kubelet[2519]: E0912 17:41:29.479320 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.479497 kubelet[2519]: W0912 17:41:29.479334 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.479497 kubelet[2519]: E0912 17:41:29.479345 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.479683 kubelet[2519]: E0912 17:41:29.479664 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.479741 kubelet[2519]: W0912 17:41:29.479683 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.479741 kubelet[2519]: E0912 17:41:29.479699 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.480053 kubelet[2519]: E0912 17:41:29.480037 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.480105 kubelet[2519]: W0912 17:41:29.480052 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.480105 kubelet[2519]: E0912 17:41:29.480065 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.480397 kubelet[2519]: E0912 17:41:29.480382 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.480397 kubelet[2519]: W0912 17:41:29.480395 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.480478 kubelet[2519]: E0912 17:41:29.480407 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:29.481343 kubelet[2519]: E0912 17:41:29.481311 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.481343 kubelet[2519]: W0912 17:41:29.481341 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.481437 kubelet[2519]: E0912 17:41:29.481357 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.481833 kubelet[2519]: E0912 17:41:29.481696 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.481833 kubelet[2519]: W0912 17:41:29.481712 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.481833 kubelet[2519]: E0912 17:41:29.481725 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.482262 kubelet[2519]: E0912 17:41:29.482165 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.482262 kubelet[2519]: W0912 17:41:29.482179 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.482262 kubelet[2519]: E0912 17:41:29.482207 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.482675 kubelet[2519]: E0912 17:41:29.482625 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.482675 kubelet[2519]: W0912 17:41:29.482638 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.482783 kubelet[2519]: E0912 17:41:29.482649 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.484357 kubelet[2519]: E0912 17:41:29.484318 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.484357 kubelet[2519]: W0912 17:41:29.484341 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.484357 kubelet[2519]: E0912 17:41:29.484357 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:29.485143 kubelet[2519]: E0912 17:41:29.484962 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.485143 kubelet[2519]: W0912 17:41:29.484977 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.485143 kubelet[2519]: E0912 17:41:29.484992 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.485480 kubelet[2519]: E0912 17:41:29.485465 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.485626 kubelet[2519]: W0912 17:41:29.485557 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.485626 kubelet[2519]: E0912 17:41:29.485580 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.486693 kubelet[2519]: E0912 17:41:29.486665 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.486693 kubelet[2519]: W0912 17:41:29.486688 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.486808 kubelet[2519]: E0912 17:41:29.486703 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:29.487130 kubelet[2519]: E0912 17:41:29.487110 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:29.487130 kubelet[2519]: W0912 17:41:29.487126 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:29.487227 kubelet[2519]: E0912 17:41:29.487139 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:30.322996 kubelet[2519]: E0912 17:41:30.322925 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lplbc" podUID="7bc16a13-1355-4530-b199-c12f8c96fcdd" Sep 12 17:41:31.123217 kubelet[2519]: I0912 17:41:31.123110 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:41:31.123787 kubelet[2519]: E0912 17:41:31.123617 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:31.174317 kubelet[2519]: E0912 17:41:31.174277 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.174317 kubelet[2519]: W0912 17:41:31.174303 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.174681 kubelet[2519]: E0912 17:41:31.174331 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.174681 kubelet[2519]: E0912 17:41:31.174583 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.174681 kubelet[2519]: W0912 17:41:31.174593 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.174681 kubelet[2519]: E0912 17:41:31.174603 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.174889 kubelet[2519]: E0912 17:41:31.174867 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.174889 kubelet[2519]: W0912 17:41:31.174877 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.174889 kubelet[2519]: E0912 17:41:31.174885 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.175165 kubelet[2519]: E0912 17:41:31.175140 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.175165 kubelet[2519]: W0912 17:41:31.175151 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.175165 kubelet[2519]: E0912 17:41:31.175161 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:31.175479 kubelet[2519]: E0912 17:41:31.175452 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.175479 kubelet[2519]: W0912 17:41:31.175474 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.175599 kubelet[2519]: E0912 17:41:31.175488 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.175759 kubelet[2519]: E0912 17:41:31.175744 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.175759 kubelet[2519]: W0912 17:41:31.175754 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.175844 kubelet[2519]: E0912 17:41:31.175763 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.175963 kubelet[2519]: E0912 17:41:31.175949 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.175963 kubelet[2519]: W0912 17:41:31.175958 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.176047 kubelet[2519]: E0912 17:41:31.175966 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.176158 kubelet[2519]: E0912 17:41:31.176144 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.176158 kubelet[2519]: W0912 17:41:31.176153 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.176258 kubelet[2519]: E0912 17:41:31.176161 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.176428 kubelet[2519]: E0912 17:41:31.176408 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.176428 kubelet[2519]: W0912 17:41:31.176425 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.176526 kubelet[2519]: E0912 17:41:31.176442 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:31.176705 kubelet[2519]: E0912 17:41:31.176685 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.176705 kubelet[2519]: W0912 17:41:31.176698 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.176821 kubelet[2519]: E0912 17:41:31.176710 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.176888 kubelet[2519]: E0912 17:41:31.176873 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.176888 kubelet[2519]: W0912 17:41:31.176882 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.176990 kubelet[2519]: E0912 17:41:31.176890 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.177112 kubelet[2519]: E0912 17:41:31.177092 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.177112 kubelet[2519]: W0912 17:41:31.177108 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.177182 kubelet[2519]: E0912 17:41:31.177122 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.177403 kubelet[2519]: E0912 17:41:31.177387 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.177403 kubelet[2519]: W0912 17:41:31.177402 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.177504 kubelet[2519]: E0912 17:41:31.177415 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.177689 kubelet[2519]: E0912 17:41:31.177673 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.177689 kubelet[2519]: W0912 17:41:31.177686 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.177745 kubelet[2519]: E0912 17:41:31.177697 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:31.177939 kubelet[2519]: E0912 17:41:31.177925 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.177971 kubelet[2519]: W0912 17:41:31.177937 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.177971 kubelet[2519]: E0912 17:41:31.177948 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.188381 kubelet[2519]: E0912 17:41:31.188354 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.188381 kubelet[2519]: W0912 17:41:31.188375 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.188516 kubelet[2519]: E0912 17:41:31.188391 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.188690 kubelet[2519]: E0912 17:41:31.188662 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.188690 kubelet[2519]: W0912 17:41:31.188678 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.189804 kubelet[2519]: E0912 17:41:31.188690 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.189804 kubelet[2519]: E0912 17:41:31.189098 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.189804 kubelet[2519]: W0912 17:41:31.189111 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.189804 kubelet[2519]: E0912 17:41:31.189134 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.189804 kubelet[2519]: E0912 17:41:31.189626 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.189804 kubelet[2519]: W0912 17:41:31.189683 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.189804 kubelet[2519]: E0912 17:41:31.189709 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:31.190686 kubelet[2519]: E0912 17:41:31.190665 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.190686 kubelet[2519]: W0912 17:41:31.190683 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.190686 kubelet[2519]: E0912 17:41:31.190695 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.191003 kubelet[2519]: E0912 17:41:31.190981 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.191003 kubelet[2519]: W0912 17:41:31.190998 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.191122 kubelet[2519]: E0912 17:41:31.191012 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.191354 kubelet[2519]: E0912 17:41:31.191303 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.191354 kubelet[2519]: W0912 17:41:31.191315 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.191354 kubelet[2519]: E0912 17:41:31.191325 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.191567 kubelet[2519]: E0912 17:41:31.191551 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.191611 kubelet[2519]: W0912 17:41:31.191563 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.191611 kubelet[2519]: E0912 17:41:31.191602 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.191884 kubelet[2519]: E0912 17:41:31.191866 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.191884 kubelet[2519]: W0912 17:41:31.191881 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.191974 kubelet[2519]: E0912 17:41:31.191894 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:31.192257 kubelet[2519]: E0912 17:41:31.192219 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.192257 kubelet[2519]: W0912 17:41:31.192235 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.192332 kubelet[2519]: E0912 17:41:31.192266 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.192563 kubelet[2519]: E0912 17:41:31.192543 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.192563 kubelet[2519]: W0912 17:41:31.192558 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.192628 kubelet[2519]: E0912 17:41:31.192569 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.192882 kubelet[2519]: E0912 17:41:31.192864 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.192882 kubelet[2519]: W0912 17:41:31.192879 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.192955 kubelet[2519]: E0912 17:41:31.192889 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.193211 kubelet[2519]: E0912 17:41:31.193185 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.193211 kubelet[2519]: W0912 17:41:31.193203 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.193211 kubelet[2519]: E0912 17:41:31.193215 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.193570 kubelet[2519]: E0912 17:41:31.193547 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.193570 kubelet[2519]: W0912 17:41:31.193564 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.193673 kubelet[2519]: E0912 17:41:31.193578 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:31.193897 kubelet[2519]: E0912 17:41:31.193874 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.193897 kubelet[2519]: W0912 17:41:31.193893 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.193897 kubelet[2519]: E0912 17:41:31.193905 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.194187 kubelet[2519]: E0912 17:41:31.194170 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.194187 kubelet[2519]: W0912 17:41:31.194182 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.194297 kubelet[2519]: E0912 17:41:31.194194 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.194518 kubelet[2519]: E0912 17:41:31.194497 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.194518 kubelet[2519]: W0912 17:41:31.194514 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.194619 kubelet[2519]: E0912 17:41:31.194531 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:41:31.194798 kubelet[2519]: E0912 17:41:31.194780 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:41:31.194798 kubelet[2519]: W0912 17:41:31.194793 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:41:31.194868 kubelet[2519]: E0912 17:41:31.194802 2519 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:41:31.581567 containerd[1456]: time="2025-09-12T17:41:31.581480831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:31.618293 containerd[1456]: time="2025-09-12T17:41:31.618150272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:41:31.641068 containerd[1456]: time="2025-09-12T17:41:31.640981786Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:31.665492 containerd[1456]: time="2025-09-12T17:41:31.665383402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:31.665971 containerd[1456]: time="2025-09-12T17:41:31.665925248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 3.25300851s" Sep 12 17:41:31.666033 containerd[1456]: time="2025-09-12T17:41:31.665971656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:41:31.734221 containerd[1456]: time="2025-09-12T17:41:31.734163248Z" level=info msg="CreateContainer within sandbox \"f49a011f5d70ec08b1bb67ee734a307276d1d0a1fa864132646ef4f6ecf447c2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:41:31.777046 containerd[1456]: time="2025-09-12T17:41:31.776867249Z" level=info msg="CreateContainer within sandbox \"f49a011f5d70ec08b1bb67ee734a307276d1d0a1fa864132646ef4f6ecf447c2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3c086dc7142c06b6d8d8c3d89e03751a1b6e95ea17ace0bfc668385db6468315\"" Sep 12 17:41:31.778095 containerd[1456]: time="2025-09-12T17:41:31.778007238Z" level=info msg="StartContainer for \"3c086dc7142c06b6d8d8c3d89e03751a1b6e95ea17ace0bfc668385db6468315\"" Sep 12 17:41:31.825490 systemd[1]: Started cri-containerd-3c086dc7142c06b6d8d8c3d89e03751a1b6e95ea17ace0bfc668385db6468315.scope - libcontainer container 3c086dc7142c06b6d8d8c3d89e03751a1b6e95ea17ace0bfc668385db6468315. Sep 12 17:41:31.865939 containerd[1456]: time="2025-09-12T17:41:31.865761253Z" level=info msg="StartContainer for \"3c086dc7142c06b6d8d8c3d89e03751a1b6e95ea17ace0bfc668385db6468315\" returns successfully" Sep 12 17:41:31.877172 systemd[1]: cri-containerd-3c086dc7142c06b6d8d8c3d89e03751a1b6e95ea17ace0bfc668385db6468315.scope: Deactivated successfully. Sep 12 17:41:31.902849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c086dc7142c06b6d8d8c3d89e03751a1b6e95ea17ace0bfc668385db6468315-rootfs.mount: Deactivated successfully. 
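The repeated kubelet errors above trace the FlexVolume driver-call convention: the kubelet executes the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument `init` and unmarshals its stdout as JSON. Because the binary is not yet installed, stdout is empty and the unmarshal fails with "unexpected end of JSON input" until the flexvol-driver container pulled above puts the driver in place. The following is a minimal, hypothetical sketch of the `init` handshake that probe expects; it is not the real nodeagent~uds driver.

```go
// Minimal sketch of a FlexVolume driver's "init" handshake, assuming the
// driver-call convention seen in the log: the kubelet runs the driver binary
// with a subcommand and parses its stdout as JSON. Illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet unmarshals after a driver call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// An empty stdout at this point is exactly what produces
		// "unexpected end of JSON input" in the kubelet log above.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}
```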
Sep 12 17:41:31.947924 containerd[1456]: time="2025-09-12T17:41:31.947607473Z" level=info msg="shim disconnected" id=3c086dc7142c06b6d8d8c3d89e03751a1b6e95ea17ace0bfc668385db6468315 namespace=k8s.io Sep 12 17:41:31.947924 containerd[1456]: time="2025-09-12T17:41:31.947692114Z" level=warning msg="cleaning up after shim disconnected" id=3c086dc7142c06b6d8d8c3d89e03751a1b6e95ea17ace0bfc668385db6468315 namespace=k8s.io Sep 12 17:41:31.947924 containerd[1456]: time="2025-09-12T17:41:31.947701452Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:41:32.322932 kubelet[2519]: E0912 17:41:32.322847 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lplbc" podUID="7bc16a13-1355-4530-b199-c12f8c96fcdd" Sep 12 17:41:32.441287 containerd[1456]: time="2025-09-12T17:41:32.441214964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:41:32.456435 kubelet[2519]: I0912 17:41:32.454918 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-99ffdccfb-plsqd" podStartSLOduration=5.223194193 podStartE2EDuration="7.454900526s" podCreationTimestamp="2025-09-12 17:41:25 +0000 UTC" firstStartedPulling="2025-09-12 17:41:26.180925107 +0000 UTC m=+18.957122727" lastFinishedPulling="2025-09-12 17:41:28.41263144 +0000 UTC m=+21.188829060" observedRunningTime="2025-09-12 17:41:29.451087652 +0000 UTC m=+22.227285282" watchObservedRunningTime="2025-09-12 17:41:32.454900526 +0000 UTC m=+25.231098147" Sep 12 17:41:34.322479 kubelet[2519]: E0912 17:41:34.322417 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lplbc" podUID="7bc16a13-1355-4530-b199-c12f8c96fcdd" Sep 12 17:41:35.773774 containerd[1456]: time="2025-09-12T17:41:35.773720037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:35.774568 containerd[1456]: time="2025-09-12T17:41:35.774531776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:41:35.775754 containerd[1456]: time="2025-09-12T17:41:35.775714091Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:35.778152 containerd[1456]: time="2025-09-12T17:41:35.778100375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:35.778956 containerd[1456]: time="2025-09-12T17:41:35.778926922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.337653084s" Sep 12 17:41:35.779021 containerd[1456]: time="2025-09-12T17:41:35.778958983Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:41:35.785299 containerd[1456]: time="2025-09-12T17:41:35.785256469Z" level=info msg="CreateContainer within sandbox \"f49a011f5d70ec08b1bb67ee734a307276d1d0a1fa864132646ef4f6ecf447c2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:41:35.800780 containerd[1456]: time="2025-09-12T17:41:35.800733380Z" level=info msg="CreateContainer within sandbox \"f49a011f5d70ec08b1bb67ee734a307276d1d0a1fa864132646ef4f6ecf447c2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3f8cb34ae672752908f173baa61b22a1a6c42e8359c1e1c0a53e2f6b9b16fff9\"" Sep 12 17:41:35.801625 containerd[1456]: time="2025-09-12T17:41:35.801317986Z" level=info msg="StartContainer for \"3f8cb34ae672752908f173baa61b22a1a6c42e8359c1e1c0a53e2f6b9b16fff9\"" Sep 12 17:41:35.836447 systemd[1]: Started cri-containerd-3f8cb34ae672752908f173baa61b22a1a6c42e8359c1e1c0a53e2f6b9b16fff9.scope - libcontainer container 3f8cb34ae672752908f173baa61b22a1a6c42e8359c1e1c0a53e2f6b9b16fff9. Sep 12 17:41:35.867820 containerd[1456]: time="2025-09-12T17:41:35.867763929Z" level=info msg="StartContainer for \"3f8cb34ae672752908f173baa61b22a1a6c42e8359c1e1c0a53e2f6b9b16fff9\" returns successfully" Sep 12 17:41:36.322206 kubelet[2519]: E0912 17:41:36.322127 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lplbc" podUID="7bc16a13-1355-4530-b199-c12f8c96fcdd" Sep 12 17:41:36.466334 kubelet[2519]: I0912 17:41:36.466271 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:41:36.466692 kubelet[2519]: E0912 17:41:36.466635 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:37.385133 systemd[1]: cri-containerd-3f8cb34ae672752908f173baa61b22a1a6c42e8359c1e1c0a53e2f6b9b16fff9.scope: Deactivated successfully. Sep 12 17:41:37.412919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f8cb34ae672752908f173baa61b22a1a6c42e8359c1e1c0a53e2f6b9b16fff9-rootfs.mount: Deactivated successfully. 
Sep 12 17:41:37.424192 kubelet[2519]: I0912 17:41:37.424151 2519 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:41:37.456264 kubelet[2519]: E0912 17:41:37.456179 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:37.712801 containerd[1456]: time="2025-09-12T17:41:37.710980861Z" level=info msg="shim disconnected" id=3f8cb34ae672752908f173baa61b22a1a6c42e8359c1e1c0a53e2f6b9b16fff9 namespace=k8s.io Sep 12 17:41:37.712801 containerd[1456]: time="2025-09-12T17:41:37.711048931Z" level=warning msg="cleaning up after shim disconnected" id=3f8cb34ae672752908f173baa61b22a1a6c42e8359c1e1c0a53e2f6b9b16fff9 namespace=k8s.io Sep 12 17:41:37.712801 containerd[1456]: time="2025-09-12T17:41:37.711057847Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:41:37.724175 systemd[1]: Created slice kubepods-besteffort-poda47cf293_3695_48ac_b538_51d286575612.slice - libcontainer container kubepods-besteffort-poda47cf293_3695_48ac_b538_51d286575612.slice. Sep 12 17:41:37.734839 systemd[1]: Created slice kubepods-besteffort-pod53e646fe_62f1_4e56_82b1_6d2004ca48b0.slice - libcontainer container kubepods-besteffort-pod53e646fe_62f1_4e56_82b1_6d2004ca48b0.slice. Sep 12 17:41:37.739188 kubelet[2519]: I0912 17:41:37.738955 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53e646fe-62f1-4e56-82b1-6d2004ca48b0-tigera-ca-bundle\") pod \"calico-kube-controllers-86c7c74448-qxr9j\" (UID: \"53e646fe-62f1-4e56-82b1-6d2004ca48b0\") " pod="calico-system/calico-kube-controllers-86c7c74448-qxr9j" Sep 12 17:41:37.739188 kubelet[2519]: I0912 17:41:37.739025 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a47cf293-3695-48ac-b538-51d286575612-whisker-backend-key-pair\") pod \"whisker-77d9d656dc-fm5zv\" (UID: \"a47cf293-3695-48ac-b538-51d286575612\") " pod="calico-system/whisker-77d9d656dc-fm5zv" Sep 12 17:41:37.739188 kubelet[2519]: I0912 17:41:37.739056 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2hh6\" (UniqueName: \"kubernetes.io/projected/a47cf293-3695-48ac-b538-51d286575612-kube-api-access-k2hh6\") pod \"whisker-77d9d656dc-fm5zv\" (UID: \"a47cf293-3695-48ac-b538-51d286575612\") " pod="calico-system/whisker-77d9d656dc-fm5zv" Sep 12 17:41:37.739188 kubelet[2519]: I0912 17:41:37.739079 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5wsc\" (UniqueName: \"kubernetes.io/projected/53e646fe-62f1-4e56-82b1-6d2004ca48b0-kube-api-access-c5wsc\") pod \"calico-kube-controllers-86c7c74448-qxr9j\" (UID: \"53e646fe-62f1-4e56-82b1-6d2004ca48b0\") " pod="calico-system/calico-kube-controllers-86c7c74448-qxr9j" Sep 12 17:41:37.739188 kubelet[2519]: I0912 17:41:37.739103 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a47cf293-3695-48ac-b538-51d286575612-whisker-ca-bundle\") pod \"whisker-77d9d656dc-fm5zv\" (UID: \"a47cf293-3695-48ac-b538-51d286575612\") " pod="calico-system/whisker-77d9d656dc-fm5zv" Sep 12 17:41:37.755506 systemd[1]: Created slice 
kubepods-burstable-pod9db1f756_711b_4858_9a6e_e40374cd29db.slice - libcontainer container kubepods-burstable-pod9db1f756_711b_4858_9a6e_e40374cd29db.slice. Sep 12 17:41:37.768794 systemd[1]: Created slice kubepods-besteffort-pod1dd46bd3_f845_44a4_81fe_9f42d3bc81d7.slice - libcontainer container kubepods-besteffort-pod1dd46bd3_f845_44a4_81fe_9f42d3bc81d7.slice. Sep 12 17:41:37.778179 systemd[1]: Created slice kubepods-burstable-pod212a184c_5a29_4b83_a7ee_b13bac18f280.slice - libcontainer container kubepods-burstable-pod212a184c_5a29_4b83_a7ee_b13bac18f280.slice. Sep 12 17:41:37.789132 systemd[1]: Created slice kubepods-besteffort-pod5842d66e_40a9_47c3_82c8_fa74a7aa357d.slice - libcontainer container kubepods-besteffort-pod5842d66e_40a9_47c3_82c8_fa74a7aa357d.slice. Sep 12 17:41:37.795855 systemd[1]: Created slice kubepods-besteffort-podb66f1c9c_ade5_4ae5_9f94_5011eb972cbd.slice - libcontainer container kubepods-besteffort-podb66f1c9c_ade5_4ae5_9f94_5011eb972cbd.slice. Sep 12 17:41:37.840004 kubelet[2519]: I0912 17:41:37.839935 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l2lv\" (UniqueName: \"kubernetes.io/projected/9db1f756-711b-4858-9a6e-e40374cd29db-kube-api-access-6l2lv\") pod \"coredns-674b8bbfcf-j8td4\" (UID: \"9db1f756-711b-4858-9a6e-e40374cd29db\") " pod="kube-system/coredns-674b8bbfcf-j8td4" Sep 12 17:41:37.840004 kubelet[2519]: I0912 17:41:37.839994 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5842d66e-40a9-47c3-82c8-fa74a7aa357d-goldmane-key-pair\") pod \"goldmane-54d579b49d-hrtwm\" (UID: \"5842d66e-40a9-47c3-82c8-fa74a7aa357d\") " pod="calico-system/goldmane-54d579b49d-hrtwm" Sep 12 17:41:37.840004 kubelet[2519]: I0912 17:41:37.840017 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jv6l\" (UniqueName: \"kubernetes.io/projected/5842d66e-40a9-47c3-82c8-fa74a7aa357d-kube-api-access-8jv6l\") pod \"goldmane-54d579b49d-hrtwm\" (UID: \"5842d66e-40a9-47c3-82c8-fa74a7aa357d\") " pod="calico-system/goldmane-54d579b49d-hrtwm" Sep 12 17:41:37.840383 kubelet[2519]: I0912 17:41:37.840052 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5842d66e-40a9-47c3-82c8-fa74a7aa357d-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-hrtwm\" (UID: \"5842d66e-40a9-47c3-82c8-fa74a7aa357d\") " pod="calico-system/goldmane-54d579b49d-hrtwm" Sep 12 17:41:37.840383 kubelet[2519]: I0912 17:41:37.840079 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5842d66e-40a9-47c3-82c8-fa74a7aa357d-config\") pod \"goldmane-54d579b49d-hrtwm\" (UID: \"5842d66e-40a9-47c3-82c8-fa74a7aa357d\") " pod="calico-system/goldmane-54d579b49d-hrtwm" Sep 12 17:41:37.840383 kubelet[2519]: I0912 17:41:37.840095 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c5nj\" (UniqueName: \"kubernetes.io/projected/b66f1c9c-ade5-4ae5-9f94-5011eb972cbd-kube-api-access-4c5nj\") pod \"calico-apiserver-6dc89697db-gkcqh\" (UID: \"b66f1c9c-ade5-4ae5-9f94-5011eb972cbd\") " pod="calico-apiserver/calico-apiserver-6dc89697db-gkcqh" Sep 12 17:41:37.840383 kubelet[2519]: I0912 17:41:37.840113 2519 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/212a184c-5a29-4b83-a7ee-b13bac18f280-config-volume\") pod \"coredns-674b8bbfcf-545s4\" (UID: \"212a184c-5a29-4b83-a7ee-b13bac18f280\") " pod="kube-system/coredns-674b8bbfcf-545s4" Sep 12 17:41:37.840383 kubelet[2519]: I0912 17:41:37.840129 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1dd46bd3-f845-44a4-81fe-9f42d3bc81d7-calico-apiserver-certs\") pod \"calico-apiserver-6dc89697db-nnwcs\" (UID: \"1dd46bd3-f845-44a4-81fe-9f42d3bc81d7\") " pod="calico-apiserver/calico-apiserver-6dc89697db-nnwcs" Sep 12 17:41:37.840530 kubelet[2519]: I0912 17:41:37.840146 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lktx\" (UniqueName: \"kubernetes.io/projected/1dd46bd3-f845-44a4-81fe-9f42d3bc81d7-kube-api-access-7lktx\") pod \"calico-apiserver-6dc89697db-nnwcs\" (UID: \"1dd46bd3-f845-44a4-81fe-9f42d3bc81d7\") " pod="calico-apiserver/calico-apiserver-6dc89697db-nnwcs" Sep 12 17:41:37.840530 kubelet[2519]: I0912 17:41:37.840174 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b66f1c9c-ade5-4ae5-9f94-5011eb972cbd-calico-apiserver-certs\") pod \"calico-apiserver-6dc89697db-gkcqh\" (UID: \"b66f1c9c-ade5-4ae5-9f94-5011eb972cbd\") " pod="calico-apiserver/calico-apiserver-6dc89697db-gkcqh" Sep 12 17:41:37.840530 kubelet[2519]: I0912 17:41:37.840197 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9db1f756-711b-4858-9a6e-e40374cd29db-config-volume\") pod \"coredns-674b8bbfcf-j8td4\" (UID: \"9db1f756-711b-4858-9a6e-e40374cd29db\") " pod="kube-system/coredns-674b8bbfcf-j8td4" Sep 12 17:41:37.840530 kubelet[2519]: I0912 17:41:37.840211 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc7k2\" (UniqueName: \"kubernetes.io/projected/212a184c-5a29-4b83-a7ee-b13bac18f280-kube-api-access-lc7k2\") pod \"coredns-674b8bbfcf-545s4\" (UID: \"212a184c-5a29-4b83-a7ee-b13bac18f280\") " pod="kube-system/coredns-674b8bbfcf-545s4" Sep 12 17:41:38.029947 containerd[1456]: time="2025-09-12T17:41:38.029803114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77d9d656dc-fm5zv,Uid:a47cf293-3695-48ac-b538-51d286575612,Namespace:calico-system,Attempt:0,}" Sep 12 17:41:38.041878 containerd[1456]: time="2025-09-12T17:41:38.041811926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86c7c74448-qxr9j,Uid:53e646fe-62f1-4e56-82b1-6d2004ca48b0,Namespace:calico-system,Attempt:0,}" Sep 12 17:41:38.093513 containerd[1456]: time="2025-09-12T17:41:38.093465519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hrtwm,Uid:5842d66e-40a9-47c3-82c8-fa74a7aa357d,Namespace:calico-system,Attempt:0,}" Sep 12 17:41:38.101729 containerd[1456]: time="2025-09-12T17:41:38.101692606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc89697db-gkcqh,Uid:b66f1c9c-ade5-4ae5-9f94-5011eb972cbd,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:41:38.329603 systemd[1]: Created slice kubepods-besteffort-pod7bc16a13_1355_4530_b199_c12f8c96fcdd.slice - libcontainer container 
kubepods-besteffort-pod7bc16a13_1355_4530_b199_c12f8c96fcdd.slice. Sep 12 17:41:38.334084 containerd[1456]: time="2025-09-12T17:41:38.333562279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lplbc,Uid:7bc16a13-1355-4530-b199-c12f8c96fcdd,Namespace:calico-system,Attempt:0,}" Sep 12 17:41:38.362158 kubelet[2519]: E0912 17:41:38.362103 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:38.363830 containerd[1456]: time="2025-09-12T17:41:38.363789716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j8td4,Uid:9db1f756-711b-4858-9a6e-e40374cd29db,Namespace:kube-system,Attempt:0,}" Sep 12 17:41:38.376270 containerd[1456]: time="2025-09-12T17:41:38.376166580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc89697db-nnwcs,Uid:1dd46bd3-f845-44a4-81fe-9f42d3bc81d7,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:41:38.384273 kubelet[2519]: E0912 17:41:38.383874 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:38.385525 containerd[1456]: time="2025-09-12T17:41:38.385478533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-545s4,Uid:212a184c-5a29-4b83-a7ee-b13bac18f280,Namespace:kube-system,Attempt:0,}" Sep 12 17:41:38.433086 containerd[1456]: time="2025-09-12T17:41:38.433013016Z" level=error msg="Failed to destroy network for sandbox \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.435745 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb-shm.mount: Deactivated successfully. Sep 12 17:41:38.437121 containerd[1456]: time="2025-09-12T17:41:38.436993712Z" level=error msg="Failed to destroy network for sandbox \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.439629 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c-shm.mount: Deactivated successfully. 
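Every sandbox failure in the entries that follow shares one root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file written by the calico/node container once it is running, and refuses to set up or tear down pod networking while the file is absent. A small sketch of that precondition check, under the assumption that it simply stats the file and wraps the error with the hint printed in the log (the real plugin does considerably more), is shown here.

```go
// Sketch of the precondition behind the repeated sandbox errors: Calico's
// CNI plugin expects /var/lib/calico/nodename to exist before handling
// add/delete calls. Illustrative only; not Calico's actual source.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// readNodename returns the node name recorded by calico/node, or an error
// carrying the same hint seen in the containerd/kubelet entries above.
func readNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// os.Stat's error already reads "stat /var/lib/calico/nodename: ...".
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}
```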
Sep 12 17:41:38.442464 containerd[1456]: time="2025-09-12T17:41:38.440638437Z" level=error msg="Failed to destroy network for sandbox \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.442464 containerd[1456]: time="2025-09-12T17:41:38.441923505Z" level=error msg="encountered an error cleaning up failed sandbox \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.442464 containerd[1456]: time="2025-09-12T17:41:38.442070385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hrtwm,Uid:5842d66e-40a9-47c3-82c8-fa74a7aa357d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.442701 containerd[1456]: time="2025-09-12T17:41:38.442661832Z" level=error msg="encountered an error cleaning up failed sandbox \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.442837 containerd[1456]: time="2025-09-12T17:41:38.442810956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86c7c74448-qxr9j,Uid:53e646fe-62f1-4e56-82b1-6d2004ca48b0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.443065 containerd[1456]: time="2025-09-12T17:41:38.443032689Z" level=error msg="encountered an error cleaning up failed sandbox \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.443227 containerd[1456]: time="2025-09-12T17:41:38.443152297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc89697db-gkcqh,Uid:b66f1c9c-ade5-4ae5-9f94-5011eb972cbd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.446102 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4-shm.mount: 
Deactivated successfully. Sep 12 17:41:38.462500 containerd[1456]: time="2025-09-12T17:41:38.462428345Z" level=error msg="Failed to destroy network for sandbox \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.463286 containerd[1456]: time="2025-09-12T17:41:38.463232308Z" level=error msg="encountered an error cleaning up failed sandbox \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.463472 containerd[1456]: time="2025-09-12T17:41:38.463435395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77d9d656dc-fm5zv,Uid:a47cf293-3695-48ac-b538-51d286575612,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.467850 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66-shm.mount: Deactivated successfully. Sep 12 17:41:38.469101 kubelet[2519]: E0912 17:41:38.469050 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.469612 kubelet[2519]: E0912 17:41:38.469132 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77d9d656dc-fm5zv" Sep 12 17:41:38.469612 kubelet[2519]: E0912 17:41:38.469158 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77d9d656dc-fm5zv" Sep 12 17:41:38.469612 kubelet[2519]: E0912 17:41:38.469213 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77d9d656dc-fm5zv_calico-system(a47cf293-3695-48ac-b538-51d286575612)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77d9d656dc-fm5zv_calico-system(a47cf293-3695-48ac-b538-51d286575612)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77d9d656dc-fm5zv" podUID="a47cf293-3695-48ac-b538-51d286575612" Sep 12 17:41:38.469749 kubelet[2519]: E0912 17:41:38.469301 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.469749 kubelet[2519]: E0912 17:41:38.469348 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86c7c74448-qxr9j" Sep 12 17:41:38.469749 kubelet[2519]: E0912 17:41:38.469373 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86c7c74448-qxr9j" Sep 12 17:41:38.469749 kubelet[2519]: E0912 17:41:38.469468 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.469860 kubelet[2519]: E0912 17:41:38.469647 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc89697db-gkcqh" Sep 12 17:41:38.469860 kubelet[2519]: E0912 17:41:38.469663 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc89697db-gkcqh" Sep 12 17:41:38.469860 kubelet[2519]: E0912 17:41:38.469731 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dc89697db-gkcqh_calico-apiserver(b66f1c9c-ade5-4ae5-9f94-5011eb972cbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6dc89697db-gkcqh_calico-apiserver(b66f1c9c-ade5-4ae5-9f94-5011eb972cbd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dc89697db-gkcqh" podUID="b66f1c9c-ade5-4ae5-9f94-5011eb972cbd" Sep 12 17:41:38.469954 kubelet[2519]: E0912 17:41:38.469505 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.469954 kubelet[2519]: E0912 17:41:38.469800 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hrtwm" Sep 12 17:41:38.469954 kubelet[2519]: E0912 17:41:38.469815 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hrtwm" Sep 12 17:41:38.470032 kubelet[2519]: E0912 17:41:38.469839 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-hrtwm_calico-system(5842d66e-40a9-47c3-82c8-fa74a7aa357d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-hrtwm_calico-system(5842d66e-40a9-47c3-82c8-fa74a7aa357d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-hrtwm" podUID="5842d66e-40a9-47c3-82c8-fa74a7aa357d" Sep 12 17:41:38.470032 kubelet[2519]: E0912 17:41:38.469914 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86c7c74448-qxr9j_calico-system(53e646fe-62f1-4e56-82b1-6d2004ca48b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86c7c74448-qxr9j_calico-system(53e646fe-62f1-4e56-82b1-6d2004ca48b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-86c7c74448-qxr9j" podUID="53e646fe-62f1-4e56-82b1-6d2004ca48b0" Sep 12 17:41:38.485721 kubelet[2519]: I0912 17:41:38.485096 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Sep 12 17:41:38.493341 containerd[1456]: time="2025-09-12T17:41:38.493293297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:41:38.498123 containerd[1456]: time="2025-09-12T17:41:38.498062795Z" level=info msg="StopPodSandbox for \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\"" Sep 12 17:41:38.499843 kubelet[2519]: I0912 17:41:38.498662 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Sep 12 17:41:38.500062 containerd[1456]: time="2025-09-12T17:41:38.500035734Z" level=info msg="Ensure that sandbox bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb in task-service has been cleanup successfully" Sep 12 17:41:38.506585 containerd[1456]: time="2025-09-12T17:41:38.506500724Z" level=info msg="StopPodSandbox for \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\"" Sep 12 17:41:38.523398 kubelet[2519]: I0912 17:41:38.523358 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Sep 12 17:41:38.534498 containerd[1456]: time="2025-09-12T17:41:38.534438066Z" level=info msg="StopPodSandbox for \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\"" Sep 12 17:41:38.534674 containerd[1456]: time="2025-09-12T17:41:38.534640382Z" level=info msg="Ensure that sandbox 7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4 in task-service has been cleanup successfully" Sep 12 17:41:38.540374 containerd[1456]: time="2025-09-12T17:41:38.540298132Z" level=info msg="Ensure that sandbox 03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c in task-service has been cleanup successfully" Sep 12 17:41:38.571712 containerd[1456]: time="2025-09-12T17:41:38.571634181Z" level=error msg="StopPodSandbox for \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\" failed" error="failed to destroy network for sandbox \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.572012 kubelet[2519]: E0912 17:41:38.571957 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Sep 12 17:41:38.572090 kubelet[2519]: E0912 17:41:38.572042 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb"} Sep 12 17:41:38.572140 kubelet[2519]: E0912 17:41:38.572113 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"53e646fe-62f1-4e56-82b1-6d2004ca48b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:41:38.572307 kubelet[2519]: E0912 17:41:38.572161 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"53e646fe-62f1-4e56-82b1-6d2004ca48b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86c7c74448-qxr9j" podUID="53e646fe-62f1-4e56-82b1-6d2004ca48b0" Sep 12 17:41:38.572780 containerd[1456]: time="2025-09-12T17:41:38.572601625Z" level=error msg="Failed to destroy network for sandbox \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.573176 containerd[1456]: time="2025-09-12T17:41:38.573144639Z" level=error msg="encountered an error cleaning up failed sandbox \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.573347 containerd[1456]: time="2025-09-12T17:41:38.573317819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j8td4,Uid:9db1f756-711b-4858-9a6e-e40374cd29db,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.573583 kubelet[2519]: E0912 17:41:38.573558 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.573721 kubelet[2519]: E0912 17:41:38.573694 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-j8td4" Sep 12 17:41:38.574235 kubelet[2519]: E0912 17:41:38.573793 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-j8td4" Sep 12 17:41:38.574235 kubelet[2519]: E0912 17:41:38.573855 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-j8td4_kube-system(9db1f756-711b-4858-9a6e-e40374cd29db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-j8td4_kube-system(9db1f756-711b-4858-9a6e-e40374cd29db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-j8td4" podUID="9db1f756-711b-4858-9a6e-e40374cd29db" Sep 12 17:41:38.592048 containerd[1456]: time="2025-09-12T17:41:38.589794945Z" level=error msg="Failed to destroy network for sandbox \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.592048 containerd[1456]: time="2025-09-12T17:41:38.590417432Z" level=error msg="encountered an error cleaning up failed sandbox \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.592048 containerd[1456]: time="2025-09-12T17:41:38.590492144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lplbc,Uid:7bc16a13-1355-4530-b199-c12f8c96fcdd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.592277 kubelet[2519]: E0912 17:41:38.590901 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.592277 kubelet[2519]: E0912 17:41:38.590987 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lplbc" Sep 12 17:41:38.592277 kubelet[2519]: E0912 17:41:38.591061 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lplbc" Sep 12 17:41:38.592432 kubelet[2519]: E0912 17:41:38.591131 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lplbc_calico-system(7bc16a13-1355-4530-b199-c12f8c96fcdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lplbc_calico-system(7bc16a13-1355-4530-b199-c12f8c96fcdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lplbc" podUID="7bc16a13-1355-4530-b199-c12f8c96fcdd" Sep 12 17:41:38.604193 containerd[1456]: time="2025-09-12T17:41:38.603462889Z" level=error msg="StopPodSandbox for \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\" failed" error="failed to destroy network for sandbox \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.604376 kubelet[2519]: E0912 17:41:38.603806 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Sep 12 17:41:38.604376 kubelet[2519]: E0912 17:41:38.603882 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c"} Sep 12 17:41:38.604376 kubelet[2519]: E0912 17:41:38.603919 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b66f1c9c-ade5-4ae5-9f94-5011eb972cbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:41:38.604376 kubelet[2519]: E0912 17:41:38.603968 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b66f1c9c-ade5-4ae5-9f94-5011eb972cbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6dc89697db-gkcqh" podUID="b66f1c9c-ade5-4ae5-9f94-5011eb972cbd" Sep 12 17:41:38.607123 containerd[1456]: time="2025-09-12T17:41:38.607056376Z" level=error msg="StopPodSandbox for \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\" failed" error="failed to destroy network for sandbox \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.607366 containerd[1456]: time="2025-09-12T17:41:38.607233504Z" level=error msg="Failed to destroy network for sandbox \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.607455 kubelet[2519]: E0912 17:41:38.607359 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Sep 12 17:41:38.607455 kubelet[2519]: E0912 17:41:38.607425 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4"} Sep 12 17:41:38.607540 kubelet[2519]: E0912 17:41:38.607475 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5842d66e-40a9-47c3-82c8-fa74a7aa357d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:41:38.607540 kubelet[2519]: E0912 17:41:38.607503 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5842d66e-40a9-47c3-82c8-fa74a7aa357d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-hrtwm" podUID="5842d66e-40a9-47c3-82c8-fa74a7aa357d" Sep 12 17:41:38.607839 containerd[1456]: time="2025-09-12T17:41:38.607806375Z" level=error msg="encountered an error cleaning up failed sandbox \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.607909 containerd[1456]: time="2025-09-12T17:41:38.607871088Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6dc89697db-nnwcs,Uid:1dd46bd3-f845-44a4-81fe-9f42d3bc81d7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.608099 kubelet[2519]: E0912 17:41:38.608063 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.608099 kubelet[2519]: E0912 17:41:38.608104 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc89697db-nnwcs" Sep 12 17:41:38.608341 kubelet[2519]: E0912 17:41:38.608125 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dc89697db-nnwcs" Sep 12 17:41:38.608341 kubelet[2519]: E0912 17:41:38.608171 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dc89697db-nnwcs_calico-apiserver(1dd46bd3-f845-44a4-81fe-9f42d3bc81d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dc89697db-nnwcs_calico-apiserver(1dd46bd3-f845-44a4-81fe-9f42d3bc81d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dc89697db-nnwcs" podUID="1dd46bd3-f845-44a4-81fe-9f42d3bc81d7" Sep 12 17:41:38.608929 containerd[1456]: time="2025-09-12T17:41:38.608887605Z" level=error msg="Failed to destroy network for sandbox \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.609345 containerd[1456]: time="2025-09-12T17:41:38.609311262Z" level=error msg="encountered an error cleaning up failed sandbox \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 12 17:41:38.609424 containerd[1456]: time="2025-09-12T17:41:38.609359896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-545s4,Uid:212a184c-5a29-4b83-a7ee-b13bac18f280,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.609625 kubelet[2519]: E0912 17:41:38.609567 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:38.609690 kubelet[2519]: E0912 17:41:38.609625 2519 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-545s4" Sep 12 17:41:38.609690 kubelet[2519]: E0912 17:41:38.609651 2519 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-545s4" Sep 12 17:41:38.609778 kubelet[2519]: E0912 17:41:38.609691 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-545s4_kube-system(212a184c-5a29-4b83-a7ee-b13bac18f280)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-545s4_kube-system(212a184c-5a29-4b83-a7ee-b13bac18f280)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-545s4" podUID="212a184c-5a29-4b83-a7ee-b13bac18f280" Sep 12 17:41:39.413596 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b-shm.mount: Deactivated successfully. Sep 12 17:41:39.413870 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021-shm.mount: Deactivated successfully. Sep 12 17:41:39.414185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83-shm.mount: Deactivated successfully. 
Sep 12 17:41:39.526536 kubelet[2519]: I0912 17:41:39.526479 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Sep 12 17:41:39.527350 containerd[1456]: time="2025-09-12T17:41:39.527263592Z" level=info msg="StopPodSandbox for \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\"" Sep 12 17:41:39.527644 containerd[1456]: time="2025-09-12T17:41:39.527484834Z" level=info msg="Ensure that sandbox a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66 in task-service has been cleanup successfully" Sep 12 17:41:39.528761 kubelet[2519]: I0912 17:41:39.528175 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Sep 12 17:41:39.528809 containerd[1456]: time="2025-09-12T17:41:39.528765062Z" level=info msg="StopPodSandbox for \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\"" Sep 12 17:41:39.528905 containerd[1456]: time="2025-09-12T17:41:39.528887735Z" level=info msg="Ensure that sandbox 8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021 in task-service has been cleanup successfully" Sep 12 17:41:39.530345 kubelet[2519]: I0912 17:41:39.530312 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Sep 12 17:41:39.531229 containerd[1456]: time="2025-09-12T17:41:39.530886052Z" level=info msg="StopPodSandbox for \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\"" Sep 12 17:41:39.531229 containerd[1456]: time="2025-09-12T17:41:39.531023644Z" level=info msg="Ensure that sandbox fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83 in task-service has been cleanup successfully" Sep 12 17:41:39.532759 kubelet[2519]: I0912 17:41:39.532736 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Sep 12 17:41:39.534349 containerd[1456]: time="2025-09-12T17:41:39.534235301Z" level=info msg="StopPodSandbox for \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\"" Sep 12 17:41:39.534553 containerd[1456]: time="2025-09-12T17:41:39.534520894Z" level=info msg="Ensure that sandbox 70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed in task-service has been cleanup successfully" Sep 12 17:41:39.539164 kubelet[2519]: I0912 17:41:39.539124 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Sep 12 17:41:39.539842 containerd[1456]: time="2025-09-12T17:41:39.539805530Z" level=info msg="StopPodSandbox for \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\"" Sep 12 17:41:39.540131 containerd[1456]: time="2025-09-12T17:41:39.540006052Z" level=info msg="Ensure that sandbox 425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b in task-service has been cleanup successfully" Sep 12 17:41:39.575469 containerd[1456]: time="2025-09-12T17:41:39.575208389Z" level=error msg="StopPodSandbox for \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\" failed" error="failed to destroy network for sandbox \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:39.575469 containerd[1456]: time="2025-09-12T17:41:39.575290756Z" level=error msg="StopPodSandbox for \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\" failed" error="failed to destroy network for sandbox \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:39.576848 kubelet[2519]: E0912 17:41:39.576778 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Sep 12 17:41:39.576934 kubelet[2519]: E0912 17:41:39.576852 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed"} Sep 12 17:41:39.576934 kubelet[2519]: E0912 17:41:39.576907 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"212a184c-5a29-4b83-a7ee-b13bac18f280\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:41:39.577075 kubelet[2519]: E0912 17:41:39.576938 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"212a184c-5a29-4b83-a7ee-b13bac18f280\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-545s4" podUID="212a184c-5a29-4b83-a7ee-b13bac18f280" Sep 12 17:41:39.578344 kubelet[2519]: E0912 17:41:39.578305 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Sep 12 17:41:39.578395 kubelet[2519]: E0912 17:41:39.578348 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021"} Sep 12 17:41:39.578395 kubelet[2519]: E0912 17:41:39.578379 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9db1f756-711b-4858-9a6e-e40374cd29db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:41:39.578465 kubelet[2519]: E0912 17:41:39.578408 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9db1f756-711b-4858-9a6e-e40374cd29db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-j8td4" podUID="9db1f756-711b-4858-9a6e-e40374cd29db" Sep 12 17:41:39.586654 containerd[1456]: time="2025-09-12T17:41:39.586499194Z" level=error msg="StopPodSandbox for \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\" failed" error="failed to destroy network for sandbox \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:39.586979 containerd[1456]: time="2025-09-12T17:41:39.586787984Z" level=error msg="StopPodSandbox for \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\" failed" error="failed to destroy network for sandbox \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:39.587023 kubelet[2519]: E0912 17:41:39.586874 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Sep 12 17:41:39.587023 kubelet[2519]: E0912 17:41:39.586950 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83"} Sep 12 17:41:39.587023 kubelet[2519]: E0912 17:41:39.587000 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7bc16a13-1355-4530-b199-c12f8c96fcdd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:41:39.587170 kubelet[2519]: E0912 17:41:39.587037 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7bc16a13-1355-4530-b199-c12f8c96fcdd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lplbc" podUID="7bc16a13-1355-4530-b199-c12f8c96fcdd" Sep 12 17:41:39.587374 kubelet[2519]: E0912 17:41:39.587309 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Sep 12 17:41:39.587374 kubelet[2519]: E0912 17:41:39.587373 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b"} Sep 12 17:41:39.587571 kubelet[2519]: E0912 17:41:39.587427 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1dd46bd3-f845-44a4-81fe-9f42d3bc81d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:41:39.587571 kubelet[2519]: E0912 17:41:39.587455 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1dd46bd3-f845-44a4-81fe-9f42d3bc81d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dc89697db-nnwcs" podUID="1dd46bd3-f845-44a4-81fe-9f42d3bc81d7" Sep 12 17:41:39.590469 containerd[1456]: time="2025-09-12T17:41:39.590409040Z" level=error msg="StopPodSandbox for \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\" failed" error="failed to destroy network for sandbox \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:41:39.590651 kubelet[2519]: E0912 17:41:39.590619 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Sep 12 17:41:39.590713 kubelet[2519]: E0912 17:41:39.590662 2519 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66"} Sep 12 17:41:39.590713 kubelet[2519]: E0912 17:41:39.590693 2519 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a47cf293-3695-48ac-b538-51d286575612\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:41:39.590884 kubelet[2519]: E0912 17:41:39.590715 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a47cf293-3695-48ac-b538-51d286575612\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77d9d656dc-fm5zv" podUID="a47cf293-3695-48ac-b538-51d286575612" Sep 12 17:41:44.296719 systemd[1]: Started sshd@7-10.0.0.139:22-10.0.0.1:54584.service - OpenSSH per-connection server daemon (10.0.0.1:54584). Sep 12 17:41:44.453187 sshd[3881]: Accepted publickey for core from 10.0.0.1 port 54584 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:41:44.455944 sshd[3881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:44.470056 systemd-logind[1445]: New session 8 of user core. Sep 12 17:41:44.477593 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:41:44.675517 sshd[3881]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:44.688528 systemd[1]: sshd@7-10.0.0.139:22-10.0.0.1:54584.service: Deactivated successfully. Sep 12 17:41:44.698553 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:41:44.700954 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:41:44.702829 systemd-logind[1445]: Removed session 8. Sep 12 17:41:46.341921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1522813680.mount: Deactivated successfully. 
Sep 12 17:41:47.449567 containerd[1456]: time="2025-09-12T17:41:47.449401934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:47.450643 containerd[1456]: time="2025-09-12T17:41:47.450587825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:41:47.452523 containerd[1456]: time="2025-09-12T17:41:47.452055974Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:47.458052 containerd[1456]: time="2025-09-12T17:41:47.457975604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:47.477394 containerd[1456]: time="2025-09-12T17:41:47.477346518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.983780421s" Sep 12 17:41:47.477394 containerd[1456]: time="2025-09-12T17:41:47.477390742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:41:47.499728 containerd[1456]: time="2025-09-12T17:41:47.499664707Z" level=info msg="CreateContainer within sandbox \"f49a011f5d70ec08b1bb67ee734a307276d1d0a1fa864132646ef4f6ecf447c2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:41:47.549666 containerd[1456]: time="2025-09-12T17:41:47.549593968Z" level=info msg="CreateContainer within sandbox \"f49a011f5d70ec08b1bb67ee734a307276d1d0a1fa864132646ef4f6ecf447c2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f8824176d062c908f311235adb3a8e8f45fd6176b4b3a3a52bc14706d4c7ad97\"" Sep 12 17:41:47.550492 containerd[1456]: time="2025-09-12T17:41:47.550450835Z" level=info msg="StartContainer for \"f8824176d062c908f311235adb3a8e8f45fd6176b4b3a3a52bc14706d4c7ad97\"" Sep 12 17:41:47.619488 systemd[1]: Started cri-containerd-f8824176d062c908f311235adb3a8e8f45fd6176b4b3a3a52bc14706d4c7ad97.scope - libcontainer container f8824176d062c908f311235adb3a8e8f45fd6176b4b3a3a52bc14706d4c7ad97. Sep 12 17:41:47.773507 containerd[1456]: time="2025-09-12T17:41:47.773296176Z" level=info msg="StartContainer for \"f8824176d062c908f311235adb3a8e8f45fd6176b4b3a3a52bc14706d4c7ad97\" returns successfully" Sep 12 17:41:47.809594 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:41:47.810903 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
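For scale, the PullImage result above reports 157078201 bytes for ghcr.io/flatcar/calico/node:v3.30.3 pulled in 8.983780421s; a quick back-of-the-envelope conversion of those two figures (illustrative only, not part of any tooling in this log):

    # Throughput implied by the PullImage line for calico/node:v3.30.3.
    size_bytes = 157_078_201      # "size \"157078201\"" in the pull result
    elapsed_s = 8.983780421       # "in 8.983780421s"

    print(f"{size_bytes / elapsed_s / 2**20:.1f} MiB/s")   # roughly 16.7 MiB/s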
Sep 12 17:41:47.910698 containerd[1456]: time="2025-09-12T17:41:47.910632614Z" level=info msg="StopPodSandbox for \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\"" Sep 12 17:41:48.113440 containerd[1456]: 2025-09-12 17:41:48.010 [INFO][3964] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Sep 12 17:41:48.113440 containerd[1456]: 2025-09-12 17:41:48.011 [INFO][3964] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" iface="eth0" netns="/var/run/netns/cni-ef35d3bc-69a7-3534-f82c-a17631257d2e" Sep 12 17:41:48.113440 containerd[1456]: 2025-09-12 17:41:48.011 [INFO][3964] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" iface="eth0" netns="/var/run/netns/cni-ef35d3bc-69a7-3534-f82c-a17631257d2e" Sep 12 17:41:48.113440 containerd[1456]: 2025-09-12 17:41:48.012 [INFO][3964] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" iface="eth0" netns="/var/run/netns/cni-ef35d3bc-69a7-3534-f82c-a17631257d2e" Sep 12 17:41:48.113440 containerd[1456]: 2025-09-12 17:41:48.012 [INFO][3964] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Sep 12 17:41:48.113440 containerd[1456]: 2025-09-12 17:41:48.012 [INFO][3964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Sep 12 17:41:48.113440 containerd[1456]: 2025-09-12 17:41:48.096 [INFO][3973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" HandleID="k8s-pod-network.a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Workload="localhost-k8s-whisker--77d9d656dc--fm5zv-eth0" Sep 12 17:41:48.113440 containerd[1456]: 2025-09-12 17:41:48.097 [INFO][3973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:48.113440 containerd[1456]: 2025-09-12 17:41:48.097 [INFO][3973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:48.113440 containerd[1456]: 2025-09-12 17:41:48.105 [WARNING][3973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" HandleID="k8s-pod-network.a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Workload="localhost-k8s-whisker--77d9d656dc--fm5zv-eth0" Sep 12 17:41:48.113440 containerd[1456]: 2025-09-12 17:41:48.105 [INFO][3973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" HandleID="k8s-pod-network.a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Workload="localhost-k8s-whisker--77d9d656dc--fm5zv-eth0" Sep 12 17:41:48.113440 containerd[1456]: 2025-09-12 17:41:48.106 [INFO][3973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:48.113440 containerd[1456]: 2025-09-12 17:41:48.110 [INFO][3964] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Sep 12 17:41:48.114116 containerd[1456]: time="2025-09-12T17:41:48.113734713Z" level=info msg="TearDown network for sandbox \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\" successfully" Sep 12 17:41:48.114116 containerd[1456]: time="2025-09-12T17:41:48.113769219Z" level=info msg="StopPodSandbox for \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\" returns successfully" Sep 12 17:41:48.224945 kubelet[2519]: I0912 17:41:48.224855 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a47cf293-3695-48ac-b538-51d286575612-whisker-backend-key-pair\") pod \"a47cf293-3695-48ac-b538-51d286575612\" (UID: \"a47cf293-3695-48ac-b538-51d286575612\") " Sep 12 17:41:48.224945 kubelet[2519]: I0912 17:41:48.224913 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a47cf293-3695-48ac-b538-51d286575612-whisker-ca-bundle\") pod \"a47cf293-3695-48ac-b538-51d286575612\" (UID: \"a47cf293-3695-48ac-b538-51d286575612\") " Sep 12 17:41:48.224945 kubelet[2519]: I0912 17:41:48.224937 2519 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2hh6\" (UniqueName: \"kubernetes.io/projected/a47cf293-3695-48ac-b538-51d286575612-kube-api-access-k2hh6\") pod \"a47cf293-3695-48ac-b538-51d286575612\" (UID: \"a47cf293-3695-48ac-b538-51d286575612\") " Sep 12 17:41:48.225827 kubelet[2519]: I0912 17:41:48.225641 2519 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a47cf293-3695-48ac-b538-51d286575612-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a47cf293-3695-48ac-b538-51d286575612" (UID: "a47cf293-3695-48ac-b538-51d286575612"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:41:48.230876 kubelet[2519]: I0912 17:41:48.230822 2519 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a47cf293-3695-48ac-b538-51d286575612-kube-api-access-k2hh6" (OuterVolumeSpecName: "kube-api-access-k2hh6") pod "a47cf293-3695-48ac-b538-51d286575612" (UID: "a47cf293-3695-48ac-b538-51d286575612"). InnerVolumeSpecName "kube-api-access-k2hh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:41:48.231046 kubelet[2519]: I0912 17:41:48.230895 2519 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a47cf293-3695-48ac-b538-51d286575612-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a47cf293-3695-48ac-b538-51d286575612" (UID: "a47cf293-3695-48ac-b538-51d286575612"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:41:48.326271 kubelet[2519]: I0912 17:41:48.326170 2519 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k2hh6\" (UniqueName: \"kubernetes.io/projected/a47cf293-3695-48ac-b538-51d286575612-kube-api-access-k2hh6\") on node \"localhost\" DevicePath \"\"" Sep 12 17:41:48.326271 kubelet[2519]: I0912 17:41:48.326227 2519 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a47cf293-3695-48ac-b538-51d286575612-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 17:41:48.326271 kubelet[2519]: I0912 17:41:48.326292 2519 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a47cf293-3695-48ac-b538-51d286575612-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 17:41:48.489081 systemd[1]: run-netns-cni\x2def35d3bc\x2d69a7\x2d3534\x2df82c\x2da17631257d2e.mount: Deactivated successfully. Sep 12 17:41:48.489260 systemd[1]: var-lib-kubelet-pods-a47cf293\x2d3695\x2d48ac\x2db538\x2d51d286575612-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk2hh6.mount: Deactivated successfully. Sep 12 17:41:48.489404 systemd[1]: var-lib-kubelet-pods-a47cf293\x2d3695\x2d48ac\x2db538\x2d51d286575612-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:41:48.574638 systemd[1]: Removed slice kubepods-besteffort-poda47cf293_3695_48ac_b538_51d286575612.slice - libcontainer container kubepods-besteffort-poda47cf293_3695_48ac_b538_51d286575612.slice. Sep 12 17:41:48.592458 kubelet[2519]: I0912 17:41:48.589051 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wn9x2" podStartSLOduration=1.58270899 podStartE2EDuration="22.589025417s" podCreationTimestamp="2025-09-12 17:41:26 +0000 UTC" firstStartedPulling="2025-09-12 17:41:26.472476708 +0000 UTC m=+19.248674328" lastFinishedPulling="2025-09-12 17:41:47.478793145 +0000 UTC m=+40.254990755" observedRunningTime="2025-09-12 17:41:48.58818949 +0000 UTC m=+41.364387270" watchObservedRunningTime="2025-09-12 17:41:48.589025417 +0000 UTC m=+41.365223037" Sep 12 17:41:48.687735 systemd[1]: Created slice kubepods-besteffort-pod6cd43fc9_5260_4dc6_8c77_9b65d0e02427.slice - libcontainer container kubepods-besteffort-pod6cd43fc9_5260_4dc6_8c77_9b65d0e02427.slice. 
Sep 12 17:41:48.730989 kubelet[2519]: I0912 17:41:48.730929 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6cd43fc9-5260-4dc6-8c77-9b65d0e02427-whisker-backend-key-pair\") pod \"whisker-5bd969bccd-8dfn8\" (UID: \"6cd43fc9-5260-4dc6-8c77-9b65d0e02427\") " pod="calico-system/whisker-5bd969bccd-8dfn8" Sep 12 17:41:48.731267 kubelet[2519]: I0912 17:41:48.731229 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cd43fc9-5260-4dc6-8c77-9b65d0e02427-whisker-ca-bundle\") pod \"whisker-5bd969bccd-8dfn8\" (UID: \"6cd43fc9-5260-4dc6-8c77-9b65d0e02427\") " pod="calico-system/whisker-5bd969bccd-8dfn8" Sep 12 17:41:48.731384 kubelet[2519]: I0912 17:41:48.731369 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6t6m\" (UniqueName: \"kubernetes.io/projected/6cd43fc9-5260-4dc6-8c77-9b65d0e02427-kube-api-access-x6t6m\") pod \"whisker-5bd969bccd-8dfn8\" (UID: \"6cd43fc9-5260-4dc6-8c77-9b65d0e02427\") " pod="calico-system/whisker-5bd969bccd-8dfn8" Sep 12 17:41:48.996200 containerd[1456]: time="2025-09-12T17:41:48.996103214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bd969bccd-8dfn8,Uid:6cd43fc9-5260-4dc6-8c77-9b65d0e02427,Namespace:calico-system,Attempt:0,}" Sep 12 17:41:49.169359 systemd-networkd[1395]: calib396477f914: Link UP Sep 12 17:41:49.172099 systemd-networkd[1395]: calib396477f914: Gained carrier Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.044 [INFO][4022] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.062 [INFO][4022] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5bd969bccd--8dfn8-eth0 whisker-5bd969bccd- calico-system 6cd43fc9-5260-4dc6-8c77-9b65d0e02427 977 0 2025-09-12 17:41:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5bd969bccd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5bd969bccd-8dfn8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib396477f914 [] [] }} ContainerID="081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" Namespace="calico-system" Pod="whisker-5bd969bccd-8dfn8" WorkloadEndpoint="localhost-k8s-whisker--5bd969bccd--8dfn8-" Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.062 [INFO][4022] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" Namespace="calico-system" Pod="whisker-5bd969bccd-8dfn8" WorkloadEndpoint="localhost-k8s-whisker--5bd969bccd--8dfn8-eth0" Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.097 [INFO][4035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" HandleID="k8s-pod-network.081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" Workload="localhost-k8s-whisker--5bd969bccd--8dfn8-eth0" Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.097 [INFO][4035] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" HandleID="k8s-pod-network.081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" Workload="localhost-k8s-whisker--5bd969bccd--8dfn8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000188ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5bd969bccd-8dfn8", "timestamp":"2025-09-12 17:41:49.097532927 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.097 [INFO][4035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.097 [INFO][4035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.098 [INFO][4035] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.106 [INFO][4035] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" host="localhost" Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.116 [INFO][4035] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.121 [INFO][4035] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.123 [INFO][4035] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.126 [INFO][4035] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.126 [INFO][4035] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" host="localhost" Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.129 [INFO][4035] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929 Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.136 [INFO][4035] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" host="localhost" Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.145 [INFO][4035] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" host="localhost" Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.145 [INFO][4035] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" host="localhost" Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.145 [INFO][4035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:41:49.216788 containerd[1456]: 2025-09-12 17:41:49.145 [INFO][4035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" HandleID="k8s-pod-network.081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" Workload="localhost-k8s-whisker--5bd969bccd--8dfn8-eth0" Sep 12 17:41:49.217536 containerd[1456]: 2025-09-12 17:41:49.155 [INFO][4022] cni-plugin/k8s.go 418: Populated endpoint ContainerID="081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" Namespace="calico-system" Pod="whisker-5bd969bccd-8dfn8" WorkloadEndpoint="localhost-k8s-whisker--5bd969bccd--8dfn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5bd969bccd--8dfn8-eth0", GenerateName:"whisker-5bd969bccd-", Namespace:"calico-system", SelfLink:"", UID:"6cd43fc9-5260-4dc6-8c77-9b65d0e02427", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bd969bccd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5bd969bccd-8dfn8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib396477f914", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:49.217536 containerd[1456]: 2025-09-12 17:41:49.155 [INFO][4022] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" Namespace="calico-system" Pod="whisker-5bd969bccd-8dfn8" WorkloadEndpoint="localhost-k8s-whisker--5bd969bccd--8dfn8-eth0" Sep 12 17:41:49.217536 containerd[1456]: 2025-09-12 17:41:49.155 [INFO][4022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib396477f914 ContainerID="081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" Namespace="calico-system" Pod="whisker-5bd969bccd-8dfn8" WorkloadEndpoint="localhost-k8s-whisker--5bd969bccd--8dfn8-eth0" Sep 12 17:41:49.217536 containerd[1456]: 2025-09-12 17:41:49.169 [INFO][4022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" Namespace="calico-system" Pod="whisker-5bd969bccd-8dfn8" WorkloadEndpoint="localhost-k8s-whisker--5bd969bccd--8dfn8-eth0" Sep 12 17:41:49.217536 containerd[1456]: 2025-09-12 17:41:49.176 [INFO][4022] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" Namespace="calico-system" Pod="whisker-5bd969bccd-8dfn8" WorkloadEndpoint="localhost-k8s-whisker--5bd969bccd--8dfn8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5bd969bccd--8dfn8-eth0", GenerateName:"whisker-5bd969bccd-", Namespace:"calico-system", SelfLink:"", UID:"6cd43fc9-5260-4dc6-8c77-9b65d0e02427", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bd969bccd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929", Pod:"whisker-5bd969bccd-8dfn8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib396477f914", MAC:"1a:c3:1d:e6:39:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:49.217536 containerd[1456]: 2025-09-12 17:41:49.198 [INFO][4022] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929" Namespace="calico-system" Pod="whisker-5bd969bccd-8dfn8" WorkloadEndpoint="localhost-k8s-whisker--5bd969bccd--8dfn8-eth0" Sep 12 17:41:49.274382 containerd[1456]: time="2025-09-12T17:41:49.271363192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:49.274382 containerd[1456]: time="2025-09-12T17:41:49.271472869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:49.274382 containerd[1456]: time="2025-09-12T17:41:49.271487999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:49.274382 containerd[1456]: time="2025-09-12T17:41:49.271632833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:49.305789 systemd[1]: Started cri-containerd-081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929.scope - libcontainer container 081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929. 
Sep 12 17:41:49.323934 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:41:49.326881 kubelet[2519]: I0912 17:41:49.325871 2519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a47cf293-3695-48ac-b538-51d286575612" path="/var/lib/kubelet/pods/a47cf293-3695-48ac-b538-51d286575612/volumes" Sep 12 17:41:49.327859 containerd[1456]: time="2025-09-12T17:41:49.327827292Z" level=info msg="StopPodSandbox for \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\"" Sep 12 17:41:49.405406 containerd[1456]: time="2025-09-12T17:41:49.405214208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bd969bccd-8dfn8,Uid:6cd43fc9-5260-4dc6-8c77-9b65d0e02427,Namespace:calico-system,Attempt:0,} returns sandbox id \"081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929\"" Sep 12 17:41:49.412464 containerd[1456]: time="2025-09-12T17:41:49.412063356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:41:49.507233 containerd[1456]: 2025-09-12 17:41:49.408 [INFO][4192] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Sep 12 17:41:49.507233 containerd[1456]: 2025-09-12 17:41:49.408 [INFO][4192] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" iface="eth0" netns="/var/run/netns/cni-b0f30e01-87db-0159-f0ae-da70ef550a6d" Sep 12 17:41:49.507233 containerd[1456]: 2025-09-12 17:41:49.410 [INFO][4192] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" iface="eth0" netns="/var/run/netns/cni-b0f30e01-87db-0159-f0ae-da70ef550a6d" Sep 12 17:41:49.507233 containerd[1456]: 2025-09-12 17:41:49.410 [INFO][4192] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" iface="eth0" netns="/var/run/netns/cni-b0f30e01-87db-0159-f0ae-da70ef550a6d" Sep 12 17:41:49.507233 containerd[1456]: 2025-09-12 17:41:49.410 [INFO][4192] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Sep 12 17:41:49.507233 containerd[1456]: 2025-09-12 17:41:49.410 [INFO][4192] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Sep 12 17:41:49.507233 containerd[1456]: 2025-09-12 17:41:49.487 [INFO][4229] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" HandleID="k8s-pod-network.03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Workload="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:41:49.507233 containerd[1456]: 2025-09-12 17:41:49.487 [INFO][4229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:49.507233 containerd[1456]: 2025-09-12 17:41:49.488 [INFO][4229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:49.507233 containerd[1456]: 2025-09-12 17:41:49.495 [WARNING][4229] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" HandleID="k8s-pod-network.03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Workload="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:41:49.507233 containerd[1456]: 2025-09-12 17:41:49.496 [INFO][4229] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" HandleID="k8s-pod-network.03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Workload="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:41:49.507233 containerd[1456]: 2025-09-12 17:41:49.498 [INFO][4229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:49.507233 containerd[1456]: 2025-09-12 17:41:49.502 [INFO][4192] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Sep 12 17:41:49.507850 containerd[1456]: time="2025-09-12T17:41:49.507544077Z" level=info msg="TearDown network for sandbox \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\" successfully" Sep 12 17:41:49.507850 containerd[1456]: time="2025-09-12T17:41:49.507578733Z" level=info msg="StopPodSandbox for \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\" returns successfully" Sep 12 17:41:49.509129 containerd[1456]: time="2025-09-12T17:41:49.508737371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc89697db-gkcqh,Uid:b66f1c9c-ade5-4ae5-9f94-5011eb972cbd,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:41:49.512044 systemd[1]: run-netns-cni\x2db0f30e01\x2d87db\x2d0159\x2df0ae\x2dda70ef550a6d.mount: Deactivated successfully. Sep 12 17:41:49.528405 kernel: bpftool[4249]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:41:49.662711 systemd-networkd[1395]: cali0764c52122a: Link UP Sep 12 17:41:49.664283 systemd-networkd[1395]: cali0764c52122a: Gained carrier Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.579 [INFO][4250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0 calico-apiserver-6dc89697db- calico-apiserver b66f1c9c-ade5-4ae5-9f94-5011eb972cbd 983 0 2025-09-12 17:41:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dc89697db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6dc89697db-gkcqh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0764c52122a [] [] }} ContainerID="ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-gkcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-" Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.580 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-gkcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.616 [INFO][4266] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" HandleID="k8s-pod-network.ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" Workload="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.616 [INFO][4266] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" HandleID="k8s-pod-network.ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" Workload="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7360), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6dc89697db-gkcqh", "timestamp":"2025-09-12 17:41:49.616159274 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.616 [INFO][4266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.616 [INFO][4266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.616 [INFO][4266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.627 [INFO][4266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" host="localhost" Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.633 [INFO][4266] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.637 [INFO][4266] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.638 [INFO][4266] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.640 [INFO][4266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.640 [INFO][4266] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" host="localhost" Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.643 [INFO][4266] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0 Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.648 [INFO][4266] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" host="localhost" Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.655 [INFO][4266] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" host="localhost" Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.655 [INFO][4266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" host="localhost" Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.655 [INFO][4266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:49.682970 containerd[1456]: 2025-09-12 17:41:49.655 [INFO][4266] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" HandleID="k8s-pod-network.ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" Workload="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:41:49.683648 containerd[1456]: 2025-09-12 17:41:49.659 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-gkcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0", GenerateName:"calico-apiserver-6dc89697db-", Namespace:"calico-apiserver", SelfLink:"", UID:"b66f1c9c-ade5-4ae5-9f94-5011eb972cbd", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc89697db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6dc89697db-gkcqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0764c52122a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:49.683648 containerd[1456]: 2025-09-12 17:41:49.659 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-gkcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:41:49.683648 containerd[1456]: 2025-09-12 17:41:49.660 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0764c52122a ContainerID="ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-gkcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:41:49.683648 containerd[1456]: 2025-09-12 17:41:49.665 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-gkcqh" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:41:49.683648 containerd[1456]: 2025-09-12 17:41:49.665 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-gkcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0", GenerateName:"calico-apiserver-6dc89697db-", Namespace:"calico-apiserver", SelfLink:"", UID:"b66f1c9c-ade5-4ae5-9f94-5011eb972cbd", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc89697db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0", Pod:"calico-apiserver-6dc89697db-gkcqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0764c52122a", MAC:"1e:5a:60:5b:8c:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:49.683648 containerd[1456]: 2025-09-12 17:41:49.677 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-gkcqh" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:41:49.691737 systemd[1]: Started sshd@8-10.0.0.139:22-10.0.0.1:54588.service - OpenSSH per-connection server daemon (10.0.0.1:54588). Sep 12 17:41:49.717452 containerd[1456]: time="2025-09-12T17:41:49.717057346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:49.717452 containerd[1456]: time="2025-09-12T17:41:49.717215265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:49.717452 containerd[1456]: time="2025-09-12T17:41:49.717309154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:49.718690 containerd[1456]: time="2025-09-12T17:41:49.718446542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:49.741936 systemd[1]: Started cri-containerd-ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0.scope - libcontainer container ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0. Sep 12 17:41:49.758784 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 54588 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:41:49.759394 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:41:49.760332 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:49.767180 systemd-logind[1445]: New session 9 of user core. Sep 12 17:41:49.775525 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:41:49.791751 containerd[1456]: time="2025-09-12T17:41:49.791680547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc89697db-gkcqh,Uid:b66f1c9c-ade5-4ae5-9f94-5011eb972cbd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0\"" Sep 12 17:41:49.862206 systemd-networkd[1395]: vxlan.calico: Link UP Sep 12 17:41:49.862412 systemd-networkd[1395]: vxlan.calico: Gained carrier Sep 12 17:41:49.938263 sshd[4299]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:49.942855 systemd[1]: sshd@8-10.0.0.139:22-10.0.0.1:54588.service: Deactivated successfully. Sep 12 17:41:49.945256 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:41:49.947345 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:41:49.948612 systemd-logind[1445]: Removed session 9. Sep 12 17:41:50.323619 containerd[1456]: time="2025-09-12T17:41:50.323439779Z" level=info msg="StopPodSandbox for \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\"" Sep 12 17:41:50.432442 containerd[1456]: 2025-09-12 17:41:50.386 [INFO][4440] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Sep 12 17:41:50.432442 containerd[1456]: 2025-09-12 17:41:50.387 [INFO][4440] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" iface="eth0" netns="/var/run/netns/cni-aa3c0253-00a3-ae8e-93b4-e4f9acc6d16a" Sep 12 17:41:50.432442 containerd[1456]: 2025-09-12 17:41:50.387 [INFO][4440] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" iface="eth0" netns="/var/run/netns/cni-aa3c0253-00a3-ae8e-93b4-e4f9acc6d16a" Sep 12 17:41:50.432442 containerd[1456]: 2025-09-12 17:41:50.387 [INFO][4440] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" iface="eth0" netns="/var/run/netns/cni-aa3c0253-00a3-ae8e-93b4-e4f9acc6d16a" Sep 12 17:41:50.432442 containerd[1456]: 2025-09-12 17:41:50.387 [INFO][4440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Sep 12 17:41:50.432442 containerd[1456]: 2025-09-12 17:41:50.387 [INFO][4440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Sep 12 17:41:50.432442 containerd[1456]: 2025-09-12 17:41:50.416 [INFO][4450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" HandleID="k8s-pod-network.70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Workload="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:41:50.432442 containerd[1456]: 2025-09-12 17:41:50.417 [INFO][4450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:50.432442 containerd[1456]: 2025-09-12 17:41:50.417 [INFO][4450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:50.432442 containerd[1456]: 2025-09-12 17:41:50.424 [WARNING][4450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" HandleID="k8s-pod-network.70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Workload="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:41:50.432442 containerd[1456]: 2025-09-12 17:41:50.424 [INFO][4450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" HandleID="k8s-pod-network.70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Workload="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:41:50.432442 containerd[1456]: 2025-09-12 17:41:50.425 [INFO][4450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:50.432442 containerd[1456]: 2025-09-12 17:41:50.429 [INFO][4440] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Sep 12 17:41:50.432906 containerd[1456]: time="2025-09-12T17:41:50.432511959Z" level=info msg="TearDown network for sandbox \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\" successfully" Sep 12 17:41:50.432906 containerd[1456]: time="2025-09-12T17:41:50.432545172Z" level=info msg="StopPodSandbox for \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\" returns successfully" Sep 12 17:41:50.432960 kubelet[2519]: E0912 17:41:50.432931 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:50.433367 containerd[1456]: time="2025-09-12T17:41:50.433344328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-545s4,Uid:212a184c-5a29-4b83-a7ee-b13bac18f280,Namespace:kube-system,Attempt:1,}" Sep 12 17:41:50.495906 systemd[1]: run-netns-cni\x2daa3c0253\x2d00a3\x2dae8e\x2d93b4\x2de4f9acc6d16a.mount: Deactivated successfully. 
Sep 12 17:41:50.523417 systemd-networkd[1395]: calib396477f914: Gained IPv6LL Sep 12 17:41:50.555612 systemd-networkd[1395]: cali908fef8736c: Link UP Sep 12 17:41:50.557303 systemd-networkd[1395]: cali908fef8736c: Gained carrier Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.483 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--545s4-eth0 coredns-674b8bbfcf- kube-system 212a184c-5a29-4b83-a7ee-b13bac18f280 999 0 2025-09-12 17:41:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-545s4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali908fef8736c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" Namespace="kube-system" Pod="coredns-674b8bbfcf-545s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--545s4-" Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.483 [INFO][4458] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" Namespace="kube-system" Pod="coredns-674b8bbfcf-545s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.509 [INFO][4473] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" HandleID="k8s-pod-network.3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" Workload="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.509 [INFO][4473] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" HandleID="k8s-pod-network.3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" Workload="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c78c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-545s4", "timestamp":"2025-09-12 17:41:50.509713368 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.509 [INFO][4473] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.509 [INFO][4473] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.510 [INFO][4473] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.517 [INFO][4473] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" host="localhost" Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.521 [INFO][4473] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.532 [INFO][4473] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.533 [INFO][4473] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.535 [INFO][4473] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.535 [INFO][4473] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" host="localhost" Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.537 [INFO][4473] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.542 [INFO][4473] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" host="localhost" Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.548 [INFO][4473] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" host="localhost" Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.548 [INFO][4473] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" host="localhost" Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.548 [INFO][4473] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
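The IPAM trace just above (like the matching ones for the whisker and apiserver pods earlier) follows a fixed sequence: take the host-wide lock, confirm the host's affinity for the 192.168.88.128/26 block, load the block, claim the next free address under a per-container handle, write the block back, and release the lock. The following is only a minimal Go sketch of that sequence; the Block type, the bitmap allocator and the in-memory "datastore" are illustrative assumptions, not Calico's actual ipam.go API.

package main

import (
	"errors"
	"fmt"
	"net"
	"sync"
)

// Block is an illustrative stand-in for a Calico IPAM affinity block:
// a /26 owned by one host, with a simple allocation bitmap.
type Block struct {
	CIDR      *net.IPNet
	Allocated [64]bool // one slot per address in the /26
	Handles   [64]string
}

var hostLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log

// assignOne mirrors the logged flow: lock, walk the affine block,
// claim the next free address under a handle, then "write" the block back.
func assignOne(b *Block, handleID string) (net.IP, error) {
	hostLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	base := b.CIDR.IP.To4()
	for i := range b.Allocated {
		if b.Allocated[i] {
			continue
		}
		b.Allocated[i] = true // claim the slot
		b.Handles[i] = handleID
		// A real implementation would now persist the block
		// ("Writing block in order to claim IPs").
		return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), nil
	}
	return nil, errors.New("block is full")
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	block := &Block{CIDR: cidr}
	// .128, .129 and .130 were already handed out earlier in the log.
	block.Allocated[0], block.Allocated[1], block.Allocated[2] = true, true, true

	ip, err := assignOne(block, "k8s-pod-network.3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e")
	if err != nil {
		panic(err)
	}
	fmt.Println("claimed", ip) // prints 192.168.88.131, as in the log above
}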
Sep 12 17:41:50.573559 containerd[1456]: 2025-09-12 17:41:50.548 [INFO][4473] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" HandleID="k8s-pod-network.3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" Workload="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:41:50.574196 containerd[1456]: 2025-09-12 17:41:50.552 [INFO][4458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" Namespace="kube-system" Pod="coredns-674b8bbfcf-545s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--545s4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"212a184c-5a29-4b83-a7ee-b13bac18f280", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-545s4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali908fef8736c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:50.574196 containerd[1456]: 2025-09-12 17:41:50.552 [INFO][4458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" Namespace="kube-system" Pod="coredns-674b8bbfcf-545s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:41:50.574196 containerd[1456]: 2025-09-12 17:41:50.552 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali908fef8736c ContainerID="3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" Namespace="kube-system" Pod="coredns-674b8bbfcf-545s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:41:50.574196 containerd[1456]: 2025-09-12 17:41:50.555 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" Namespace="kube-system" Pod="coredns-674b8bbfcf-545s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:41:50.574196 
containerd[1456]: 2025-09-12 17:41:50.556 [INFO][4458] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" Namespace="kube-system" Pod="coredns-674b8bbfcf-545s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--545s4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"212a184c-5a29-4b83-a7ee-b13bac18f280", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e", Pod:"coredns-674b8bbfcf-545s4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali908fef8736c", MAC:"46:84:2b:ca:92:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:50.574196 containerd[1456]: 2025-09-12 17:41:50.569 [INFO][4458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e" Namespace="kube-system" Pod="coredns-674b8bbfcf-545s4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:41:50.597855 containerd[1456]: time="2025-09-12T17:41:50.597686071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:50.597855 containerd[1456]: time="2025-09-12T17:41:50.597794626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:50.597855 containerd[1456]: time="2025-09-12T17:41:50.597811078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:50.598068 containerd[1456]: time="2025-09-12T17:41:50.597920455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:50.624413 systemd[1]: Started cri-containerd-3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e.scope - libcontainer container 3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e. Sep 12 17:41:50.638647 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:41:50.665372 containerd[1456]: time="2025-09-12T17:41:50.665321522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-545s4,Uid:212a184c-5a29-4b83-a7ee-b13bac18f280,Namespace:kube-system,Attempt:1,} returns sandbox id \"3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e\"" Sep 12 17:41:50.666046 kubelet[2519]: E0912 17:41:50.666001 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:50.672440 containerd[1456]: time="2025-09-12T17:41:50.672374813Z" level=info msg="CreateContainer within sandbox \"3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:41:50.696797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2164484107.mount: Deactivated successfully. Sep 12 17:41:50.705157 containerd[1456]: time="2025-09-12T17:41:50.703504870Z" level=info msg="CreateContainer within sandbox \"3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0a6596238e4923c40618c55a19bc3abe39e4e12c1238ca17687faec78b872d5\"" Sep 12 17:41:50.705948 containerd[1456]: time="2025-09-12T17:41:50.705912808Z" level=info msg="StartContainer for \"c0a6596238e4923c40618c55a19bc3abe39e4e12c1238ca17687faec78b872d5\"" Sep 12 17:41:50.739514 systemd[1]: Started cri-containerd-c0a6596238e4923c40618c55a19bc3abe39e4e12c1238ca17687faec78b872d5.scope - libcontainer container c0a6596238e4923c40618c55a19bc3abe39e4e12c1238ca17687faec78b872d5. 
Sep 12 17:41:50.780166 containerd[1456]: time="2025-09-12T17:41:50.780110037Z" level=info msg="StartContainer for \"c0a6596238e4923c40618c55a19bc3abe39e4e12c1238ca17687faec78b872d5\" returns successfully" Sep 12 17:41:50.836933 containerd[1456]: time="2025-09-12T17:41:50.836749778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:50.856042 containerd[1456]: time="2025-09-12T17:41:50.855930947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:41:50.858079 containerd[1456]: time="2025-09-12T17:41:50.858011624Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:50.860267 containerd[1456]: time="2025-09-12T17:41:50.860217159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:50.860931 containerd[1456]: time="2025-09-12T17:41:50.860900044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.448619808s" Sep 12 17:41:50.860980 containerd[1456]: time="2025-09-12T17:41:50.860933607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:41:50.866208 containerd[1456]: time="2025-09-12T17:41:50.866147339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:41:50.870895 containerd[1456]: time="2025-09-12T17:41:50.870853066Z" level=info msg="CreateContainer within sandbox \"081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:41:50.971463 systemd-networkd[1395]: vxlan.calico: Gained IPv6LL Sep 12 17:41:51.087880 containerd[1456]: time="2025-09-12T17:41:51.087706980Z" level=info msg="CreateContainer within sandbox \"081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"676e3075f4599cdd832e5f793fe5f45cedf4915262c74cee4bacec4d88647027\"" Sep 12 17:41:51.088798 containerd[1456]: time="2025-09-12T17:41:51.088755168Z" level=info msg="StartContainer for \"676e3075f4599cdd832e5f793fe5f45cedf4915262c74cee4bacec4d88647027\"" Sep 12 17:41:51.127550 systemd[1]: Started cri-containerd-676e3075f4599cdd832e5f793fe5f45cedf4915262c74cee4bacec4d88647027.scope - libcontainer container 676e3075f4599cdd832e5f793fe5f45cedf4915262c74cee4bacec4d88647027. 
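As a rough back-of-envelope figure from the two pull messages above: 4,661,291 bytes read in 1.448619808 s works out to roughly 3.2 MB/s for the whisker image, while the separately reported 6,153,986 is the size containerd records for the resolved image itself rather than the bytes moved during this particular pull.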
Sep 12 17:41:51.193114 containerd[1456]: time="2025-09-12T17:41:51.192974704Z" level=info msg="StartContainer for \"676e3075f4599cdd832e5f793fe5f45cedf4915262c74cee4bacec4d88647027\" returns successfully" Sep 12 17:41:51.327573 containerd[1456]: time="2025-09-12T17:41:51.327505457Z" level=info msg="StopPodSandbox for \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\"" Sep 12 17:41:51.328223 containerd[1456]: time="2025-09-12T17:41:51.327531025Z" level=info msg="StopPodSandbox for \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\"" Sep 12 17:41:51.358486 systemd-networkd[1395]: cali0764c52122a: Gained IPv6LL Sep 12 17:41:51.443004 containerd[1456]: 2025-09-12 17:41:51.396 [INFO][4637] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Sep 12 17:41:51.443004 containerd[1456]: 2025-09-12 17:41:51.396 [INFO][4637] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" iface="eth0" netns="/var/run/netns/cni-679d18cd-5c56-4979-1901-c6a683e1b20d" Sep 12 17:41:51.443004 containerd[1456]: 2025-09-12 17:41:51.397 [INFO][4637] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" iface="eth0" netns="/var/run/netns/cni-679d18cd-5c56-4979-1901-c6a683e1b20d" Sep 12 17:41:51.443004 containerd[1456]: 2025-09-12 17:41:51.397 [INFO][4637] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" iface="eth0" netns="/var/run/netns/cni-679d18cd-5c56-4979-1901-c6a683e1b20d" Sep 12 17:41:51.443004 containerd[1456]: 2025-09-12 17:41:51.397 [INFO][4637] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Sep 12 17:41:51.443004 containerd[1456]: 2025-09-12 17:41:51.397 [INFO][4637] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Sep 12 17:41:51.443004 containerd[1456]: 2025-09-12 17:41:51.425 [INFO][4656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" HandleID="k8s-pod-network.425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Workload="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:41:51.443004 containerd[1456]: 2025-09-12 17:41:51.425 [INFO][4656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:51.443004 containerd[1456]: 2025-09-12 17:41:51.425 [INFO][4656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:51.443004 containerd[1456]: 2025-09-12 17:41:51.433 [WARNING][4656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" HandleID="k8s-pod-network.425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Workload="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:41:51.443004 containerd[1456]: 2025-09-12 17:41:51.433 [INFO][4656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" HandleID="k8s-pod-network.425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Workload="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:41:51.443004 containerd[1456]: 2025-09-12 17:41:51.435 [INFO][4656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:51.443004 containerd[1456]: 2025-09-12 17:41:51.440 [INFO][4637] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Sep 12 17:41:51.443615 containerd[1456]: time="2025-09-12T17:41:51.443231466Z" level=info msg="TearDown network for sandbox \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\" successfully" Sep 12 17:41:51.443615 containerd[1456]: time="2025-09-12T17:41:51.443351843Z" level=info msg="StopPodSandbox for \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\" returns successfully" Sep 12 17:41:51.444262 containerd[1456]: time="2025-09-12T17:41:51.444218087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc89697db-nnwcs,Uid:1dd46bd3-f845-44a4-81fe-9f42d3bc81d7,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:41:51.451098 containerd[1456]: 2025-09-12 17:41:51.391 [INFO][4638] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Sep 12 17:41:51.451098 containerd[1456]: 2025-09-12 17:41:51.391 [INFO][4638] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" iface="eth0" netns="/var/run/netns/cni-8ad09332-56b4-8743-8a90-21a9fd328a57" Sep 12 17:41:51.451098 containerd[1456]: 2025-09-12 17:41:51.392 [INFO][4638] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" iface="eth0" netns="/var/run/netns/cni-8ad09332-56b4-8743-8a90-21a9fd328a57" Sep 12 17:41:51.451098 containerd[1456]: 2025-09-12 17:41:51.396 [INFO][4638] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" iface="eth0" netns="/var/run/netns/cni-8ad09332-56b4-8743-8a90-21a9fd328a57" Sep 12 17:41:51.451098 containerd[1456]: 2025-09-12 17:41:51.396 [INFO][4638] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Sep 12 17:41:51.451098 containerd[1456]: 2025-09-12 17:41:51.396 [INFO][4638] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Sep 12 17:41:51.451098 containerd[1456]: 2025-09-12 17:41:51.427 [INFO][4654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" HandleID="k8s-pod-network.7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Workload="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:41:51.451098 containerd[1456]: 2025-09-12 17:41:51.427 [INFO][4654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:51.451098 containerd[1456]: 2025-09-12 17:41:51.435 [INFO][4654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:51.451098 containerd[1456]: 2025-09-12 17:41:51.442 [WARNING][4654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" HandleID="k8s-pod-network.7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Workload="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:41:51.451098 containerd[1456]: 2025-09-12 17:41:51.442 [INFO][4654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" HandleID="k8s-pod-network.7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Workload="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:41:51.451098 containerd[1456]: 2025-09-12 17:41:51.443 [INFO][4654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:51.451098 containerd[1456]: 2025-09-12 17:41:51.447 [INFO][4638] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Sep 12 17:41:51.451972 containerd[1456]: time="2025-09-12T17:41:51.451270542Z" level=info msg="TearDown network for sandbox \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\" successfully" Sep 12 17:41:51.451972 containerd[1456]: time="2025-09-12T17:41:51.451300739Z" level=info msg="StopPodSandbox for \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\" returns successfully" Sep 12 17:41:51.452214 containerd[1456]: time="2025-09-12T17:41:51.452171120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hrtwm,Uid:5842d66e-40a9-47c3-82c8-fa74a7aa357d,Namespace:calico-system,Attempt:1,}" Sep 12 17:41:51.498390 systemd[1]: run-netns-cni\x2d679d18cd\x2d5c56\x2d4979\x2d1901\x2dc6a683e1b20d.mount: Deactivated successfully. Sep 12 17:41:51.498914 systemd[1]: run-netns-cni\x2d8ad09332\x2d56b4\x2d8743\x2d8a90\x2d21a9fd328a57.mount: Deactivated successfully. 
Sep 12 17:41:51.583923 kubelet[2519]: E0912 17:41:51.583870 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:51.598277 kubelet[2519]: I0912 17:41:51.598184 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-545s4" podStartSLOduration=39.598145415 podStartE2EDuration="39.598145415s" podCreationTimestamp="2025-09-12 17:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:41:51.59729371 +0000 UTC m=+44.373491340" watchObservedRunningTime="2025-09-12 17:41:51.598145415 +0000 UTC m=+44.374343035" Sep 12 17:41:51.615895 systemd-networkd[1395]: cali28de6226a93: Link UP Sep 12 17:41:51.616906 systemd-networkd[1395]: cali28de6226a93: Gained carrier Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.521 [INFO][4671] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0 calico-apiserver-6dc89697db- calico-apiserver 1dd46bd3-f845-44a4-81fe-9f42d3bc81d7 1018 0 2025-09-12 17:41:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dc89697db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6dc89697db-nnwcs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali28de6226a93 [] [] }} ContainerID="219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-nnwcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-" Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.521 [INFO][4671] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-nnwcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.557 [INFO][4698] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" HandleID="k8s-pod-network.219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" Workload="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.557 [INFO][4698] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" HandleID="k8s-pod-network.219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" Workload="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ae350), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6dc89697db-nnwcs", "timestamp":"2025-09-12 17:41:51.557002787 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:41:51.640832 
containerd[1456]: 2025-09-12 17:41:51.557 [INFO][4698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.557 [INFO][4698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.557 [INFO][4698] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.566 [INFO][4698] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" host="localhost" Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.572 [INFO][4698] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.577 [INFO][4698] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.579 [INFO][4698] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.582 [INFO][4698] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.582 [INFO][4698] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" host="localhost" Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.584 [INFO][4698] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.590 [INFO][4698] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" host="localhost" Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.598 [INFO][4698] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" host="localhost" Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.598 [INFO][4698] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" host="localhost" Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.598 [INFO][4698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:41:51.640832 containerd[1456]: 2025-09-12 17:41:51.598 [INFO][4698] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" HandleID="k8s-pod-network.219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" Workload="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:41:51.641479 containerd[1456]: 2025-09-12 17:41:51.605 [INFO][4671] cni-plugin/k8s.go 418: Populated endpoint ContainerID="219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-nnwcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0", GenerateName:"calico-apiserver-6dc89697db-", Namespace:"calico-apiserver", SelfLink:"", UID:"1dd46bd3-f845-44a4-81fe-9f42d3bc81d7", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc89697db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6dc89697db-nnwcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali28de6226a93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:51.641479 containerd[1456]: 2025-09-12 17:41:51.606 [INFO][4671] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-nnwcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:41:51.641479 containerd[1456]: 2025-09-12 17:41:51.606 [INFO][4671] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28de6226a93 ContainerID="219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-nnwcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:41:51.641479 containerd[1456]: 2025-09-12 17:41:51.619 [INFO][4671] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-nnwcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:41:51.641479 containerd[1456]: 2025-09-12 17:41:51.620 [INFO][4671] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-nnwcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0", GenerateName:"calico-apiserver-6dc89697db-", Namespace:"calico-apiserver", SelfLink:"", UID:"1dd46bd3-f845-44a4-81fe-9f42d3bc81d7", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc89697db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd", Pod:"calico-apiserver-6dc89697db-nnwcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali28de6226a93", MAC:"86:1c:7d:65:92:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:51.641479 containerd[1456]: 2025-09-12 17:41:51.633 [INFO][4671] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd" Namespace="calico-apiserver" Pod="calico-apiserver-6dc89697db-nnwcs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:41:51.666958 containerd[1456]: time="2025-09-12T17:41:51.666551790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:51.666958 containerd[1456]: time="2025-09-12T17:41:51.666622254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:51.666958 containerd[1456]: time="2025-09-12T17:41:51.666635609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:51.666958 containerd[1456]: time="2025-09-12T17:41:51.666744036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:51.702808 systemd[1]: Started cri-containerd-219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd.scope - libcontainer container 219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd. 
Sep 12 17:41:51.726635 systemd-networkd[1395]: califa9cc941912: Link UP Sep 12 17:41:51.728204 systemd-networkd[1395]: califa9cc941912: Gained carrier Sep 12 17:41:51.739346 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.533 [INFO][4683] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--hrtwm-eth0 goldmane-54d579b49d- calico-system 5842d66e-40a9-47c3-82c8-fa74a7aa357d 1017 0 2025-09-12 17:41:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-hrtwm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califa9cc941912 [] [] }} ContainerID="fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" Namespace="calico-system" Pod="goldmane-54d579b49d-hrtwm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hrtwm-" Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.533 [INFO][4683] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" Namespace="calico-system" Pod="goldmane-54d579b49d-hrtwm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.575 [INFO][4704] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" HandleID="k8s-pod-network.fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" Workload="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.576 [INFO][4704] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" HandleID="k8s-pod-network.fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" Workload="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001436b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-hrtwm", "timestamp":"2025-09-12 17:41:51.575683496 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.576 [INFO][4704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.598 [INFO][4704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.599 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.670 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" host="localhost" Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.681 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.689 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.692 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.695 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.695 [INFO][4704] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" host="localhost" Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.697 [INFO][4704] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108 Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.704 [INFO][4704] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" host="localhost" Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.713 [INFO][4704] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" host="localhost" Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.714 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" host="localhost" Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.714 [INFO][4704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:41:51.748622 containerd[1456]: 2025-09-12 17:41:51.714 [INFO][4704] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" HandleID="k8s-pod-network.fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" Workload="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:41:51.749387 containerd[1456]: 2025-09-12 17:41:51.719 [INFO][4683] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" Namespace="calico-system" Pod="goldmane-54d579b49d-hrtwm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--hrtwm-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5842d66e-40a9-47c3-82c8-fa74a7aa357d", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-hrtwm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califa9cc941912", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:51.749387 containerd[1456]: 2025-09-12 17:41:51.719 [INFO][4683] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" Namespace="calico-system" Pod="goldmane-54d579b49d-hrtwm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:41:51.749387 containerd[1456]: 2025-09-12 17:41:51.719 [INFO][4683] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa9cc941912 ContainerID="fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" Namespace="calico-system" Pod="goldmane-54d579b49d-hrtwm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:41:51.749387 containerd[1456]: 2025-09-12 17:41:51.728 [INFO][4683] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" Namespace="calico-system" Pod="goldmane-54d579b49d-hrtwm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:41:51.749387 containerd[1456]: 2025-09-12 17:41:51.729 [INFO][4683] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" Namespace="calico-system" Pod="goldmane-54d579b49d-hrtwm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--hrtwm-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5842d66e-40a9-47c3-82c8-fa74a7aa357d", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108", Pod:"goldmane-54d579b49d-hrtwm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califa9cc941912", MAC:"da:d6:64:2e:3d:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:51.749387 containerd[1456]: 2025-09-12 17:41:51.743 [INFO][4683] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108" Namespace="calico-system" Pod="goldmane-54d579b49d-hrtwm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:41:51.777615 containerd[1456]: time="2025-09-12T17:41:51.777203395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc89697db-nnwcs,Uid:1dd46bd3-f845-44a4-81fe-9f42d3bc81d7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd\"" Sep 12 17:41:51.779098 containerd[1456]: time="2025-09-12T17:41:51.778850149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:51.779098 containerd[1456]: time="2025-09-12T17:41:51.778916254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:51.779098 containerd[1456]: time="2025-09-12T17:41:51.778930230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:51.779098 containerd[1456]: time="2025-09-12T17:41:51.779035521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:51.803574 systemd-networkd[1395]: cali908fef8736c: Gained IPv6LL Sep 12 17:41:51.806495 systemd[1]: Started cri-containerd-fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108.scope - libcontainer container fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108. 
Sep 12 17:41:51.825976 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:41:51.860018 containerd[1456]: time="2025-09-12T17:41:51.859940359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hrtwm,Uid:5842d66e-40a9-47c3-82c8-fa74a7aa357d,Namespace:calico-system,Attempt:1,} returns sandbox id \"fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108\"" Sep 12 17:41:52.323530 containerd[1456]: time="2025-09-12T17:41:52.323449606Z" level=info msg="StopPodSandbox for \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\"" Sep 12 17:41:52.425767 containerd[1456]: 2025-09-12 17:41:52.384 [INFO][4830] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Sep 12 17:41:52.425767 containerd[1456]: 2025-09-12 17:41:52.384 [INFO][4830] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" iface="eth0" netns="/var/run/netns/cni-1e0db980-e0c4-08f4-8f7f-042bb2a62cbc" Sep 12 17:41:52.425767 containerd[1456]: 2025-09-12 17:41:52.384 [INFO][4830] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" iface="eth0" netns="/var/run/netns/cni-1e0db980-e0c4-08f4-8f7f-042bb2a62cbc" Sep 12 17:41:52.425767 containerd[1456]: 2025-09-12 17:41:52.385 [INFO][4830] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" iface="eth0" netns="/var/run/netns/cni-1e0db980-e0c4-08f4-8f7f-042bb2a62cbc" Sep 12 17:41:52.425767 containerd[1456]: 2025-09-12 17:41:52.385 [INFO][4830] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Sep 12 17:41:52.425767 containerd[1456]: 2025-09-12 17:41:52.385 [INFO][4830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Sep 12 17:41:52.425767 containerd[1456]: 2025-09-12 17:41:52.408 [INFO][4838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" HandleID="k8s-pod-network.bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Workload="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:41:52.425767 containerd[1456]: 2025-09-12 17:41:52.408 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:52.425767 containerd[1456]: 2025-09-12 17:41:52.408 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:52.425767 containerd[1456]: 2025-09-12 17:41:52.416 [WARNING][4838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" HandleID="k8s-pod-network.bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Workload="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:41:52.425767 containerd[1456]: 2025-09-12 17:41:52.416 [INFO][4838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" HandleID="k8s-pod-network.bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Workload="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:41:52.425767 containerd[1456]: 2025-09-12 17:41:52.418 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:52.425767 containerd[1456]: 2025-09-12 17:41:52.421 [INFO][4830] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Sep 12 17:41:52.426790 containerd[1456]: time="2025-09-12T17:41:52.426175704Z" level=info msg="TearDown network for sandbox \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\" successfully" Sep 12 17:41:52.426790 containerd[1456]: time="2025-09-12T17:41:52.426222312Z" level=info msg="StopPodSandbox for \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\" returns successfully" Sep 12 17:41:52.427101 containerd[1456]: time="2025-09-12T17:41:52.427060281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86c7c74448-qxr9j,Uid:53e646fe-62f1-4e56-82b1-6d2004ca48b0,Namespace:calico-system,Attempt:1,}" Sep 12 17:41:52.495392 systemd[1]: run-netns-cni\x2d1e0db980\x2de0c4\x2d08f4\x2d8f7f\x2d042bb2a62cbc.mount: Deactivated successfully. 
Sep 12 17:41:52.562738 systemd-networkd[1395]: caliba6c8b1b226: Link UP Sep 12 17:41:52.564110 systemd-networkd[1395]: caliba6c8b1b226: Gained carrier Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.478 [INFO][4847] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0 calico-kube-controllers-86c7c74448- calico-system 53e646fe-62f1-4e56-82b1-6d2004ca48b0 1046 0 2025-09-12 17:41:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86c7c74448 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-86c7c74448-qxr9j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliba6c8b1b226 [] [] }} ContainerID="077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" Namespace="calico-system" Pod="calico-kube-controllers-86c7c74448-qxr9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-" Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.478 [INFO][4847] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" Namespace="calico-system" Pod="calico-kube-controllers-86c7c74448-qxr9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.506 [INFO][4861] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" HandleID="k8s-pod-network.077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" Workload="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.506 [INFO][4861] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" HandleID="k8s-pod-network.077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" Workload="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-86c7c74448-qxr9j", "timestamp":"2025-09-12 17:41:52.506631057 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.506 [INFO][4861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.506 [INFO][4861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.506 [INFO][4861] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.515 [INFO][4861] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" host="localhost" Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.522 [INFO][4861] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.533 [INFO][4861] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.536 [INFO][4861] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.539 [INFO][4861] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.540 [INFO][4861] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" host="localhost" Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.542 [INFO][4861] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9 Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.546 [INFO][4861] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" host="localhost" Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.555 [INFO][4861] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" host="localhost" Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.555 [INFO][4861] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" host="localhost" Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.555 [INFO][4861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:41:52.586510 containerd[1456]: 2025-09-12 17:41:52.555 [INFO][4861] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" HandleID="k8s-pod-network.077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" Workload="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:41:52.587766 containerd[1456]: 2025-09-12 17:41:52.559 [INFO][4847] cni-plugin/k8s.go 418: Populated endpoint ContainerID="077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" Namespace="calico-system" Pod="calico-kube-controllers-86c7c74448-qxr9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0", GenerateName:"calico-kube-controllers-86c7c74448-", Namespace:"calico-system", SelfLink:"", UID:"53e646fe-62f1-4e56-82b1-6d2004ca48b0", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86c7c74448", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-86c7c74448-qxr9j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliba6c8b1b226", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:52.587766 containerd[1456]: 2025-09-12 17:41:52.559 [INFO][4847] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" Namespace="calico-system" Pod="calico-kube-controllers-86c7c74448-qxr9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:41:52.587766 containerd[1456]: 2025-09-12 17:41:52.559 [INFO][4847] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba6c8b1b226 ContainerID="077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" Namespace="calico-system" Pod="calico-kube-controllers-86c7c74448-qxr9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:41:52.587766 containerd[1456]: 2025-09-12 17:41:52.567 [INFO][4847] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" Namespace="calico-system" Pod="calico-kube-controllers-86c7c74448-qxr9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:41:52.587766 containerd[1456]: 2025-09-12 17:41:52.568 [INFO][4847] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" Namespace="calico-system" Pod="calico-kube-controllers-86c7c74448-qxr9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0", GenerateName:"calico-kube-controllers-86c7c74448-", Namespace:"calico-system", SelfLink:"", UID:"53e646fe-62f1-4e56-82b1-6d2004ca48b0", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86c7c74448", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9", Pod:"calico-kube-controllers-86c7c74448-qxr9j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliba6c8b1b226", MAC:"d2:31:6e:e7:92:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:52.587766 containerd[1456]: 2025-09-12 17:41:52.578 [INFO][4847] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9" Namespace="calico-system" Pod="calico-kube-controllers-86c7c74448-qxr9j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:41:52.608626 kubelet[2519]: E0912 17:41:52.607561 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:52.684801 containerd[1456]: time="2025-09-12T17:41:52.684617097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:52.684801 containerd[1456]: time="2025-09-12T17:41:52.684737265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:52.684801 containerd[1456]: time="2025-09-12T17:41:52.684755499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:52.685329 containerd[1456]: time="2025-09-12T17:41:52.684931393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:52.729453 systemd[1]: Started cri-containerd-077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9.scope - libcontainer container 077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9. Sep 12 17:41:52.744571 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:41:52.778612 containerd[1456]: time="2025-09-12T17:41:52.778558780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86c7c74448-qxr9j,Uid:53e646fe-62f1-4e56-82b1-6d2004ca48b0,Namespace:calico-system,Attempt:1,} returns sandbox id \"077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9\"" Sep 12 17:41:53.275635 systemd-networkd[1395]: cali28de6226a93: Gained IPv6LL Sep 12 17:41:53.323569 containerd[1456]: time="2025-09-12T17:41:53.323514772Z" level=info msg="StopPodSandbox for \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\"" Sep 12 17:41:53.324535 containerd[1456]: time="2025-09-12T17:41:53.324066739Z" level=info msg="StopPodSandbox for \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\"" Sep 12 17:41:53.375563 containerd[1456]: time="2025-09-12T17:41:53.375486914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:53.376968 containerd[1456]: time="2025-09-12T17:41:53.376800474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:41:53.378382 containerd[1456]: time="2025-09-12T17:41:53.378339611Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:53.381334 containerd[1456]: time="2025-09-12T17:41:53.381289653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:53.382263 containerd[1456]: time="2025-09-12T17:41:53.382212612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.516024898s" Sep 12 17:41:53.382308 containerd[1456]: time="2025-09-12T17:41:53.382268889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:41:53.384156 containerd[1456]: time="2025-09-12T17:41:53.384127902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:41:53.388848 containerd[1456]: time="2025-09-12T17:41:53.388815778Z" level=info msg="CreateContainer within sandbox \"ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:41:53.407741 containerd[1456]: time="2025-09-12T17:41:53.407588531Z" level=info msg="CreateContainer within sandbox \"ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"853a69448f8987101eca04a37d8799dde1fa4b67de47e0b43da59a6dbdedd147\"" Sep 12 17:41:53.409476 containerd[1456]: time="2025-09-12T17:41:53.409400035Z" level=info msg="StartContainer for \"853a69448f8987101eca04a37d8799dde1fa4b67de47e0b43da59a6dbdedd147\"" Sep 12 17:41:53.451395 systemd[1]: Started cri-containerd-853a69448f8987101eca04a37d8799dde1fa4b67de47e0b43da59a6dbdedd147.scope - libcontainer container 853a69448f8987101eca04a37d8799dde1fa4b67de47e0b43da59a6dbdedd147. Sep 12 17:41:53.455795 containerd[1456]: 2025-09-12 17:41:53.391 [INFO][4947] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Sep 12 17:41:53.455795 containerd[1456]: 2025-09-12 17:41:53.391 [INFO][4947] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" iface="eth0" netns="/var/run/netns/cni-15134bae-2ba6-a040-fb11-eee356c4fc7e" Sep 12 17:41:53.455795 containerd[1456]: 2025-09-12 17:41:53.392 [INFO][4947] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" iface="eth0" netns="/var/run/netns/cni-15134bae-2ba6-a040-fb11-eee356c4fc7e" Sep 12 17:41:53.455795 containerd[1456]: 2025-09-12 17:41:53.392 [INFO][4947] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" iface="eth0" netns="/var/run/netns/cni-15134bae-2ba6-a040-fb11-eee356c4fc7e" Sep 12 17:41:53.455795 containerd[1456]: 2025-09-12 17:41:53.392 [INFO][4947] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Sep 12 17:41:53.455795 containerd[1456]: 2025-09-12 17:41:53.392 [INFO][4947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Sep 12 17:41:53.455795 containerd[1456]: 2025-09-12 17:41:53.427 [INFO][4969] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" HandleID="k8s-pod-network.8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Workload="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:41:53.455795 containerd[1456]: 2025-09-12 17:41:53.428 [INFO][4969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:53.455795 containerd[1456]: 2025-09-12 17:41:53.428 [INFO][4969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:53.455795 containerd[1456]: 2025-09-12 17:41:53.436 [WARNING][4969] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" HandleID="k8s-pod-network.8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Workload="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:41:53.455795 containerd[1456]: 2025-09-12 17:41:53.436 [INFO][4969] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" HandleID="k8s-pod-network.8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Workload="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:41:53.455795 containerd[1456]: 2025-09-12 17:41:53.439 [INFO][4969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:53.455795 containerd[1456]: 2025-09-12 17:41:53.443 [INFO][4947] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Sep 12 17:41:53.455795 containerd[1456]: time="2025-09-12T17:41:53.455613401Z" level=info msg="TearDown network for sandbox \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\" successfully" Sep 12 17:41:53.455795 containerd[1456]: time="2025-09-12T17:41:53.455652665Z" level=info msg="StopPodSandbox for \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\" returns successfully" Sep 12 17:41:53.457923 containerd[1456]: time="2025-09-12T17:41:53.457141968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j8td4,Uid:9db1f756-711b-4858-9a6e-e40374cd29db,Namespace:kube-system,Attempt:1,}" Sep 12 17:41:53.457968 kubelet[2519]: E0912 17:41:53.456045 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:53.466152 containerd[1456]: 2025-09-12 17:41:53.393 [INFO][4946] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Sep 12 17:41:53.466152 containerd[1456]: 2025-09-12 17:41:53.394 [INFO][4946] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" iface="eth0" netns="/var/run/netns/cni-44cf5dd3-3468-0b00-e0ab-e221a388262c" Sep 12 17:41:53.466152 containerd[1456]: 2025-09-12 17:41:53.394 [INFO][4946] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" iface="eth0" netns="/var/run/netns/cni-44cf5dd3-3468-0b00-e0ab-e221a388262c" Sep 12 17:41:53.466152 containerd[1456]: 2025-09-12 17:41:53.395 [INFO][4946] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" iface="eth0" netns="/var/run/netns/cni-44cf5dd3-3468-0b00-e0ab-e221a388262c" Sep 12 17:41:53.466152 containerd[1456]: 2025-09-12 17:41:53.395 [INFO][4946] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Sep 12 17:41:53.466152 containerd[1456]: 2025-09-12 17:41:53.395 [INFO][4946] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Sep 12 17:41:53.466152 containerd[1456]: 2025-09-12 17:41:53.441 [INFO][4971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" HandleID="k8s-pod-network.fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Workload="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:41:53.466152 containerd[1456]: 2025-09-12 17:41:53.442 [INFO][4971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:53.466152 containerd[1456]: 2025-09-12 17:41:53.442 [INFO][4971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:53.466152 containerd[1456]: 2025-09-12 17:41:53.450 [WARNING][4971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" HandleID="k8s-pod-network.fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Workload="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:41:53.466152 containerd[1456]: 2025-09-12 17:41:53.451 [INFO][4971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" HandleID="k8s-pod-network.fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Workload="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:41:53.466152 containerd[1456]: 2025-09-12 17:41:53.452 [INFO][4971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:53.466152 containerd[1456]: 2025-09-12 17:41:53.458 [INFO][4946] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Sep 12 17:41:53.466152 containerd[1456]: time="2025-09-12T17:41:53.465985241Z" level=info msg="TearDown network for sandbox \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\" successfully" Sep 12 17:41:53.466152 containerd[1456]: time="2025-09-12T17:41:53.466067656Z" level=info msg="StopPodSandbox for \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\" returns successfully" Sep 12 17:41:53.467580 containerd[1456]: time="2025-09-12T17:41:53.467535159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lplbc,Uid:7bc16a13-1355-4530-b199-c12f8c96fcdd,Namespace:calico-system,Attempt:1,}" Sep 12 17:41:53.496657 systemd[1]: run-netns-cni\x2d15134bae\x2d2ba6\x2da040\x2dfb11\x2deee356c4fc7e.mount: Deactivated successfully. Sep 12 17:41:53.496797 systemd[1]: run-netns-cni\x2d44cf5dd3\x2d3468\x2d0b00\x2de0ab\x2de221a388262c.mount: Deactivated successfully. 
Sep 12 17:41:53.531503 systemd-networkd[1395]: califa9cc941912: Gained IPv6LL Sep 12 17:41:53.642714 containerd[1456]: time="2025-09-12T17:41:53.642652286Z" level=info msg="StartContainer for \"853a69448f8987101eca04a37d8799dde1fa4b67de47e0b43da59a6dbdedd147\" returns successfully" Sep 12 17:41:53.664323 kubelet[2519]: E0912 17:41:53.664181 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:53.884178 systemd-networkd[1395]: calif2c74eaab6d: Link UP Sep 12 17:41:53.885937 systemd-networkd[1395]: calif2c74eaab6d: Gained carrier Sep 12 17:41:53.939723 kubelet[2519]: I0912 17:41:53.939531 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dc89697db-gkcqh" podStartSLOduration=27.349301924 podStartE2EDuration="30.939507676s" podCreationTimestamp="2025-09-12 17:41:23 +0000 UTC" firstStartedPulling="2025-09-12 17:41:49.793672247 +0000 UTC m=+42.569869867" lastFinishedPulling="2025-09-12 17:41:53.383877999 +0000 UTC m=+46.160075619" observedRunningTime="2025-09-12 17:41:53.669141576 +0000 UTC m=+46.445339206" watchObservedRunningTime="2025-09-12 17:41:53.939507676 +0000 UTC m=+46.715705296" Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.650 [INFO][5012] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--j8td4-eth0 coredns-674b8bbfcf- kube-system 9db1f756-711b-4858-9a6e-e40374cd29db 1057 0 2025-09-12 17:41:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-j8td4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif2c74eaab6d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j8td4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j8td4-" Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.651 [INFO][5012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j8td4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.707 [INFO][5054] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" HandleID="k8s-pod-network.516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" Workload="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.708 [INFO][5054] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" HandleID="k8s-pod-network.516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" Workload="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-j8td4", "timestamp":"2025-09-12 17:41:53.707621714 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.708 [INFO][5054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.708 [INFO][5054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.708 [INFO][5054] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.718 [INFO][5054] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" host="localhost" Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.728 [INFO][5054] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.736 [INFO][5054] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.740 [INFO][5054] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.746 [INFO][5054] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.746 [INFO][5054] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" host="localhost" Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.748 [INFO][5054] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.766 [INFO][5054] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" host="localhost" Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.869 [INFO][5054] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" host="localhost" Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.869 [INFO][5054] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" host="localhost" Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.869 [INFO][5054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:41:53.944456 containerd[1456]: 2025-09-12 17:41:53.869 [INFO][5054] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" HandleID="k8s-pod-network.516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" Workload="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:41:53.945042 containerd[1456]: 2025-09-12 17:41:53.876 [INFO][5012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j8td4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--j8td4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9db1f756-711b-4858-9a6e-e40374cd29db", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-j8td4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2c74eaab6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:53.945042 containerd[1456]: 2025-09-12 17:41:53.876 [INFO][5012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j8td4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:41:53.945042 containerd[1456]: 2025-09-12 17:41:53.876 [INFO][5012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2c74eaab6d ContainerID="516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j8td4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:41:53.945042 containerd[1456]: 2025-09-12 17:41:53.886 [INFO][5012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j8td4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:41:53.945042 
containerd[1456]: 2025-09-12 17:41:53.889 [INFO][5012] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j8td4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--j8td4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9db1f756-711b-4858-9a6e-e40374cd29db", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d", Pod:"coredns-674b8bbfcf-j8td4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2c74eaab6d", MAC:"4a:c5:04:11:95:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:53.945042 containerd[1456]: 2025-09-12 17:41:53.940 [INFO][5012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d" Namespace="kube-system" Pod="coredns-674b8bbfcf-j8td4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:41:54.025326 containerd[1456]: time="2025-09-12T17:41:54.020084306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:54.025326 containerd[1456]: time="2025-09-12T17:41:54.020157995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:54.025326 containerd[1456]: time="2025-09-12T17:41:54.020189255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:54.025326 containerd[1456]: time="2025-09-12T17:41:54.020352674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:54.041317 systemd-networkd[1395]: calia888803215c: Link UP Sep 12 17:41:54.041587 systemd-networkd[1395]: calia888803215c: Gained carrier Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.715 [INFO][5041] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lplbc-eth0 csi-node-driver- calico-system 7bc16a13-1355-4530-b199-c12f8c96fcdd 1058 0 2025-09-12 17:41:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lplbc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia888803215c [] [] }} ContainerID="362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" Namespace="calico-system" Pod="csi-node-driver-lplbc" WorkloadEndpoint="localhost-k8s-csi--node--driver--lplbc-" Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.716 [INFO][5041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" Namespace="calico-system" Pod="csi-node-driver-lplbc" WorkloadEndpoint="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.761 [INFO][5065] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" HandleID="k8s-pod-network.362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" Workload="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.761 [INFO][5065] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" HandleID="k8s-pod-network.362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" Workload="localhost-k8s-csi--node--driver--lplbc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001236c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lplbc", "timestamp":"2025-09-12 17:41:53.761760201 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.762 [INFO][5065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.869 [INFO][5065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.869 [INFO][5065] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.888 [INFO][5065] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" host="localhost" Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.896 [INFO][5065] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.942 [INFO][5065] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.945 [INFO][5065] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.949 [INFO][5065] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.956 [INFO][5065] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" host="localhost" Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.959 [INFO][5065] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5 Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.979 [INFO][5065] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" host="localhost" Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.991 [INFO][5065] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" host="localhost" Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.993 [INFO][5065] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" host="localhost" Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.993 [INFO][5065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:41:54.066633 containerd[1456]: 2025-09-12 17:41:53.993 [INFO][5065] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" HandleID="k8s-pod-network.362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" Workload="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:41:54.068290 containerd[1456]: 2025-09-12 17:41:54.010 [INFO][5041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" Namespace="calico-system" Pod="csi-node-driver-lplbc" WorkloadEndpoint="localhost-k8s-csi--node--driver--lplbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lplbc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bc16a13-1355-4530-b199-c12f8c96fcdd", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lplbc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia888803215c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:54.068290 containerd[1456]: 2025-09-12 17:41:54.010 [INFO][5041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" Namespace="calico-system" Pod="csi-node-driver-lplbc" WorkloadEndpoint="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:41:54.068290 containerd[1456]: 2025-09-12 17:41:54.010 [INFO][5041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia888803215c ContainerID="362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" Namespace="calico-system" Pod="csi-node-driver-lplbc" WorkloadEndpoint="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:41:54.068290 containerd[1456]: 2025-09-12 17:41:54.045 [INFO][5041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" Namespace="calico-system" Pod="csi-node-driver-lplbc" WorkloadEndpoint="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:41:54.068290 containerd[1456]: 2025-09-12 17:41:54.047 [INFO][5041] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" Namespace="calico-system" Pod="csi-node-driver-lplbc" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--lplbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lplbc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bc16a13-1355-4530-b199-c12f8c96fcdd", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5", Pod:"csi-node-driver-lplbc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia888803215c", MAC:"7a:57:18:18:0c:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:41:54.068290 containerd[1456]: 2025-09-12 17:41:54.056 [INFO][5041] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5" Namespace="calico-system" Pod="csi-node-driver-lplbc" WorkloadEndpoint="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:41:54.085552 systemd[1]: Started cri-containerd-516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d.scope - libcontainer container 516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d. Sep 12 17:41:54.104594 containerd[1456]: time="2025-09-12T17:41:54.102509809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:41:54.104594 containerd[1456]: time="2025-09-12T17:41:54.102584690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:41:54.104594 containerd[1456]: time="2025-09-12T17:41:54.102598016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:54.104594 containerd[1456]: time="2025-09-12T17:41:54.102697164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:41:54.107303 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:41:54.138758 systemd[1]: Started cri-containerd-362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5.scope - libcontainer container 362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5. 
Sep 12 17:41:54.166039 containerd[1456]: time="2025-09-12T17:41:54.165907314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j8td4,Uid:9db1f756-711b-4858-9a6e-e40374cd29db,Namespace:kube-system,Attempt:1,} returns sandbox id \"516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d\"" Sep 12 17:41:54.168136 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:41:54.168516 kubelet[2519]: E0912 17:41:54.167807 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:54.176668 containerd[1456]: time="2025-09-12T17:41:54.176616056Z" level=info msg="CreateContainer within sandbox \"516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:41:54.191193 containerd[1456]: time="2025-09-12T17:41:54.189790572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lplbc,Uid:7bc16a13-1355-4530-b199-c12f8c96fcdd,Namespace:calico-system,Attempt:1,} returns sandbox id \"362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5\"" Sep 12 17:41:54.199117 containerd[1456]: time="2025-09-12T17:41:54.198976748Z" level=info msg="CreateContainer within sandbox \"516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c6cbe4e3ccef249a7946f556d28d95e630d0611c0b776e5d6adb01d7f2bcb3a7\"" Sep 12 17:41:54.200336 containerd[1456]: time="2025-09-12T17:41:54.199703887Z" level=info msg="StartContainer for \"c6cbe4e3ccef249a7946f556d28d95e630d0611c0b776e5d6adb01d7f2bcb3a7\"" Sep 12 17:41:54.239444 systemd[1]: Started cri-containerd-c6cbe4e3ccef249a7946f556d28d95e630d0611c0b776e5d6adb01d7f2bcb3a7.scope - libcontainer container c6cbe4e3ccef249a7946f556d28d95e630d0611c0b776e5d6adb01d7f2bcb3a7. Sep 12 17:41:54.282712 containerd[1456]: time="2025-09-12T17:41:54.282659063Z" level=info msg="StartContainer for \"c6cbe4e3ccef249a7946f556d28d95e630d0611c0b776e5d6adb01d7f2bcb3a7\" returns successfully" Sep 12 17:41:54.496363 systemd-networkd[1395]: caliba6c8b1b226: Gained IPv6LL Sep 12 17:41:54.496395 systemd[1]: run-containerd-runc-k8s.io-516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d-runc.wyQc3t.mount: Deactivated successfully. 
Sep 12 17:41:54.668045 kubelet[2519]: E0912 17:41:54.667790 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:54.669343 kubelet[2519]: I0912 17:41:54.669325 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:41:54.683770 kubelet[2519]: I0912 17:41:54.683523 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-j8td4" podStartSLOduration=42.683501812 podStartE2EDuration="42.683501812s" podCreationTimestamp="2025-09-12 17:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:41:54.682835098 +0000 UTC m=+47.459032738" watchObservedRunningTime="2025-09-12 17:41:54.683501812 +0000 UTC m=+47.459699432" Sep 12 17:41:54.952057 systemd[1]: Started sshd@9-10.0.0.139:22-10.0.0.1:33080.service - OpenSSH per-connection server daemon (10.0.0.1:33080). Sep 12 17:41:55.011749 sshd[5227]: Accepted publickey for core from 10.0.0.1 port 33080 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:41:55.014223 sshd[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:55.021464 systemd-logind[1445]: New session 10 of user core. Sep 12 17:41:55.031508 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:41:55.240948 sshd[5227]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:55.246102 systemd[1]: sshd@9-10.0.0.139:22-10.0.0.1:33080.service: Deactivated successfully. Sep 12 17:41:55.248317 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:41:55.249827 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:41:55.250941 systemd-logind[1445]: Removed session 10. Sep 12 17:41:55.643827 systemd-networkd[1395]: calia888803215c: Gained IPv6LL Sep 12 17:41:55.672524 kubelet[2519]: E0912 17:41:55.672487 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:55.807570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1447329097.mount: Deactivated successfully. 
Sep 12 17:41:55.835444 systemd-networkd[1395]: calif2c74eaab6d: Gained IPv6LL Sep 12 17:41:55.963778 containerd[1456]: time="2025-09-12T17:41:55.963715510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:55.964542 containerd[1456]: time="2025-09-12T17:41:55.964491861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:41:55.965913 containerd[1456]: time="2025-09-12T17:41:55.965802614Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:55.967892 containerd[1456]: time="2025-09-12T17:41:55.967860613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:55.968634 containerd[1456]: time="2025-09-12T17:41:55.968602669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.58444589s" Sep 12 17:41:55.968698 containerd[1456]: time="2025-09-12T17:41:55.968633828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:41:55.970617 containerd[1456]: time="2025-09-12T17:41:55.970544587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:41:55.974460 containerd[1456]: time="2025-09-12T17:41:55.974411163Z" level=info msg="CreateContainer within sandbox \"081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:41:55.989306 containerd[1456]: time="2025-09-12T17:41:55.989255064Z" level=info msg="CreateContainer within sandbox \"081c3d21e5ae6a7a6dfe91f84fc41a8800e6ce6fd64b5cf905705a8f7a11c929\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8120a77107226f5a01745a1caa265f1189eb9f518483b2f6c90548c754326e34\"" Sep 12 17:41:55.991652 containerd[1456]: time="2025-09-12T17:41:55.991610787Z" level=info msg="StartContainer for \"8120a77107226f5a01745a1caa265f1189eb9f518483b2f6c90548c754326e34\"" Sep 12 17:41:56.032496 systemd[1]: Started cri-containerd-8120a77107226f5a01745a1caa265f1189eb9f518483b2f6c90548c754326e34.scope - libcontainer container 8120a77107226f5a01745a1caa265f1189eb9f518483b2f6c90548c754326e34. 
Sep 12 17:41:56.480158 containerd[1456]: time="2025-09-12T17:41:56.480088137Z" level=info msg="StartContainer for \"8120a77107226f5a01745a1caa265f1189eb9f518483b2f6c90548c754326e34\" returns successfully" Sep 12 17:41:56.532092 containerd[1456]: time="2025-09-12T17:41:56.532010708Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:56.532912 containerd[1456]: time="2025-09-12T17:41:56.532814821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:41:56.535219 containerd[1456]: time="2025-09-12T17:41:56.535166455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 564.592421ms" Sep 12 17:41:56.535219 containerd[1456]: time="2025-09-12T17:41:56.535209667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:41:56.536445 containerd[1456]: time="2025-09-12T17:41:56.536407736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:41:56.542869 containerd[1456]: time="2025-09-12T17:41:56.542797828Z" level=info msg="CreateContainer within sandbox \"219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:41:56.559604 containerd[1456]: time="2025-09-12T17:41:56.559539076Z" level=info msg="CreateContainer within sandbox \"219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6f23437b7951ae8fa90ce89e84f492ca41af76680e0d392a1314f71847badb3b\"" Sep 12 17:41:56.560719 containerd[1456]: time="2025-09-12T17:41:56.560670800Z" level=info msg="StartContainer for \"6f23437b7951ae8fa90ce89e84f492ca41af76680e0d392a1314f71847badb3b\"" Sep 12 17:41:56.597489 systemd[1]: Started cri-containerd-6f23437b7951ae8fa90ce89e84f492ca41af76680e0d392a1314f71847badb3b.scope - libcontainer container 6f23437b7951ae8fa90ce89e84f492ca41af76680e0d392a1314f71847badb3b. 
Sep 12 17:41:56.649132 containerd[1456]: time="2025-09-12T17:41:56.649079033Z" level=info msg="StartContainer for \"6f23437b7951ae8fa90ce89e84f492ca41af76680e0d392a1314f71847badb3b\" returns successfully" Sep 12 17:41:56.686761 kubelet[2519]: E0912 17:41:56.686703 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:41:56.695172 kubelet[2519]: I0912 17:41:56.693963 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5bd969bccd-8dfn8" podStartSLOduration=2.135219139 podStartE2EDuration="8.693944307s" podCreationTimestamp="2025-09-12 17:41:48 +0000 UTC" firstStartedPulling="2025-09-12 17:41:49.410945173 +0000 UTC m=+42.187142793" lastFinishedPulling="2025-09-12 17:41:55.969670341 +0000 UTC m=+48.745867961" observedRunningTime="2025-09-12 17:41:56.691219015 +0000 UTC m=+49.467416655" watchObservedRunningTime="2025-09-12 17:41:56.693944307 +0000 UTC m=+49.470141927" Sep 12 17:41:56.814825 systemd[1]: run-containerd-runc-k8s.io-8120a77107226f5a01745a1caa265f1189eb9f518483b2f6c90548c754326e34-runc.zDlywT.mount: Deactivated successfully. Sep 12 17:41:57.687626 kubelet[2519]: I0912 17:41:57.687570 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:41:58.657073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount228881733.mount: Deactivated successfully. Sep 12 17:41:59.909997 containerd[1456]: time="2025-09-12T17:41:59.909847574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:59.912170 containerd[1456]: time="2025-09-12T17:41:59.912112681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:41:59.914272 containerd[1456]: time="2025-09-12T17:41:59.914207305Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:59.917886 containerd[1456]: time="2025-09-12T17:41:59.917842515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:59.919049 containerd[1456]: time="2025-09-12T17:41:59.919008412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.382567652s" Sep 12 17:41:59.919049 containerd[1456]: time="2025-09-12T17:41:59.919038979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:41:59.920227 containerd[1456]: time="2025-09-12T17:41:59.920191571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:41:59.925993 containerd[1456]: time="2025-09-12T17:41:59.925920574Z" level=info msg="CreateContainer within sandbox \"fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108\" for container 
&ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:41:59.944305 containerd[1456]: time="2025-09-12T17:41:59.944232060Z" level=info msg="CreateContainer within sandbox \"fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b7fd45072f19f6fb0025ba7da068a2a5e4f126bd07b2cafb5af7b98a3ff3f446\"" Sep 12 17:41:59.944929 containerd[1456]: time="2025-09-12T17:41:59.944902770Z" level=info msg="StartContainer for \"b7fd45072f19f6fb0025ba7da068a2a5e4f126bd07b2cafb5af7b98a3ff3f446\"" Sep 12 17:41:59.990677 systemd[1]: Started cri-containerd-b7fd45072f19f6fb0025ba7da068a2a5e4f126bd07b2cafb5af7b98a3ff3f446.scope - libcontainer container b7fd45072f19f6fb0025ba7da068a2a5e4f126bd07b2cafb5af7b98a3ff3f446. Sep 12 17:42:00.212425 containerd[1456]: time="2025-09-12T17:42:00.212351652Z" level=info msg="StartContainer for \"b7fd45072f19f6fb0025ba7da068a2a5e4f126bd07b2cafb5af7b98a3ff3f446\" returns successfully" Sep 12 17:42:00.257958 systemd[1]: Started sshd@10-10.0.0.139:22-10.0.0.1:37908.service - OpenSSH per-connection server daemon (10.0.0.1:37908). Sep 12 17:42:00.317489 sshd[5399]: Accepted publickey for core from 10.0.0.1 port 37908 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:42:00.319546 sshd[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:00.324054 systemd-logind[1445]: New session 11 of user core. Sep 12 17:42:00.336440 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:42:00.511391 sshd[5399]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:00.524545 systemd[1]: sshd@10-10.0.0.139:22-10.0.0.1:37908.service: Deactivated successfully. Sep 12 17:42:00.527532 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:42:00.530774 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:42:00.544945 systemd[1]: Started sshd@11-10.0.0.139:22-10.0.0.1:37914.service - OpenSSH per-connection server daemon (10.0.0.1:37914). Sep 12 17:42:00.546436 systemd-logind[1445]: Removed session 11. Sep 12 17:42:00.583092 sshd[5414]: Accepted publickey for core from 10.0.0.1 port 37914 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:42:00.585433 sshd[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:00.590717 systemd-logind[1445]: New session 12 of user core. Sep 12 17:42:00.604591 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 12 17:42:00.810358 kubelet[2519]: I0912 17:42:00.810184 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dc89697db-nnwcs" podStartSLOduration=33.053361071 podStartE2EDuration="37.810106207s" podCreationTimestamp="2025-09-12 17:41:23 +0000 UTC" firstStartedPulling="2025-09-12 17:41:51.779463212 +0000 UTC m=+44.555660832" lastFinishedPulling="2025-09-12 17:41:56.536208348 +0000 UTC m=+49.312405968" observedRunningTime="2025-09-12 17:41:56.709498486 +0000 UTC m=+49.485696116" watchObservedRunningTime="2025-09-12 17:42:00.810106207 +0000 UTC m=+53.586303827" Sep 12 17:42:01.427825 kubelet[2519]: I0912 17:42:01.427039 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-hrtwm" podStartSLOduration=28.369127641 podStartE2EDuration="36.427010713s" podCreationTimestamp="2025-09-12 17:41:25 +0000 UTC" firstStartedPulling="2025-09-12 17:41:51.862005686 +0000 UTC m=+44.638203306" lastFinishedPulling="2025-09-12 17:41:59.919888758 +0000 UTC m=+52.696086378" observedRunningTime="2025-09-12 17:42:00.809983956 +0000 UTC m=+53.586181576" watchObservedRunningTime="2025-09-12 17:42:01.427010713 +0000 UTC m=+54.203208333" Sep 12 17:42:01.435790 sshd[5414]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:01.446727 systemd[1]: sshd@11-10.0.0.139:22-10.0.0.1:37914.service: Deactivated successfully. Sep 12 17:42:01.449071 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:42:01.450902 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:42:01.466656 systemd[1]: Started sshd@12-10.0.0.139:22-10.0.0.1:37930.service - OpenSSH per-connection server daemon (10.0.0.1:37930). Sep 12 17:42:01.467979 systemd-logind[1445]: Removed session 12. Sep 12 17:42:01.503884 sshd[5452]: Accepted publickey for core from 10.0.0.1 port 37930 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:42:01.507537 sshd[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:01.516402 systemd-logind[1445]: New session 13 of user core. Sep 12 17:42:01.522640 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:42:01.789959 sshd[5452]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:01.793768 systemd[1]: sshd@12-10.0.0.139:22-10.0.0.1:37930.service: Deactivated successfully. Sep 12 17:42:01.796452 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:42:01.797403 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:42:01.798375 systemd-logind[1445]: Removed session 13. 
Sep 12 17:42:04.965905 containerd[1456]: time="2025-09-12T17:42:04.965812064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:04.967447 containerd[1456]: time="2025-09-12T17:42:04.967377273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:42:04.969293 containerd[1456]: time="2025-09-12T17:42:04.969217393Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:04.976608 containerd[1456]: time="2025-09-12T17:42:04.976527503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:04.977498 containerd[1456]: time="2025-09-12T17:42:04.977440279Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.057215646s" Sep 12 17:42:04.977498 containerd[1456]: time="2025-09-12T17:42:04.977489763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:42:04.978883 containerd[1456]: time="2025-09-12T17:42:04.978825517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:42:05.010155 containerd[1456]: time="2025-09-12T17:42:05.010077892Z" level=info msg="CreateContainer within sandbox \"077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:42:05.139362 containerd[1456]: time="2025-09-12T17:42:05.139276805Z" level=info msg="CreateContainer within sandbox \"077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2200fb188faefc68e8c4f5e81a938e019fa88f992c768b05ea7e73468e0cc5be\"" Sep 12 17:42:05.140099 containerd[1456]: time="2025-09-12T17:42:05.140048543Z" level=info msg="StartContainer for \"2200fb188faefc68e8c4f5e81a938e019fa88f992c768b05ea7e73468e0cc5be\"" Sep 12 17:42:05.214499 systemd[1]: Started cri-containerd-2200fb188faefc68e8c4f5e81a938e019fa88f992c768b05ea7e73468e0cc5be.scope - libcontainer container 2200fb188faefc68e8c4f5e81a938e019fa88f992c768b05ea7e73468e0cc5be. 
Sep 12 17:42:05.268150 containerd[1456]: time="2025-09-12T17:42:05.267973628Z" level=info msg="StartContainer for \"2200fb188faefc68e8c4f5e81a938e019fa88f992c768b05ea7e73468e0cc5be\" returns successfully" Sep 12 17:42:05.743039 kubelet[2519]: I0912 17:42:05.742963 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-86c7c74448-qxr9j" podStartSLOduration=27.544795193 podStartE2EDuration="39.74294046s" podCreationTimestamp="2025-09-12 17:41:26 +0000 UTC" firstStartedPulling="2025-09-12 17:41:52.780459484 +0000 UTC m=+45.556657104" lastFinishedPulling="2025-09-12 17:42:04.978604751 +0000 UTC m=+57.754802371" observedRunningTime="2025-09-12 17:42:05.742542979 +0000 UTC m=+58.518740609" watchObservedRunningTime="2025-09-12 17:42:05.74294046 +0000 UTC m=+58.519138090" Sep 12 17:42:06.806936 systemd[1]: Started sshd@13-10.0.0.139:22-10.0.0.1:37934.service - OpenSSH per-connection server daemon (10.0.0.1:37934). Sep 12 17:42:06.877608 sshd[5548]: Accepted publickey for core from 10.0.0.1 port 37934 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:42:06.879286 sshd[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:06.885982 systemd-logind[1445]: New session 14 of user core. Sep 12 17:42:06.893382 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:42:06.908268 containerd[1456]: time="2025-09-12T17:42:06.908207028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:06.909031 containerd[1456]: time="2025-09-12T17:42:06.908984578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:42:06.910866 containerd[1456]: time="2025-09-12T17:42:06.910809136Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:06.916673 containerd[1456]: time="2025-09-12T17:42:06.916611072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:06.917598 containerd[1456]: time="2025-09-12T17:42:06.917567369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.938695273s" Sep 12 17:42:06.917637 containerd[1456]: time="2025-09-12T17:42:06.917598498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:42:07.012813 containerd[1456]: time="2025-09-12T17:42:07.012763991Z" level=info msg="CreateContainer within sandbox \"362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:42:07.310797 containerd[1456]: time="2025-09-12T17:42:07.310416835Z" level=info msg="StopPodSandbox for \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\"" Sep 12 17:42:07.366282 containerd[1456]: time="2025-09-12T17:42:07.366176648Z" 
level=info msg="CreateContainer within sandbox \"362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b59d71312927291fcb48dacd47f2a29535e5ce927a9c7a476300c61564b42253\"" Sep 12 17:42:07.367493 containerd[1456]: time="2025-09-12T17:42:07.367455996Z" level=info msg="StartContainer for \"b59d71312927291fcb48dacd47f2a29535e5ce927a9c7a476300c61564b42253\"" Sep 12 17:42:07.423846 systemd[1]: Started cri-containerd-b59d71312927291fcb48dacd47f2a29535e5ce927a9c7a476300c61564b42253.scope - libcontainer container b59d71312927291fcb48dacd47f2a29535e5ce927a9c7a476300c61564b42253. Sep 12 17:42:07.424101 sshd[5548]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:07.429094 systemd[1]: sshd@13-10.0.0.139:22-10.0.0.1:37934.service: Deactivated successfully. Sep 12 17:42:07.432960 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:42:07.436666 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:42:07.438574 systemd-logind[1445]: Removed session 14. Sep 12 17:42:07.462902 containerd[1456]: time="2025-09-12T17:42:07.462841566Z" level=info msg="StartContainer for \"b59d71312927291fcb48dacd47f2a29535e5ce927a9c7a476300c61564b42253\" returns successfully" Sep 12 17:42:07.465696 containerd[1456]: time="2025-09-12T17:42:07.465653679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:42:07.546639 containerd[1456]: 2025-09-12 17:42:07.498 [WARNING][5579] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0", GenerateName:"calico-apiserver-6dc89697db-", Namespace:"calico-apiserver", SelfLink:"", UID:"b66f1c9c-ade5-4ae5-9f94-5011eb972cbd", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc89697db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0", Pod:"calico-apiserver-6dc89697db-gkcqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0764c52122a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:07.546639 containerd[1456]: 2025-09-12 17:42:07.499 [INFO][5579] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Sep 12 17:42:07.546639 containerd[1456]: 2025-09-12 
17:42:07.499 [INFO][5579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" iface="eth0" netns="" Sep 12 17:42:07.546639 containerd[1456]: 2025-09-12 17:42:07.499 [INFO][5579] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Sep 12 17:42:07.546639 containerd[1456]: 2025-09-12 17:42:07.500 [INFO][5579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Sep 12 17:42:07.546639 containerd[1456]: 2025-09-12 17:42:07.527 [INFO][5618] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" HandleID="k8s-pod-network.03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Workload="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:42:07.546639 containerd[1456]: 2025-09-12 17:42:07.527 [INFO][5618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:07.546639 containerd[1456]: 2025-09-12 17:42:07.527 [INFO][5618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:07.546639 containerd[1456]: 2025-09-12 17:42:07.535 [WARNING][5618] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" HandleID="k8s-pod-network.03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Workload="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:42:07.546639 containerd[1456]: 2025-09-12 17:42:07.535 [INFO][5618] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" HandleID="k8s-pod-network.03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Workload="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:42:07.546639 containerd[1456]: 2025-09-12 17:42:07.537 [INFO][5618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:07.546639 containerd[1456]: 2025-09-12 17:42:07.542 [INFO][5579] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Sep 12 17:42:07.560492 containerd[1456]: time="2025-09-12T17:42:07.560422785Z" level=info msg="TearDown network for sandbox \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\" successfully" Sep 12 17:42:07.560492 containerd[1456]: time="2025-09-12T17:42:07.560468000Z" level=info msg="StopPodSandbox for \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\" returns successfully" Sep 12 17:42:07.597654 containerd[1456]: time="2025-09-12T17:42:07.597491610Z" level=info msg="RemovePodSandbox for \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\"" Sep 12 17:42:07.600014 containerd[1456]: time="2025-09-12T17:42:07.599973109Z" level=info msg="Forcibly stopping sandbox \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\"" Sep 12 17:42:07.680086 containerd[1456]: 2025-09-12 17:42:07.644 [WARNING][5635] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0", GenerateName:"calico-apiserver-6dc89697db-", Namespace:"calico-apiserver", SelfLink:"", UID:"b66f1c9c-ade5-4ae5-9f94-5011eb972cbd", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc89697db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec73fa9fa58503330376402d78d4c6c56479bb0547f53f26b73898cc0d16f8a0", Pod:"calico-apiserver-6dc89697db-gkcqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0764c52122a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:07.680086 containerd[1456]: 2025-09-12 17:42:07.644 [INFO][5635] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Sep 12 17:42:07.680086 containerd[1456]: 2025-09-12 17:42:07.644 [INFO][5635] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" iface="eth0" netns="" Sep 12 17:42:07.680086 containerd[1456]: 2025-09-12 17:42:07.644 [INFO][5635] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Sep 12 17:42:07.680086 containerd[1456]: 2025-09-12 17:42:07.644 [INFO][5635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Sep 12 17:42:07.680086 containerd[1456]: 2025-09-12 17:42:07.666 [INFO][5644] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" HandleID="k8s-pod-network.03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Workload="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:42:07.680086 containerd[1456]: 2025-09-12 17:42:07.666 [INFO][5644] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:07.680086 containerd[1456]: 2025-09-12 17:42:07.666 [INFO][5644] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:07.680086 containerd[1456]: 2025-09-12 17:42:07.672 [WARNING][5644] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" HandleID="k8s-pod-network.03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Workload="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:42:07.680086 containerd[1456]: 2025-09-12 17:42:07.672 [INFO][5644] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" HandleID="k8s-pod-network.03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Workload="localhost-k8s-calico--apiserver--6dc89697db--gkcqh-eth0" Sep 12 17:42:07.680086 containerd[1456]: 2025-09-12 17:42:07.674 [INFO][5644] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:07.680086 containerd[1456]: 2025-09-12 17:42:07.676 [INFO][5635] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c" Sep 12 17:42:07.681038 containerd[1456]: time="2025-09-12T17:42:07.680124112Z" level=info msg="TearDown network for sandbox \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\" successfully" Sep 12 17:42:07.757523 containerd[1456]: time="2025-09-12T17:42:07.757214582Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:42:07.757523 containerd[1456]: time="2025-09-12T17:42:07.757387308Z" level=info msg="RemovePodSandbox \"03b016cc625760135f9f81ff46831df4dc7cb2230cb73ade4768a0de40d5c99c\" returns successfully" Sep 12 17:42:07.764705 containerd[1456]: time="2025-09-12T17:42:07.764658507Z" level=info msg="StopPodSandbox for \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\"" Sep 12 17:42:07.849480 containerd[1456]: 2025-09-12 17:42:07.803 [WARNING][5661] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--j8td4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9db1f756-711b-4858-9a6e-e40374cd29db", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d", Pod:"coredns-674b8bbfcf-j8td4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2c74eaab6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:07.849480 containerd[1456]: 2025-09-12 17:42:07.803 [INFO][5661] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Sep 12 17:42:07.849480 containerd[1456]: 2025-09-12 17:42:07.803 [INFO][5661] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" iface="eth0" netns="" Sep 12 17:42:07.849480 containerd[1456]: 2025-09-12 17:42:07.803 [INFO][5661] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Sep 12 17:42:07.849480 containerd[1456]: 2025-09-12 17:42:07.803 [INFO][5661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Sep 12 17:42:07.849480 containerd[1456]: 2025-09-12 17:42:07.833 [INFO][5670] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" HandleID="k8s-pod-network.8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Workload="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:42:07.849480 containerd[1456]: 2025-09-12 17:42:07.833 [INFO][5670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:07.849480 containerd[1456]: 2025-09-12 17:42:07.833 [INFO][5670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:42:07.849480 containerd[1456]: 2025-09-12 17:42:07.841 [WARNING][5670] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" HandleID="k8s-pod-network.8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Workload="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:42:07.849480 containerd[1456]: 2025-09-12 17:42:07.841 [INFO][5670] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" HandleID="k8s-pod-network.8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Workload="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:42:07.849480 containerd[1456]: 2025-09-12 17:42:07.842 [INFO][5670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:07.849480 containerd[1456]: 2025-09-12 17:42:07.845 [INFO][5661] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Sep 12 17:42:07.850041 containerd[1456]: time="2025-09-12T17:42:07.849464487Z" level=info msg="TearDown network for sandbox \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\" successfully" Sep 12 17:42:07.850041 containerd[1456]: time="2025-09-12T17:42:07.849522867Z" level=info msg="StopPodSandbox for \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\" returns successfully" Sep 12 17:42:07.850538 containerd[1456]: time="2025-09-12T17:42:07.850496197Z" level=info msg="RemovePodSandbox for \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\"" Sep 12 17:42:07.850538 containerd[1456]: time="2025-09-12T17:42:07.850533366Z" level=info msg="Forcibly stopping sandbox \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\"" Sep 12 17:42:07.922519 containerd[1456]: 2025-09-12 17:42:07.887 [WARNING][5688] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--j8td4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9db1f756-711b-4858-9a6e-e40374cd29db", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"516d0e276e6effdfdce0ec55dc47e972ebbafb2a61697efbc3048712ca7aea6d", Pod:"coredns-674b8bbfcf-j8td4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2c74eaab6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:07.922519 containerd[1456]: 2025-09-12 17:42:07.887 [INFO][5688] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Sep 12 17:42:07.922519 containerd[1456]: 2025-09-12 17:42:07.887 [INFO][5688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" iface="eth0" netns="" Sep 12 17:42:07.922519 containerd[1456]: 2025-09-12 17:42:07.887 [INFO][5688] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Sep 12 17:42:07.922519 containerd[1456]: 2025-09-12 17:42:07.887 [INFO][5688] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Sep 12 17:42:07.922519 containerd[1456]: 2025-09-12 17:42:07.909 [INFO][5697] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" HandleID="k8s-pod-network.8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Workload="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:42:07.922519 containerd[1456]: 2025-09-12 17:42:07.910 [INFO][5697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:07.922519 containerd[1456]: 2025-09-12 17:42:07.910 [INFO][5697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:42:07.922519 containerd[1456]: 2025-09-12 17:42:07.915 [WARNING][5697] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" HandleID="k8s-pod-network.8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Workload="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:42:07.922519 containerd[1456]: 2025-09-12 17:42:07.915 [INFO][5697] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" HandleID="k8s-pod-network.8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Workload="localhost-k8s-coredns--674b8bbfcf--j8td4-eth0" Sep 12 17:42:07.922519 containerd[1456]: 2025-09-12 17:42:07.916 [INFO][5697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:07.922519 containerd[1456]: 2025-09-12 17:42:07.919 [INFO][5688] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021" Sep 12 17:42:07.923717 containerd[1456]: time="2025-09-12T17:42:07.922563865Z" level=info msg="TearDown network for sandbox \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\" successfully" Sep 12 17:42:07.926692 containerd[1456]: time="2025-09-12T17:42:07.926649525Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:42:07.926692 containerd[1456]: time="2025-09-12T17:42:07.926706372Z" level=info msg="RemovePodSandbox \"8280ce758d5cee863396b561d1f73544a5bb433612a7a6ec91ccf9382cb0d021\" returns successfully" Sep 12 17:42:07.927188 containerd[1456]: time="2025-09-12T17:42:07.927164168Z" level=info msg="StopPodSandbox for \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\"" Sep 12 17:42:08.000402 containerd[1456]: 2025-09-12 17:42:07.963 [WARNING][5714] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" WorkloadEndpoint="localhost-k8s-whisker--77d9d656dc--fm5zv-eth0" Sep 12 17:42:08.000402 containerd[1456]: 2025-09-12 17:42:07.963 [INFO][5714] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Sep 12 17:42:08.000402 containerd[1456]: 2025-09-12 17:42:07.963 [INFO][5714] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" iface="eth0" netns="" Sep 12 17:42:08.000402 containerd[1456]: 2025-09-12 17:42:07.963 [INFO][5714] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Sep 12 17:42:08.000402 containerd[1456]: 2025-09-12 17:42:07.963 [INFO][5714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Sep 12 17:42:08.000402 containerd[1456]: 2025-09-12 17:42:07.986 [INFO][5723] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" HandleID="k8s-pod-network.a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Workload="localhost-k8s-whisker--77d9d656dc--fm5zv-eth0" Sep 12 17:42:08.000402 containerd[1456]: 2025-09-12 17:42:07.986 [INFO][5723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:08.000402 containerd[1456]: 2025-09-12 17:42:07.986 [INFO][5723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:08.000402 containerd[1456]: 2025-09-12 17:42:07.992 [WARNING][5723] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" HandleID="k8s-pod-network.a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Workload="localhost-k8s-whisker--77d9d656dc--fm5zv-eth0" Sep 12 17:42:08.000402 containerd[1456]: 2025-09-12 17:42:07.992 [INFO][5723] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" HandleID="k8s-pod-network.a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Workload="localhost-k8s-whisker--77d9d656dc--fm5zv-eth0" Sep 12 17:42:08.000402 containerd[1456]: 2025-09-12 17:42:07.993 [INFO][5723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:08.000402 containerd[1456]: 2025-09-12 17:42:07.997 [INFO][5714] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Sep 12 17:42:08.001628 containerd[1456]: time="2025-09-12T17:42:08.000444028Z" level=info msg="TearDown network for sandbox \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\" successfully" Sep 12 17:42:08.001628 containerd[1456]: time="2025-09-12T17:42:08.000472721Z" level=info msg="StopPodSandbox for \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\" returns successfully" Sep 12 17:42:08.001628 containerd[1456]: time="2025-09-12T17:42:08.001029624Z" level=info msg="RemovePodSandbox for \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\"" Sep 12 17:42:08.001628 containerd[1456]: time="2025-09-12T17:42:08.001059279Z" level=info msg="Forcibly stopping sandbox \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\"" Sep 12 17:42:08.077804 containerd[1456]: 2025-09-12 17:42:08.038 [WARNING][5741] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" WorkloadEndpoint="localhost-k8s-whisker--77d9d656dc--fm5zv-eth0" Sep 12 17:42:08.077804 containerd[1456]: 2025-09-12 17:42:08.039 [INFO][5741] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Sep 12 17:42:08.077804 containerd[1456]: 2025-09-12 17:42:08.039 [INFO][5741] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" iface="eth0" netns="" Sep 12 17:42:08.077804 containerd[1456]: 2025-09-12 17:42:08.039 [INFO][5741] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Sep 12 17:42:08.077804 containerd[1456]: 2025-09-12 17:42:08.039 [INFO][5741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Sep 12 17:42:08.077804 containerd[1456]: 2025-09-12 17:42:08.061 [INFO][5750] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" HandleID="k8s-pod-network.a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Workload="localhost-k8s-whisker--77d9d656dc--fm5zv-eth0" Sep 12 17:42:08.077804 containerd[1456]: 2025-09-12 17:42:08.061 [INFO][5750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:08.077804 containerd[1456]: 2025-09-12 17:42:08.061 [INFO][5750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:08.077804 containerd[1456]: 2025-09-12 17:42:08.070 [WARNING][5750] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" HandleID="k8s-pod-network.a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Workload="localhost-k8s-whisker--77d9d656dc--fm5zv-eth0" Sep 12 17:42:08.077804 containerd[1456]: 2025-09-12 17:42:08.070 [INFO][5750] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" HandleID="k8s-pod-network.a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Workload="localhost-k8s-whisker--77d9d656dc--fm5zv-eth0" Sep 12 17:42:08.077804 containerd[1456]: 2025-09-12 17:42:08.071 [INFO][5750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:08.077804 containerd[1456]: 2025-09-12 17:42:08.074 [INFO][5741] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66" Sep 12 17:42:08.078381 containerd[1456]: time="2025-09-12T17:42:08.077832825Z" level=info msg="TearDown network for sandbox \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\" successfully" Sep 12 17:42:08.090188 containerd[1456]: time="2025-09-12T17:42:08.090035881Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:42:08.090188 containerd[1456]: time="2025-09-12T17:42:08.090134647Z" level=info msg="RemovePodSandbox \"a3c6ee90b36a6fc0b82e168043d24cb88e8684a441d7139e4f51c339c7392f66\" returns successfully" Sep 12 17:42:08.091073 containerd[1456]: time="2025-09-12T17:42:08.090718400Z" level=info msg="StopPodSandbox for \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\"" Sep 12 17:42:08.162613 containerd[1456]: 2025-09-12 17:42:08.125 [WARNING][5767] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0", GenerateName:"calico-kube-controllers-86c7c74448-", Namespace:"calico-system", SelfLink:"", UID:"53e646fe-62f1-4e56-82b1-6d2004ca48b0", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86c7c74448", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9", Pod:"calico-kube-controllers-86c7c74448-qxr9j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliba6c8b1b226", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:08.162613 containerd[1456]: 2025-09-12 17:42:08.126 [INFO][5767] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Sep 12 17:42:08.162613 containerd[1456]: 2025-09-12 17:42:08.126 [INFO][5767] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" iface="eth0" netns="" Sep 12 17:42:08.162613 containerd[1456]: 2025-09-12 17:42:08.126 [INFO][5767] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Sep 12 17:42:08.162613 containerd[1456]: 2025-09-12 17:42:08.126 [INFO][5767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Sep 12 17:42:08.162613 containerd[1456]: 2025-09-12 17:42:08.148 [INFO][5776] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" HandleID="k8s-pod-network.bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Workload="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:42:08.162613 containerd[1456]: 2025-09-12 17:42:08.148 [INFO][5776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:08.162613 containerd[1456]: 2025-09-12 17:42:08.148 [INFO][5776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:08.162613 containerd[1456]: 2025-09-12 17:42:08.155 [WARNING][5776] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" HandleID="k8s-pod-network.bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Workload="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:42:08.162613 containerd[1456]: 2025-09-12 17:42:08.155 [INFO][5776] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" HandleID="k8s-pod-network.bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Workload="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:42:08.162613 containerd[1456]: 2025-09-12 17:42:08.156 [INFO][5776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:08.162613 containerd[1456]: 2025-09-12 17:42:08.159 [INFO][5767] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Sep 12 17:42:08.162613 containerd[1456]: time="2025-09-12T17:42:08.162580477Z" level=info msg="TearDown network for sandbox \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\" successfully" Sep 12 17:42:08.162613 containerd[1456]: time="2025-09-12T17:42:08.162610123Z" level=info msg="StopPodSandbox for \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\" returns successfully" Sep 12 17:42:08.164203 containerd[1456]: time="2025-09-12T17:42:08.163380528Z" level=info msg="RemovePodSandbox for \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\"" Sep 12 17:42:08.164203 containerd[1456]: time="2025-09-12T17:42:08.163431715Z" level=info msg="Forcibly stopping sandbox \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\"" Sep 12 17:42:08.231712 containerd[1456]: 2025-09-12 17:42:08.197 [WARNING][5794] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0", GenerateName:"calico-kube-controllers-86c7c74448-", Namespace:"calico-system", SelfLink:"", UID:"53e646fe-62f1-4e56-82b1-6d2004ca48b0", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86c7c74448", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"077fa35a05cc002cc3cb158b10b702f7f53b7ee2b9c8a7b4065425e5cc032ee9", Pod:"calico-kube-controllers-86c7c74448-qxr9j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliba6c8b1b226", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:08.231712 containerd[1456]: 2025-09-12 17:42:08.197 [INFO][5794] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Sep 12 17:42:08.231712 containerd[1456]: 2025-09-12 17:42:08.197 [INFO][5794] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" iface="eth0" netns="" Sep 12 17:42:08.231712 containerd[1456]: 2025-09-12 17:42:08.197 [INFO][5794] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Sep 12 17:42:08.231712 containerd[1456]: 2025-09-12 17:42:08.197 [INFO][5794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Sep 12 17:42:08.231712 containerd[1456]: 2025-09-12 17:42:08.218 [INFO][5804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" HandleID="k8s-pod-network.bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Workload="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:42:08.231712 containerd[1456]: 2025-09-12 17:42:08.218 [INFO][5804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:08.231712 containerd[1456]: 2025-09-12 17:42:08.218 [INFO][5804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:08.231712 containerd[1456]: 2025-09-12 17:42:08.224 [WARNING][5804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" HandleID="k8s-pod-network.bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Workload="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:42:08.231712 containerd[1456]: 2025-09-12 17:42:08.224 [INFO][5804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" HandleID="k8s-pod-network.bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Workload="localhost-k8s-calico--kube--controllers--86c7c74448--qxr9j-eth0" Sep 12 17:42:08.231712 containerd[1456]: 2025-09-12 17:42:08.226 [INFO][5804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:08.231712 containerd[1456]: 2025-09-12 17:42:08.228 [INFO][5794] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb" Sep 12 17:42:08.232205 containerd[1456]: time="2025-09-12T17:42:08.231760767Z" level=info msg="TearDown network for sandbox \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\" successfully" Sep 12 17:42:08.300014 containerd[1456]: time="2025-09-12T17:42:08.299930187Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:42:08.300096 containerd[1456]: time="2025-09-12T17:42:08.300058780Z" level=info msg="RemovePodSandbox \"bc1259ed194350e754b186b9a19b2fcc54b937f1c7440671d1bbb08cd14bb7eb\" returns successfully" Sep 12 17:42:08.302334 containerd[1456]: time="2025-09-12T17:42:08.302288433Z" level=info msg="StopPodSandbox for \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\"" Sep 12 17:42:08.461115 containerd[1456]: 2025-09-12 17:42:08.425 [WARNING][5822] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--545s4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"212a184c-5a29-4b83-a7ee-b13bac18f280", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e", Pod:"coredns-674b8bbfcf-545s4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali908fef8736c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:08.461115 containerd[1456]: 2025-09-12 17:42:08.426 [INFO][5822] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Sep 12 17:42:08.461115 containerd[1456]: 2025-09-12 17:42:08.426 [INFO][5822] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" iface="eth0" netns="" Sep 12 17:42:08.461115 containerd[1456]: 2025-09-12 17:42:08.426 [INFO][5822] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Sep 12 17:42:08.461115 containerd[1456]: 2025-09-12 17:42:08.426 [INFO][5822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Sep 12 17:42:08.461115 containerd[1456]: 2025-09-12 17:42:08.447 [INFO][5831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" HandleID="k8s-pod-network.70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Workload="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:42:08.461115 containerd[1456]: 2025-09-12 17:42:08.448 [INFO][5831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:08.461115 containerd[1456]: 2025-09-12 17:42:08.448 [INFO][5831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:42:08.461115 containerd[1456]: 2025-09-12 17:42:08.453 [WARNING][5831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" HandleID="k8s-pod-network.70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Workload="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:42:08.461115 containerd[1456]: 2025-09-12 17:42:08.454 [INFO][5831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" HandleID="k8s-pod-network.70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Workload="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:42:08.461115 containerd[1456]: 2025-09-12 17:42:08.455 [INFO][5831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:08.461115 containerd[1456]: 2025-09-12 17:42:08.458 [INFO][5822] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Sep 12 17:42:08.461572 containerd[1456]: time="2025-09-12T17:42:08.461168049Z" level=info msg="TearDown network for sandbox \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\" successfully" Sep 12 17:42:08.461572 containerd[1456]: time="2025-09-12T17:42:08.461197183Z" level=info msg="StopPodSandbox for \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\" returns successfully" Sep 12 17:42:08.461859 containerd[1456]: time="2025-09-12T17:42:08.461834809Z" level=info msg="RemovePodSandbox for \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\"" Sep 12 17:42:08.461904 containerd[1456]: time="2025-09-12T17:42:08.461864775Z" level=info msg="Forcibly stopping sandbox \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\"" Sep 12 17:42:08.559805 containerd[1456]: 2025-09-12 17:42:08.521 [WARNING][5848] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--545s4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"212a184c-5a29-4b83-a7ee-b13bac18f280", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3075cf08dc04273e10bf1953b469a7f3ad8eb66ef58bda20970bf7891d10b95e", Pod:"coredns-674b8bbfcf-545s4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali908fef8736c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:08.559805 containerd[1456]: 2025-09-12 17:42:08.521 [INFO][5848] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Sep 12 17:42:08.559805 containerd[1456]: 2025-09-12 17:42:08.521 [INFO][5848] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" iface="eth0" netns="" Sep 12 17:42:08.559805 containerd[1456]: 2025-09-12 17:42:08.524 [INFO][5848] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Sep 12 17:42:08.559805 containerd[1456]: 2025-09-12 17:42:08.524 [INFO][5848] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Sep 12 17:42:08.559805 containerd[1456]: 2025-09-12 17:42:08.545 [INFO][5857] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" HandleID="k8s-pod-network.70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Workload="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:42:08.559805 containerd[1456]: 2025-09-12 17:42:08.546 [INFO][5857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:08.559805 containerd[1456]: 2025-09-12 17:42:08.546 [INFO][5857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:42:08.559805 containerd[1456]: 2025-09-12 17:42:08.552 [WARNING][5857] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" HandleID="k8s-pod-network.70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Workload="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:42:08.559805 containerd[1456]: 2025-09-12 17:42:08.552 [INFO][5857] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" HandleID="k8s-pod-network.70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Workload="localhost-k8s-coredns--674b8bbfcf--545s4-eth0" Sep 12 17:42:08.559805 containerd[1456]: 2025-09-12 17:42:08.553 [INFO][5857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:08.559805 containerd[1456]: 2025-09-12 17:42:08.556 [INFO][5848] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed" Sep 12 17:42:08.560429 containerd[1456]: time="2025-09-12T17:42:08.559867679Z" level=info msg="TearDown network for sandbox \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\" successfully" Sep 12 17:42:08.571783 containerd[1456]: time="2025-09-12T17:42:08.571713599Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:42:08.571893 containerd[1456]: time="2025-09-12T17:42:08.571796516Z" level=info msg="RemovePodSandbox \"70aff4d52527fb027bb1cc2aad49e987d980915633139ba8ffe61d02e188dbed\" returns successfully" Sep 12 17:42:08.572742 containerd[1456]: time="2025-09-12T17:42:08.572433079Z" level=info msg="StopPodSandbox for \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\"" Sep 12 17:42:08.646416 containerd[1456]: 2025-09-12 17:42:08.608 [WARNING][5876] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lplbc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bc16a13-1355-4530-b199-c12f8c96fcdd", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5", Pod:"csi-node-driver-lplbc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia888803215c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:08.646416 containerd[1456]: 2025-09-12 17:42:08.609 [INFO][5876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Sep 12 17:42:08.646416 containerd[1456]: 2025-09-12 17:42:08.609 [INFO][5876] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" iface="eth0" netns="" Sep 12 17:42:08.646416 containerd[1456]: 2025-09-12 17:42:08.609 [INFO][5876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Sep 12 17:42:08.646416 containerd[1456]: 2025-09-12 17:42:08.609 [INFO][5876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Sep 12 17:42:08.646416 containerd[1456]: 2025-09-12 17:42:08.631 [INFO][5884] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" HandleID="k8s-pod-network.fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Workload="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:42:08.646416 containerd[1456]: 2025-09-12 17:42:08.631 [INFO][5884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:08.646416 containerd[1456]: 2025-09-12 17:42:08.631 [INFO][5884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:08.646416 containerd[1456]: 2025-09-12 17:42:08.638 [WARNING][5884] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" HandleID="k8s-pod-network.fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Workload="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:42:08.646416 containerd[1456]: 2025-09-12 17:42:08.638 [INFO][5884] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" HandleID="k8s-pod-network.fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Workload="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:42:08.646416 containerd[1456]: 2025-09-12 17:42:08.640 [INFO][5884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:08.646416 containerd[1456]: 2025-09-12 17:42:08.643 [INFO][5876] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Sep 12 17:42:08.647550 containerd[1456]: time="2025-09-12T17:42:08.646464874Z" level=info msg="TearDown network for sandbox \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\" successfully" Sep 12 17:42:08.647550 containerd[1456]: time="2025-09-12T17:42:08.646502516Z" level=info msg="StopPodSandbox for \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\" returns successfully" Sep 12 17:42:08.647550 containerd[1456]: time="2025-09-12T17:42:08.647101737Z" level=info msg="RemovePodSandbox for \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\"" Sep 12 17:42:08.647550 containerd[1456]: time="2025-09-12T17:42:08.647130211Z" level=info msg="Forcibly stopping sandbox \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\"" Sep 12 17:42:08.718156 containerd[1456]: 2025-09-12 17:42:08.682 [WARNING][5902] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lplbc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bc16a13-1355-4530-b199-c12f8c96fcdd", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5", Pod:"csi-node-driver-lplbc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia888803215c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:08.718156 containerd[1456]: 2025-09-12 17:42:08.683 [INFO][5902] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Sep 12 17:42:08.718156 containerd[1456]: 2025-09-12 17:42:08.683 [INFO][5902] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" iface="eth0" netns="" Sep 12 17:42:08.718156 containerd[1456]: 2025-09-12 17:42:08.683 [INFO][5902] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Sep 12 17:42:08.718156 containerd[1456]: 2025-09-12 17:42:08.683 [INFO][5902] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Sep 12 17:42:08.718156 containerd[1456]: 2025-09-12 17:42:08.704 [INFO][5910] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" HandleID="k8s-pod-network.fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Workload="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:42:08.718156 containerd[1456]: 2025-09-12 17:42:08.704 [INFO][5910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:08.718156 containerd[1456]: 2025-09-12 17:42:08.704 [INFO][5910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:08.718156 containerd[1456]: 2025-09-12 17:42:08.710 [WARNING][5910] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" HandleID="k8s-pod-network.fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Workload="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:42:08.718156 containerd[1456]: 2025-09-12 17:42:08.710 [INFO][5910] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" HandleID="k8s-pod-network.fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Workload="localhost-k8s-csi--node--driver--lplbc-eth0" Sep 12 17:42:08.718156 containerd[1456]: 2025-09-12 17:42:08.712 [INFO][5910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:08.718156 containerd[1456]: 2025-09-12 17:42:08.714 [INFO][5902] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83" Sep 12 17:42:08.718156 containerd[1456]: time="2025-09-12T17:42:08.718114179Z" level=info msg="TearDown network for sandbox \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\" successfully" Sep 12 17:42:08.722425 containerd[1456]: time="2025-09-12T17:42:08.722362966Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:42:08.722425 containerd[1456]: time="2025-09-12T17:42:08.722420325Z" level=info msg="RemovePodSandbox \"fca4b97d889e19df8b9435244945818a1b5e0c350b3e39f1f3273f6b394d6c83\" returns successfully" Sep 12 17:42:08.723025 containerd[1456]: time="2025-09-12T17:42:08.722985753Z" level=info msg="StopPodSandbox for \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\"" Sep 12 17:42:08.800579 containerd[1456]: 2025-09-12 17:42:08.762 [WARNING][5927] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--hrtwm-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5842d66e-40a9-47c3-82c8-fa74a7aa357d", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108", Pod:"goldmane-54d579b49d-hrtwm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califa9cc941912", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:08.800579 containerd[1456]: 2025-09-12 17:42:08.764 [INFO][5927] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Sep 12 17:42:08.800579 containerd[1456]: 2025-09-12 17:42:08.764 [INFO][5927] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" iface="eth0" netns="" Sep 12 17:42:08.800579 containerd[1456]: 2025-09-12 17:42:08.764 [INFO][5927] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Sep 12 17:42:08.800579 containerd[1456]: 2025-09-12 17:42:08.764 [INFO][5927] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Sep 12 17:42:08.800579 containerd[1456]: 2025-09-12 17:42:08.785 [INFO][5937] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" HandleID="k8s-pod-network.7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Workload="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:42:08.800579 containerd[1456]: 2025-09-12 17:42:08.785 [INFO][5937] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:08.800579 containerd[1456]: 2025-09-12 17:42:08.785 [INFO][5937] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:08.800579 containerd[1456]: 2025-09-12 17:42:08.793 [WARNING][5937] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" HandleID="k8s-pod-network.7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Workload="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:42:08.800579 containerd[1456]: 2025-09-12 17:42:08.793 [INFO][5937] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" HandleID="k8s-pod-network.7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Workload="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:42:08.800579 containerd[1456]: 2025-09-12 17:42:08.794 [INFO][5937] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:08.800579 containerd[1456]: 2025-09-12 17:42:08.797 [INFO][5927] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Sep 12 17:42:08.801062 containerd[1456]: time="2025-09-12T17:42:08.800629723Z" level=info msg="TearDown network for sandbox \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\" successfully" Sep 12 17:42:08.801062 containerd[1456]: time="2025-09-12T17:42:08.800657836Z" level=info msg="StopPodSandbox for \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\" returns successfully" Sep 12 17:42:08.801421 containerd[1456]: time="2025-09-12T17:42:08.801324666Z" level=info msg="RemovePodSandbox for \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\"" Sep 12 17:42:08.801421 containerd[1456]: time="2025-09-12T17:42:08.801364902Z" level=info msg="Forcibly stopping sandbox \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\"" Sep 12 17:42:08.870634 containerd[1456]: 2025-09-12 17:42:08.837 [WARNING][5955] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--hrtwm-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5842d66e-40a9-47c3-82c8-fa74a7aa357d", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb4ee46b37dd944e682116ee2ee340ee0f00a3edb73952cb49cf133c275b8108", Pod:"goldmane-54d579b49d-hrtwm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califa9cc941912", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:08.870634 containerd[1456]: 2025-09-12 17:42:08.837 [INFO][5955] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Sep 12 17:42:08.870634 containerd[1456]: 2025-09-12 17:42:08.837 [INFO][5955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" iface="eth0" netns="" Sep 12 17:42:08.870634 containerd[1456]: 2025-09-12 17:42:08.837 [INFO][5955] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Sep 12 17:42:08.870634 containerd[1456]: 2025-09-12 17:42:08.837 [INFO][5955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Sep 12 17:42:08.870634 containerd[1456]: 2025-09-12 17:42:08.857 [INFO][5964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" HandleID="k8s-pod-network.7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Workload="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:42:08.870634 containerd[1456]: 2025-09-12 17:42:08.857 [INFO][5964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:08.870634 containerd[1456]: 2025-09-12 17:42:08.858 [INFO][5964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:08.870634 containerd[1456]: 2025-09-12 17:42:08.864 [WARNING][5964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" HandleID="k8s-pod-network.7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Workload="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:42:08.870634 containerd[1456]: 2025-09-12 17:42:08.864 [INFO][5964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" HandleID="k8s-pod-network.7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Workload="localhost-k8s-goldmane--54d579b49d--hrtwm-eth0" Sep 12 17:42:08.870634 containerd[1456]: 2025-09-12 17:42:08.865 [INFO][5964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:08.870634 containerd[1456]: 2025-09-12 17:42:08.867 [INFO][5955] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4" Sep 12 17:42:08.871073 containerd[1456]: time="2025-09-12T17:42:08.870700014Z" level=info msg="TearDown network for sandbox \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\" successfully" Sep 12 17:42:09.036315 containerd[1456]: time="2025-09-12T17:42:09.036144232Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:42:09.036315 containerd[1456]: time="2025-09-12T17:42:09.036253709Z" level=info msg="RemovePodSandbox \"7604f0ce7e2d00e450b7bea34853133d15245ddc60e03e900f13abc8370dddf4\" returns successfully" Sep 12 17:42:09.036739 containerd[1456]: time="2025-09-12T17:42:09.036632083Z" level=info msg="StopPodSandbox for \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\"" Sep 12 17:42:09.200526 containerd[1456]: 2025-09-12 17:42:09.125 [WARNING][5981] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0", GenerateName:"calico-apiserver-6dc89697db-", Namespace:"calico-apiserver", SelfLink:"", UID:"1dd46bd3-f845-44a4-81fe-9f42d3bc81d7", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc89697db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd", Pod:"calico-apiserver-6dc89697db-nnwcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali28de6226a93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:09.200526 containerd[1456]: 2025-09-12 17:42:09.126 [INFO][5981] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Sep 12 17:42:09.200526 containerd[1456]: 2025-09-12 17:42:09.126 [INFO][5981] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" iface="eth0" netns="" Sep 12 17:42:09.200526 containerd[1456]: 2025-09-12 17:42:09.126 [INFO][5981] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Sep 12 17:42:09.200526 containerd[1456]: 2025-09-12 17:42:09.126 [INFO][5981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Sep 12 17:42:09.200526 containerd[1456]: 2025-09-12 17:42:09.185 [INFO][5993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" HandleID="k8s-pod-network.425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Workload="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:42:09.200526 containerd[1456]: 2025-09-12 17:42:09.185 [INFO][5993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:09.200526 containerd[1456]: 2025-09-12 17:42:09.185 [INFO][5993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:09.200526 containerd[1456]: 2025-09-12 17:42:09.190 [WARNING][5993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" HandleID="k8s-pod-network.425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Workload="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:42:09.200526 containerd[1456]: 2025-09-12 17:42:09.190 [INFO][5993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" HandleID="k8s-pod-network.425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Workload="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:42:09.200526 containerd[1456]: 2025-09-12 17:42:09.192 [INFO][5993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:09.200526 containerd[1456]: 2025-09-12 17:42:09.195 [INFO][5981] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Sep 12 17:42:09.200526 containerd[1456]: time="2025-09-12T17:42:09.200354056Z" level=info msg="TearDown network for sandbox \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\" successfully" Sep 12 17:42:09.200526 containerd[1456]: time="2025-09-12T17:42:09.200384173Z" level=info msg="StopPodSandbox for \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\" returns successfully" Sep 12 17:42:09.201983 containerd[1456]: time="2025-09-12T17:42:09.200987774Z" level=info msg="RemovePodSandbox for \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\"" Sep 12 17:42:09.201983 containerd[1456]: time="2025-09-12T17:42:09.201023220Z" level=info msg="Forcibly stopping sandbox \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\"" Sep 12 17:42:09.282863 containerd[1456]: 2025-09-12 17:42:09.238 [WARNING][6014] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0", GenerateName:"calico-apiserver-6dc89697db-", Namespace:"calico-apiserver", SelfLink:"", UID:"1dd46bd3-f845-44a4-81fe-9f42d3bc81d7", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc89697db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"219f4c3e4970c497c8e03f94efe438e7efd81f42c17a54b25264b9b3400a04fd", Pod:"calico-apiserver-6dc89697db-nnwcs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali28de6226a93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:42:09.282863 containerd[1456]: 2025-09-12 17:42:09.239 [INFO][6014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Sep 12 17:42:09.282863 containerd[1456]: 2025-09-12 17:42:09.239 [INFO][6014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" iface="eth0" netns="" Sep 12 17:42:09.282863 containerd[1456]: 2025-09-12 17:42:09.239 [INFO][6014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Sep 12 17:42:09.282863 containerd[1456]: 2025-09-12 17:42:09.239 [INFO][6014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Sep 12 17:42:09.282863 containerd[1456]: 2025-09-12 17:42:09.264 [INFO][6023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" HandleID="k8s-pod-network.425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Workload="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:42:09.282863 containerd[1456]: 2025-09-12 17:42:09.265 [INFO][6023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:42:09.282863 containerd[1456]: 2025-09-12 17:42:09.265 [INFO][6023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:42:09.282863 containerd[1456]: 2025-09-12 17:42:09.275 [WARNING][6023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" HandleID="k8s-pod-network.425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Workload="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:42:09.282863 containerd[1456]: 2025-09-12 17:42:09.275 [INFO][6023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" HandleID="k8s-pod-network.425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Workload="localhost-k8s-calico--apiserver--6dc89697db--nnwcs-eth0" Sep 12 17:42:09.282863 containerd[1456]: 2025-09-12 17:42:09.276 [INFO][6023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:42:09.282863 containerd[1456]: 2025-09-12 17:42:09.279 [INFO][6014] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b" Sep 12 17:42:09.283326 containerd[1456]: time="2025-09-12T17:42:09.282893419Z" level=info msg="TearDown network for sandbox \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\" successfully" Sep 12 17:42:09.287629 containerd[1456]: time="2025-09-12T17:42:09.287550956Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:42:09.287629 containerd[1456]: time="2025-09-12T17:42:09.287615659Z" level=info msg="RemovePodSandbox \"425a23f57be27aaaf510fc0188d44498c64b19aab9dd1fa2a6d6c5cb8965102b\" returns successfully" Sep 12 17:42:09.351399 containerd[1456]: time="2025-09-12T17:42:09.351338498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:09.352105 containerd[1456]: time="2025-09-12T17:42:09.352040254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:42:09.353207 containerd[1456]: time="2025-09-12T17:42:09.353170710Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:09.355282 containerd[1456]: time="2025-09-12T17:42:09.355249837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:42:09.355900 containerd[1456]: time="2025-09-12T17:42:09.355867573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.890169711s" Sep 12 17:42:09.355929 containerd[1456]: time="2025-09-12T17:42:09.355897290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:42:09.360361 containerd[1456]: time="2025-09-12T17:42:09.360316838Z" level=info 
msg="CreateContainer within sandbox \"362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:42:09.372677 containerd[1456]: time="2025-09-12T17:42:09.372623064Z" level=info msg="CreateContainer within sandbox \"362c90275ab188d07a88d765ad89f8e92f0fb83108a94edf10963b3353d5a0a5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a76a9b1d8c7b398ba51153aa135c48567e48ebba1d0e3cfcfb80ee9949db95be\"" Sep 12 17:42:09.373114 containerd[1456]: time="2025-09-12T17:42:09.373041855Z" level=info msg="StartContainer for \"a76a9b1d8c7b398ba51153aa135c48567e48ebba1d0e3cfcfb80ee9949db95be\"" Sep 12 17:42:09.414416 systemd[1]: Started cri-containerd-a76a9b1d8c7b398ba51153aa135c48567e48ebba1d0e3cfcfb80ee9949db95be.scope - libcontainer container a76a9b1d8c7b398ba51153aa135c48567e48ebba1d0e3cfcfb80ee9949db95be. Sep 12 17:42:09.445986 containerd[1456]: time="2025-09-12T17:42:09.445943673Z" level=info msg="StartContainer for \"a76a9b1d8c7b398ba51153aa135c48567e48ebba1d0e3cfcfb80ee9949db95be\" returns successfully" Sep 12 17:42:10.443959 kubelet[2519]: I0912 17:42:10.443903 2519 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:42:10.445936 kubelet[2519]: I0912 17:42:10.445707 2519 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:42:12.436881 systemd[1]: Started sshd@14-10.0.0.139:22-10.0.0.1:55286.service - OpenSSH per-connection server daemon (10.0.0.1:55286). Sep 12 17:42:12.494500 sshd[6077]: Accepted publickey for core from 10.0.0.1 port 55286 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:42:12.496663 sshd[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:12.501867 systemd-logind[1445]: New session 15 of user core. Sep 12 17:42:12.509379 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:42:12.637100 sshd[6077]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:12.642529 systemd[1]: sshd@14-10.0.0.139:22-10.0.0.1:55286.service: Deactivated successfully. Sep 12 17:42:12.644814 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:42:12.645509 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:42:12.647038 systemd-logind[1445]: Removed session 15. Sep 12 17:42:17.651222 systemd[1]: Started sshd@15-10.0.0.139:22-10.0.0.1:55298.service - OpenSSH per-connection server daemon (10.0.0.1:55298). Sep 12 17:42:17.690770 sshd[6096]: Accepted publickey for core from 10.0.0.1 port 55298 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:42:17.692477 sshd[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:17.696676 systemd-logind[1445]: New session 16 of user core. Sep 12 17:42:17.707401 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:42:17.873202 sshd[6096]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:17.877553 systemd[1]: sshd@15-10.0.0.139:22-10.0.0.1:55298.service: Deactivated successfully. Sep 12 17:42:17.879955 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:42:17.880693 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. 
Sep 12 17:42:17.881613 systemd-logind[1445]: Removed session 16.
Sep 12 17:42:19.688759 kubelet[2519]: I0912 17:42:19.687611 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lplbc" podStartSLOduration=38.522689068 podStartE2EDuration="53.687588945s" podCreationTimestamp="2025-09-12 17:41:26 +0000 UTC" firstStartedPulling="2025-09-12 17:41:54.191683719 +0000 UTC m=+46.967881349" lastFinishedPulling="2025-09-12 17:42:09.356583606 +0000 UTC m=+62.132781226" observedRunningTime="2025-09-12 17:42:09.768820691 +0000 UTC m=+62.545018331" watchObservedRunningTime="2025-09-12 17:42:19.687588945 +0000 UTC m=+72.463786565"
Sep 12 17:42:20.322427 kubelet[2519]: E0912 17:42:20.322353 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:42:22.895087 systemd[1]: Started sshd@16-10.0.0.139:22-10.0.0.1:49136.service - OpenSSH per-connection server daemon (10.0.0.1:49136).
Sep 12 17:42:22.937485 sshd[6134]: Accepted publickey for core from 10.0.0.1 port 49136 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc
Sep 12 17:42:22.939503 sshd[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:42:22.944736 systemd-logind[1445]: New session 17 of user core.
Sep 12 17:42:22.958439 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:42:23.086071 sshd[6134]: pam_unix(sshd:session): session closed for user core
Sep 12 17:42:23.090926 systemd[1]: sshd@16-10.0.0.139:22-10.0.0.1:49136.service: Deactivated successfully.
Sep 12 17:42:23.093431 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:42:23.094185 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:42:23.095362 systemd-logind[1445]: Removed session 17.
Sep 12 17:42:26.323034 kubelet[2519]: E0912 17:42:26.322969 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:42:27.205265 kubelet[2519]: I0912 17:42:27.204625 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:42:28.100985 systemd[1]: Started sshd@17-10.0.0.139:22-10.0.0.1:49138.service - OpenSSH per-connection server daemon (10.0.0.1:49138).
Sep 12 17:42:28.159500 sshd[6152]: Accepted publickey for core from 10.0.0.1 port 49138 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc
Sep 12 17:42:28.161656 sshd[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:42:28.166291 systemd-logind[1445]: New session 18 of user core.
Sep 12 17:42:28.180381 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:42:28.443787 sshd[6152]: pam_unix(sshd:session): session closed for user core
Sep 12 17:42:28.455304 systemd[1]: sshd@17-10.0.0.139:22-10.0.0.1:49138.service: Deactivated successfully.
Sep 12 17:42:28.458439 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:42:28.461014 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:42:28.469733 systemd[1]: Started sshd@18-10.0.0.139:22-10.0.0.1:49140.service - OpenSSH per-connection server daemon (10.0.0.1:49140).
Sep 12 17:42:28.472025 systemd-logind[1445]: Removed session 18.
Sep 12 17:42:28.506023 sshd[6166]: Accepted publickey for core from 10.0.0.1 port 49140 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc
Sep 12 17:42:28.508341 sshd[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:42:28.514004 systemd-logind[1445]: New session 19 of user core.
Sep 12 17:42:28.530517 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:42:28.893217 sshd[6166]: pam_unix(sshd:session): session closed for user core
Sep 12 17:42:28.909712 systemd[1]: sshd@18-10.0.0.139:22-10.0.0.1:49140.service: Deactivated successfully.
Sep 12 17:42:28.912194 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:42:28.914361 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:42:28.920513 systemd[1]: Started sshd@19-10.0.0.139:22-10.0.0.1:49156.service - OpenSSH per-connection server daemon (10.0.0.1:49156).
Sep 12 17:42:28.922047 systemd-logind[1445]: Removed session 19.
Sep 12 17:42:28.973508 sshd[6178]: Accepted publickey for core from 10.0.0.1 port 49156 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc
Sep 12 17:42:28.975579 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:42:28.980795 systemd-logind[1445]: New session 20 of user core.
Sep 12 17:42:28.987486 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:42:29.660393 sshd[6178]: pam_unix(sshd:session): session closed for user core
Sep 12 17:42:29.670797 systemd[1]: sshd@19-10.0.0.139:22-10.0.0.1:49156.service: Deactivated successfully.
Sep 12 17:42:29.673165 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:42:29.676338 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:42:29.683236 systemd[1]: Started sshd@20-10.0.0.139:22-10.0.0.1:49166.service - OpenSSH per-connection server daemon (10.0.0.1:49166).
Sep 12 17:42:29.686789 systemd-logind[1445]: Removed session 20.
Sep 12 17:42:29.726818 sshd[6200]: Accepted publickey for core from 10.0.0.1 port 49166 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc
Sep 12 17:42:29.728726 sshd[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:42:29.733945 systemd-logind[1445]: New session 21 of user core.
Sep 12 17:42:29.744508 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:42:30.138272 sshd[6200]: pam_unix(sshd:session): session closed for user core
Sep 12 17:42:30.148290 systemd[1]: sshd@20-10.0.0.139:22-10.0.0.1:49166.service: Deactivated successfully.
Sep 12 17:42:30.150821 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:42:30.156436 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:42:30.165491 systemd[1]: Started sshd@21-10.0.0.139:22-10.0.0.1:44304.service - OpenSSH per-connection server daemon (10.0.0.1:44304).
Sep 12 17:42:30.168113 systemd-logind[1445]: Removed session 21.
Sep 12 17:42:30.213628 sshd[6212]: Accepted publickey for core from 10.0.0.1 port 44304 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc
Sep 12 17:42:30.215562 sshd[6212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:42:30.221553 systemd-logind[1445]: New session 22 of user core.
Sep 12 17:42:30.227406 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:42:30.322518 kubelet[2519]: E0912 17:42:30.322462 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:42:30.363141 sshd[6212]: pam_unix(sshd:session): session closed for user core
Sep 12 17:42:30.368320 systemd[1]: sshd@21-10.0.0.139:22-10.0.0.1:44304.service: Deactivated successfully.
Sep 12 17:42:30.370743 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:42:30.371432 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:42:30.372301 systemd-logind[1445]: Removed session 22.
Sep 12 17:42:35.378130 systemd[1]: Started sshd@22-10.0.0.139:22-10.0.0.1:44306.service - OpenSSH per-connection server daemon (10.0.0.1:44306).
Sep 12 17:42:35.428269 sshd[6254]: Accepted publickey for core from 10.0.0.1 port 44306 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc
Sep 12 17:42:35.431083 sshd[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:42:35.437908 systemd-logind[1445]: New session 23 of user core.
Sep 12 17:42:35.452491 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:42:35.656643 sshd[6254]: pam_unix(sshd:session): session closed for user core
Sep 12 17:42:35.662048 systemd[1]: sshd@22-10.0.0.139:22-10.0.0.1:44306.service: Deactivated successfully.
Sep 12 17:42:35.664613 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:42:35.665449 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:42:35.666683 systemd-logind[1445]: Removed session 23.
Sep 12 17:42:36.388465 kubelet[2519]: I0912 17:42:36.388382 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:42:40.322932 kubelet[2519]: E0912 17:42:40.322847 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:42:40.675222 systemd[1]: Started sshd@23-10.0.0.139:22-10.0.0.1:54044.service - OpenSSH per-connection server daemon (10.0.0.1:54044).
Sep 12 17:42:40.730684 sshd[6291]: Accepted publickey for core from 10.0.0.1 port 54044 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc
Sep 12 17:42:40.733375 sshd[6291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:42:40.738279 systemd-logind[1445]: New session 24 of user core.
Sep 12 17:42:40.746392 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:42:40.973988 sshd[6291]: pam_unix(sshd:session): session closed for user core
Sep 12 17:42:40.978844 systemd[1]: sshd@23-10.0.0.139:22-10.0.0.1:54044.service: Deactivated successfully.
Sep 12 17:42:40.981220 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:42:40.982452 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:42:40.983988 systemd-logind[1445]: Removed session 24.
Sep 12 17:42:46.004715 systemd[1]: Started sshd@24-10.0.0.139:22-10.0.0.1:54060.service - OpenSSH per-connection server daemon (10.0.0.1:54060).
Sep 12 17:42:46.045098 sshd[6309]: Accepted publickey for core from 10.0.0.1 port 54060 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc
Sep 12 17:42:46.047918 sshd[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:42:46.054170 systemd-logind[1445]: New session 25 of user core.
Sep 12 17:42:46.062607 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:42:46.494656 sshd[6309]: pam_unix(sshd:session): session closed for user core
Sep 12 17:42:46.499817 systemd[1]: sshd@24-10.0.0.139:22-10.0.0.1:54060.service: Deactivated successfully.
Sep 12 17:42:46.502165 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:42:46.502872 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:42:46.503834 systemd-logind[1445]: Removed session 25.