Sep 10 00:31:14.927607 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 22:56:44 -00 2025
Sep 10 00:31:14.927642 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:31:14.927656 kernel: BIOS-provided physical RAM map:
Sep 10 00:31:14.927662 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 10 00:31:14.927669 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 10 00:31:14.927675 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 10 00:31:14.927683 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 10 00:31:14.927689 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 10 00:31:14.927696 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 10 00:31:14.927705 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 10 00:31:14.927712 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 10 00:31:14.927718 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 10 00:31:14.927728 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 10 00:31:14.927735 kernel: NX (Execute Disable) protection: active
Sep 10 00:31:14.927743 kernel: APIC: Static calls initialized
Sep 10 00:31:14.927755 kernel: SMBIOS 2.8 present.
Sep 10 00:31:14.927762 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 10 00:31:14.927769 kernel: Hypervisor detected: KVM
Sep 10 00:31:14.927776 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 10 00:31:14.927783 kernel: kvm-clock: using sched offset of 2937071266 cycles
Sep 10 00:31:14.927791 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 10 00:31:14.927799 kernel: tsc: Detected 2794.748 MHz processor
Sep 10 00:31:14.927806 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 10 00:31:14.927814 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 10 00:31:14.927824 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 10 00:31:14.927832 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 10 00:31:14.927839 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 10 00:31:14.927846 kernel: Using GB pages for direct mapping
Sep 10 00:31:14.927853 kernel: ACPI: Early table checksum verification disabled
Sep 10 00:31:14.927861 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 10 00:31:14.927868 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:31:14.927875 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:31:14.927883 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:31:14.927892 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 10 00:31:14.927900 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:31:14.927907 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:31:14.927915 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:31:14.927922 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:31:14.927929 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 10 00:31:14.927936 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 10 00:31:14.927984 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 10 00:31:14.927994 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 10 00:31:14.928002 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 10 00:31:14.928010 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 10 00:31:14.928017 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 10 00:31:14.928027 kernel: No NUMA configuration found
Sep 10 00:31:14.928035 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 10 00:31:14.928046 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 10 00:31:14.928053 kernel: Zone ranges:
Sep 10 00:31:14.928061 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 10 00:31:14.928069 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 10 00:31:14.928076 kernel: Normal empty
Sep 10 00:31:14.928084 kernel: Movable zone start for each node
Sep 10 00:31:14.928091 kernel: Early memory node ranges
Sep 10 00:31:14.928098 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 10 00:31:14.928106 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 10 00:31:14.928113 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 10 00:31:14.928124 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 10 00:31:14.928134 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 10 00:31:14.928142 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 10 00:31:14.928149 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 10 00:31:14.928157 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 10 00:31:14.928164 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 10 00:31:14.928172 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 10 00:31:14.928179 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 10 00:31:14.928187 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 10 00:31:14.928197 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 10 00:31:14.928205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 10 00:31:14.928213 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 10 00:31:14.928220 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 10 00:31:14.928228 kernel: TSC deadline timer available
Sep 10 00:31:14.928235 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 10 00:31:14.928243 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 10 00:31:14.928250 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 10 00:31:14.928260 kernel: kvm-guest: setup PV sched yield
Sep 10 00:31:14.928271 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 10 00:31:14.928278 kernel: Booting paravirtualized kernel on KVM
Sep 10 00:31:14.928286 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 10 00:31:14.928294 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 10 00:31:14.928302 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 10 00:31:14.928309 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 10 00:31:14.928317 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 10 00:31:14.928324 kernel: kvm-guest: PV spinlocks enabled
Sep 10 00:31:14.928332 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 10 00:31:14.928343 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:31:14.928351 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 00:31:14.928358 kernel: random: crng init done
Sep 10 00:31:14.928366 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 00:31:14.928374 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 00:31:14.928381 kernel: Fallback order for Node 0: 0
Sep 10 00:31:14.928389 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 10 00:31:14.928402 kernel: Policy zone: DMA32
Sep 10 00:31:14.928413 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 00:31:14.928421 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 136900K reserved, 0K cma-reserved)
Sep 10 00:31:14.928430 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 00:31:14.928437 kernel: ftrace: allocating 37969 entries in 149 pages
Sep 10 00:31:14.928445 kernel: ftrace: allocated 149 pages with 4 groups
Sep 10 00:31:14.928452 kernel: Dynamic Preempt: voluntary
Sep 10 00:31:14.928459 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 00:31:14.928470 kernel: rcu: RCU event tracing is enabled.
Sep 10 00:31:14.928478 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 00:31:14.928489 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 00:31:14.928496 kernel: Rude variant of Tasks RCU enabled.
Sep 10 00:31:14.928504 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 00:31:14.928512 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 00:31:14.928521 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 00:31:14.928529 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 10 00:31:14.928537 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 00:31:14.928544 kernel: Console: colour VGA+ 80x25
Sep 10 00:31:14.928552 kernel: printk: console [ttyS0] enabled
Sep 10 00:31:14.928562 kernel: ACPI: Core revision 20230628
Sep 10 00:31:14.928570 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 10 00:31:14.928577 kernel: APIC: Switch to symmetric I/O mode setup
Sep 10 00:31:14.928585 kernel: x2apic enabled
Sep 10 00:31:14.928592 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 10 00:31:14.928600 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 10 00:31:14.928608 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 10 00:31:14.928615 kernel: kvm-guest: setup PV IPIs
Sep 10 00:31:14.928634 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 10 00:31:14.928642 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 10 00:31:14.928650 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 10 00:31:14.928658 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 10 00:31:14.928668 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 10 00:31:14.928676 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 10 00:31:14.928684 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 10 00:31:14.928692 kernel: Spectre V2 : Mitigation: Retpolines
Sep 10 00:31:14.928700 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 10 00:31:14.928711 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 10 00:31:14.928719 kernel: active return thunk: retbleed_return_thunk
Sep 10 00:31:14.928729 kernel: RETBleed: Mitigation: untrained return thunk
Sep 10 00:31:14.928737 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 10 00:31:14.928745 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 10 00:31:14.928753 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 10 00:31:14.928762 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 10 00:31:14.928769 kernel: active return thunk: srso_return_thunk
Sep 10 00:31:14.928780 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 10 00:31:14.928788 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 10 00:31:14.928796 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 10 00:31:14.928804 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 10 00:31:14.928812 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 10 00:31:14.928820 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 10 00:31:14.928828 kernel: Freeing SMP alternatives memory: 32K
Sep 10 00:31:14.928836 kernel: pid_max: default: 32768 minimum: 301
Sep 10 00:31:14.928844 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 10 00:31:14.928854 kernel: landlock: Up and running.
Sep 10 00:31:14.928862 kernel: SELinux: Initializing.
Sep 10 00:31:14.928870 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:31:14.928878 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:31:14.928886 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 10 00:31:14.928894 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:31:14.928902 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:31:14.928910 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:31:14.928920 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 10 00:31:14.928931 kernel: ... version: 0
Sep 10 00:31:14.928939 kernel: ... bit width: 48
Sep 10 00:31:14.929002 kernel: ... generic registers: 6
Sep 10 00:31:14.929010 kernel: ... value mask: 0000ffffffffffff
Sep 10 00:31:14.929018 kernel: ... max period: 00007fffffffffff
Sep 10 00:31:14.929026 kernel: ... fixed-purpose events: 0
Sep 10 00:31:14.929034 kernel: ... event mask: 000000000000003f
Sep 10 00:31:14.929041 kernel: signal: max sigframe size: 1776
Sep 10 00:31:14.929049 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 00:31:14.929061 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 00:31:14.929069 kernel: smp: Bringing up secondary CPUs ...
Sep 10 00:31:14.929077 kernel: smpboot: x86: Booting SMP configuration:
Sep 10 00:31:14.929085 kernel: .... node #0, CPUs: #1 #2 #3
Sep 10 00:31:14.929092 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 00:31:14.929100 kernel: smpboot: Max logical packages: 1
Sep 10 00:31:14.929108 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 10 00:31:14.929116 kernel: devtmpfs: initialized
Sep 10 00:31:14.929124 kernel: x86/mm: Memory block size: 128MB
Sep 10 00:31:14.929135 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 00:31:14.929143 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 00:31:14.929150 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 00:31:14.929158 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 00:31:14.929166 kernel: audit: initializing netlink subsys (disabled)
Sep 10 00:31:14.929174 kernel: audit: type=2000 audit(1757464274.324:1): state=initialized audit_enabled=0 res=1
Sep 10 00:31:14.929189 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 00:31:14.929211 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 10 00:31:14.929224 kernel: cpuidle: using governor menu
Sep 10 00:31:14.929236 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 00:31:14.929243 kernel: dca service started, version 1.12.1
Sep 10 00:31:14.929252 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 10 00:31:14.929259 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 10 00:31:14.929267 kernel: PCI: Using configuration type 1 for base access
Sep 10 00:31:14.929275 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 10 00:31:14.929283 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 00:31:14.929291 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 00:31:14.929299 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 00:31:14.929310 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 00:31:14.929317 kernel: ACPI: Added _OSI(Module Device)
Sep 10 00:31:14.929332 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 00:31:14.929351 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 00:31:14.929370 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 00:31:14.929400 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 10 00:31:14.929419 kernel: ACPI: Interpreter enabled
Sep 10 00:31:14.929436 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 10 00:31:14.929455 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 10 00:31:14.929481 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 10 00:31:14.929503 kernel: PCI: Using E820 reservations for host bridge windows
Sep 10 00:31:14.929522 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 10 00:31:14.929541 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 00:31:14.929844 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 00:31:14.930002 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 10 00:31:14.930131 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 10 00:31:14.930142 kernel: PCI host bridge to bus 0000:00
Sep 10 00:31:14.930362 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 10 00:31:14.930532 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 10 00:31:14.930723 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 10 00:31:14.930919 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 10 00:31:14.931327 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 10 00:31:14.931503 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 10 00:31:14.931627 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 00:31:14.931790 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 10 00:31:14.932092 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 10 00:31:14.932471 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 10 00:31:14.932604 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 10 00:31:14.932847 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 10 00:31:14.933075 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 10 00:31:14.933322 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 10 00:31:14.933589 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 10 00:31:14.933784 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 10 00:31:14.934032 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 10 00:31:14.934234 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 10 00:31:14.934446 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 10 00:31:14.934782 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 10 00:31:14.934972 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 10 00:31:14.935275 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 10 00:31:14.935526 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 10 00:31:14.935723 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 10 00:31:14.936041 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 10 00:31:14.936191 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 10 00:31:14.936400 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 10 00:31:14.936733 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 10 00:31:14.936899 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 10 00:31:14.937104 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 10 00:31:14.937280 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 10 00:31:14.937590 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 10 00:31:14.937721 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 10 00:31:14.937737 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 10 00:31:14.937745 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 10 00:31:14.937759 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 10 00:31:14.937782 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 10 00:31:14.937798 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 10 00:31:14.937821 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 10 00:31:14.937841 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 10 00:31:14.937861 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 10 00:31:14.937877 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 10 00:31:14.937900 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 10 00:31:14.937909 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 10 00:31:14.937917 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 10 00:31:14.937925 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 10 00:31:14.937933 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 10 00:31:14.937953 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 10 00:31:14.937962 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 10 00:31:14.937969 kernel: iommu: Default domain type: Translated
Sep 10 00:31:14.937977 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 10 00:31:14.937989 kernel: PCI: Using ACPI for IRQ routing
Sep 10 00:31:14.938006 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 10 00:31:14.938022 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 10 00:31:14.938045 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 10 00:31:14.938445 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 10 00:31:14.938657 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 10 00:31:14.939071 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 10 00:31:14.939105 kernel: vgaarb: loaded
Sep 10 00:31:14.939133 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 10 00:31:14.939156 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 10 00:31:14.939173 kernel: clocksource: Switched to clocksource kvm-clock
Sep 10 00:31:14.939194 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 00:31:14.939203 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 00:31:14.939214 kernel: pnp: PnP ACPI init
Sep 10 00:31:14.939695 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 10 00:31:14.939748 kernel: pnp: PnP ACPI: found 6 devices
Sep 10 00:31:14.939804 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 10 00:31:14.939838 kernel: NET: Registered PF_INET protocol family
Sep 10 00:31:14.939875 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 00:31:14.939904 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 00:31:14.939924 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 00:31:14.940085 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 00:31:14.940118 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 00:31:14.940135 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 00:31:14.940155 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:31:14.940175 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:31:14.940184 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 00:31:14.940192 kernel: NET: Registered PF_XDP protocol family
Sep 10 00:31:14.940319 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 10 00:31:14.940520 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 10 00:31:14.940749 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 10 00:31:14.941086 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 10 00:31:14.941234 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 10 00:31:14.941367 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 10 00:31:14.941410 kernel: PCI: CLS 0 bytes, default 64
Sep 10 00:31:14.941424 kernel: Initialise system trusted keyrings
Sep 10 00:31:14.941433 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 00:31:14.941441 kernel: Key type asymmetric registered
Sep 10 00:31:14.941459 kernel: Asymmetric key parser 'x509' registered
Sep 10 00:31:14.941476 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 10 00:31:14.941484 kernel: io scheduler mq-deadline registered
Sep 10 00:31:14.941492 kernel: io scheduler kyber registered
Sep 10 00:31:14.941499 kernel: io scheduler bfq registered
Sep 10 00:31:14.941511 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 10 00:31:14.941519 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 10 00:31:14.941527 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 10 00:31:14.941535 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 10 00:31:14.941543 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 00:31:14.941551 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 10 00:31:14.941571 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 10 00:31:14.941591 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 10 00:31:14.941608 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 10 00:31:14.942100 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 10 00:31:14.942146 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 10 00:31:14.942315 kernel: rtc_cmos 00:04: registered as rtc0
Sep 10 00:31:14.942443 kernel: rtc_cmos 00:04: setting system clock to 2025-09-10T00:31:14 UTC (1757464274)
Sep 10 00:31:14.942681 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 10 00:31:14.942701 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 10 00:31:14.942722 kernel: NET: Registered PF_INET6 protocol family
Sep 10 00:31:14.942749 kernel: Segment Routing with IPv6
Sep 10 00:31:14.942758 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 00:31:14.942766 kernel: NET: Registered PF_PACKET protocol family
Sep 10 00:31:14.942774 kernel: Key type dns_resolver registered
Sep 10 00:31:14.942782 kernel: IPI shorthand broadcast: enabled
Sep 10 00:31:14.942790 kernel: sched_clock: Marking stable (723002852, 106171001)->(888372813, -59198960)
Sep 10 00:31:14.942798 kernel: registered taskstats version 1
Sep 10 00:31:14.942806 kernel: Loading compiled-in X.509 certificates
Sep 10 00:31:14.942814 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: a614f1c62f27a560d677bbf0283703118c9005ec'
Sep 10 00:31:14.942822 kernel: Key type .fscrypt registered
Sep 10 00:31:14.942856 kernel: Key type fscrypt-provisioning registered
Sep 10 00:31:14.942872 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 00:31:14.942881 kernel: ima: Allocated hash algorithm: sha1
Sep 10 00:31:14.942889 kernel: ima: No architecture policies found
Sep 10 00:31:14.942897 kernel: clk: Disabling unused clocks
Sep 10 00:31:14.942905 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 10 00:31:14.942913 kernel: Write protecting the kernel read-only data: 36864k
Sep 10 00:31:14.942921 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 10 00:31:14.942937 kernel: Run /init as init process
Sep 10 00:31:14.943001 kernel: with arguments:
Sep 10 00:31:14.943017 kernel: /init
Sep 10 00:31:14.943025 kernel: with environment:
Sep 10 00:31:14.943033 kernel: HOME=/
Sep 10 00:31:14.943047 kernel: TERM=linux
Sep 10 00:31:14.943067 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 00:31:14.943091 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 10 00:31:14.943122 systemd[1]: Detected virtualization kvm.
Sep 10 00:31:14.943132 systemd[1]: Detected architecture x86-64.
Sep 10 00:31:14.943140 systemd[1]: Running in initrd.
Sep 10 00:31:14.943148 systemd[1]: No hostname configured, using default hostname.
Sep 10 00:31:14.943156 systemd[1]: Hostname set to .
Sep 10 00:31:14.943165 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:31:14.943174 systemd[1]: Queued start job for default target initrd.target.
Sep 10 00:31:14.943182 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:31:14.943194 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:31:14.943203 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 00:31:14.943224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 00:31:14.943236 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 00:31:14.943245 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 00:31:14.943258 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 00:31:14.943267 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 00:31:14.943276 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:31:14.943284 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:31:14.943293 systemd[1]: Reached target paths.target - Path Units.
Sep 10 00:31:14.943302 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 00:31:14.943310 systemd[1]: Reached target swap.target - Swaps.
Sep 10 00:31:14.943319 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 00:31:14.943345 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 00:31:14.943358 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 00:31:14.943378 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 00:31:14.943403 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 10 00:31:14.943423 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:31:14.943432 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:31:14.943441 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:31:14.943450 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 00:31:14.943458 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 00:31:14.943486 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 00:31:14.943508 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 00:31:14.943531 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 00:31:14.943543 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 00:31:14.943552 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 00:31:14.943561 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:31:14.943570 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 00:31:14.943579 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:31:14.943599 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 00:31:14.943610 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 00:31:14.943647 systemd-journald[193]: Collecting audit messages is disabled.
Sep 10 00:31:14.943683 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 00:31:14.943692 systemd-journald[193]: Journal started
Sep 10 00:31:14.943715 systemd-journald[193]: Runtime Journal (/run/log/journal/d421701082e44967a91ed09f38bb0799) is 6.0M, max 48.4M, 42.3M free.
Sep 10 00:31:14.949071 kernel: Bridge firewalling registered
Sep 10 00:31:14.904877 systemd-modules-load[194]: Inserted module 'overlay'
Sep 10 00:31:14.952736 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:31:14.952762 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 00:31:14.945072 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 10 00:31:14.954927 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:31:14.957288 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 00:31:14.981223 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:31:14.985148 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:31:14.988221 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 00:31:14.996063 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 00:31:15.009641 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:31:15.012278 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:31:15.015054 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:31:15.027166 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 00:31:15.028432 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:31:15.032452 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 00:31:15.041831 dracut-cmdline[226]: dracut-dracut-053
Sep 10 00:31:15.047912 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:31:15.079875 systemd-resolved[231]: Positive Trust Anchors:
Sep 10 00:31:15.079897 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:31:15.079929 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 00:31:15.082653 systemd-resolved[231]: Defaulting to hostname 'linux'.
Sep 10 00:31:15.083828 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 00:31:15.088746 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:31:15.152979 kernel: SCSI subsystem initialized
Sep 10 00:31:15.162971 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 00:31:15.172970 kernel: iscsi: registered transport (tcp)
Sep 10 00:31:15.194975 kernel: iscsi: registered transport (qla4xxx)
Sep 10 00:31:15.195004 kernel: QLogic iSCSI HBA Driver
Sep 10 00:31:15.251084 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 00:31:15.262087 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 00:31:15.287812 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 00:31:15.287885 kernel: device-mapper: uevent: version 1.0.3
Sep 10 00:31:15.287899 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 10 00:31:15.330999 kernel: raid6: avx2x4 gen() 30651 MB/s
Sep 10 00:31:15.347973 kernel: raid6: avx2x2 gen() 31236 MB/s
Sep 10 00:31:15.364994 kernel: raid6: avx2x1 gen() 26047 MB/s
Sep 10 00:31:15.365034 kernel: raid6: using algorithm avx2x2 gen() 31236 MB/s
Sep 10 00:31:15.383003 kernel: raid6: .... xor() 19929 MB/s, rmw enabled
Sep 10 00:31:15.383049 kernel: raid6: using avx2x2 recovery algorithm
Sep 10 00:31:15.402967 kernel: xor: automatically using best checksumming function avx
Sep 10 00:31:15.562982 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 00:31:15.577956 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 00:31:15.588319 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:31:15.601126 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Sep 10 00:31:15.606044 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:31:15.618196 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 00:31:15.635303 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Sep 10 00:31:15.671620 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:31:15.685166 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:31:15.752982 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:31:15.767098 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 00:31:15.781806 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:31:15.784968 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:31:15.787805 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:31:15.790659 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:31:15.796984 kernel: cryptd: max_cpu_qlen set to 1000
Sep 10 00:31:15.809142 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 10 00:31:15.813186 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 00:31:15.807122 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 00:31:15.818130 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 10 00:31:15.818155 kernel: AES CTR mode by8 optimization enabled
Sep 10 00:31:15.818428 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:31:15.818563 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:31:15.823141 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:31:15.828632 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 00:31:15.828660 kernel: GPT:9289727 != 19775487
Sep 10 00:31:15.828671 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 00:31:15.828681 kernel: GPT:9289727 != 19775487
Sep 10 00:31:15.828691 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 00:31:15.828702 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:31:15.826787 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:31:15.827154 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:31:15.828386 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:31:15.835686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:31:15.842128 kernel: libata version 3.00 loaded.
Sep 10 00:31:15.846313 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:31:15.853888 kernel: ahci 0000:00:1f.2: version 3.0
Sep 10 00:31:15.854160 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 10 00:31:15.857900 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 10 00:31:15.858111 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 10 00:31:15.868980 kernel: BTRFS: device fsid 47ffa5df-7ab2-4f1a-b68f-595717991426 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (470)
Sep 10 00:31:15.872965 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (459)
Sep 10 00:31:15.872988 kernel: scsi host0: ahci
Sep 10 00:31:15.879973 kernel: scsi host1: ahci
Sep 10 00:31:15.880991 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 00:31:15.909849 kernel: scsi host2: ahci
Sep 10 00:31:15.910055 kernel: scsi host3: ahci
Sep 10 00:31:15.910213 kernel: scsi host4: ahci
Sep 10 00:31:15.910362 kernel: scsi host5: ahci
Sep 10 00:31:15.910527 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Sep 10 00:31:15.910539 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Sep 10 00:31:15.910549 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Sep 10 00:31:15.910564 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Sep 10 00:31:15.910574 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Sep 10 00:31:15.910585 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Sep 10 00:31:15.914504 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 00:31:15.917131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:31:15.922969 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 00:31:15.923422 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 00:31:15.928344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:31:15.939231 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 00:31:15.942355 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:31:15.961326 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:31:16.200998 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 10 00:31:16.201098 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 10 00:31:16.201997 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 10 00:31:16.202093 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 10 00:31:16.202985 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 10 00:31:16.203979 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 10 00:31:16.204982 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 10 00:31:16.204996 kernel: ata3.00: applying bridge limits
Sep 10 00:31:16.205975 kernel: ata3.00: configured for UDMA/100
Sep 10 00:31:16.206013 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 10 00:31:16.269983 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 10 00:31:16.270211 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 10 00:31:16.283972 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 10 00:31:16.300528 disk-uuid[568]: Primary Header is updated.
Sep 10 00:31:16.300528 disk-uuid[568]: Secondary Entries is updated.
Sep 10 00:31:16.300528 disk-uuid[568]: Secondary Header is updated.
Sep 10 00:31:16.304968 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:31:16.309980 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:31:17.366002 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:31:17.366114 disk-uuid[578]: The operation has completed successfully.
Sep 10 00:31:17.395789 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 00:31:17.395921 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 00:31:17.421077 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 00:31:17.426586 sh[593]: Success
Sep 10 00:31:17.438968 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 10 00:31:17.470931 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 00:31:17.526544 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 00:31:17.529690 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 00:31:17.543253 kernel: BTRFS info (device dm-0): first mount of filesystem 47ffa5df-7ab2-4f1a-b68f-595717991426
Sep 10 00:31:17.543287 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:31:17.543299 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 10 00:31:17.545501 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 00:31:17.545523 kernel: BTRFS info (device dm-0): using free space tree
Sep 10 00:31:17.549894 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 00:31:17.552487 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 00:31:17.564083 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 00:31:17.566597 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 00:31:17.575068 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:31:17.575104 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:31:17.575119 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:31:17.578969 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:31:17.587737 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 10 00:31:17.589981 kernel: BTRFS info (device vda6): last unmount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:31:17.821589 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:31:17.834095 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:31:17.857818 systemd-networkd[771]: lo: Link UP
Sep 10 00:31:17.857829 systemd-networkd[771]: lo: Gained carrier
Sep 10 00:31:17.859626 systemd-networkd[771]: Enumeration completed
Sep 10 00:31:17.860086 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:31:17.860091 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:31:17.860414 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:31:17.861134 systemd-networkd[771]: eth0: Link UP
Sep 10 00:31:17.861139 systemd-networkd[771]: eth0: Gained carrier
Sep 10 00:31:17.861148 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:31:17.869047 systemd[1]: Reached target network.target - Network.
Sep 10 00:31:17.890989 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:31:17.985395 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 00:31:18.037088 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 00:31:18.135315 ignition[776]: Ignition 2.19.0
Sep 10 00:31:18.135330 ignition[776]: Stage: fetch-offline
Sep 10 00:31:18.135392 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:31:18.135406 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:31:18.135568 ignition[776]: parsed url from cmdline: ""
Sep 10 00:31:18.135573 ignition[776]: no config URL provided
Sep 10 00:31:18.135579 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 00:31:18.135589 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Sep 10 00:31:18.135625 ignition[776]: op(1): [started] loading QEMU firmware config module
Sep 10 00:31:18.135632 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 00:31:18.150378 ignition[776]: op(1): [finished] loading QEMU firmware config module
Sep 10 00:31:18.187356 ignition[776]: parsing config with SHA512: e03ab054722eed7e0e663e3b8da63805b775244a9cf9206b320a71502b07cacdc8915e36c06ecc0a33f7aa90e4b43606036c7693b175ac629c857d8bc63b5437
Sep 10 00:31:18.192988 unknown[776]: fetched base config from "system"
Sep 10 00:31:18.193001 unknown[776]: fetched user config from "qemu"
Sep 10 00:31:18.193511 ignition[776]: fetch-offline: fetch-offline passed
Sep 10 00:31:18.193995 systemd-resolved[231]: Detected conflict on linux IN A 10.0.0.21
Sep 10 00:31:18.193607 ignition[776]: Ignition finished successfully
Sep 10 00:31:18.194006 systemd-resolved[231]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Sep 10 00:31:18.201041 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:31:18.203505 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 00:31:18.221272 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 00:31:18.238782 ignition[787]: Ignition 2.19.0
Sep 10 00:31:18.238794 ignition[787]: Stage: kargs
Sep 10 00:31:18.238986 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:31:18.238998 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:31:18.242557 ignition[787]: kargs: kargs passed
Sep 10 00:31:18.242611 ignition[787]: Ignition finished successfully
Sep 10 00:31:18.247062 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 00:31:18.259242 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 00:31:18.274845 ignition[795]: Ignition 2.19.0
Sep 10 00:31:18.274861 ignition[795]: Stage: disks
Sep 10 00:31:18.275101 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:31:18.275118 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:31:18.276067 ignition[795]: disks: disks passed
Sep 10 00:31:18.276116 ignition[795]: Ignition finished successfully
Sep 10 00:31:18.279401 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 00:31:18.281585 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 00:31:18.283667 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 00:31:18.285918 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:31:18.287779 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:31:18.288271 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:31:18.300276 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 00:31:18.315606 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 10 00:31:18.321831 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 00:31:18.342182 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 00:31:18.522977 kernel: EXT4-fs (vda9): mounted filesystem 0a9bf3c7-f8cd-4d40-b949-283957ba2f96 r/w with ordered data mode. Quota mode: none.
Sep 10 00:31:18.524107 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 00:31:18.526426 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 00:31:18.548030 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:31:18.550705 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 00:31:18.553228 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 00:31:18.553283 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 00:31:18.555150 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:31:18.557018 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814)
Sep 10 00:31:18.559233 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:31:18.559258 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:31:18.559273 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:31:18.562972 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:31:18.565220 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:31:18.567129 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 00:31:18.579104 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 00:31:18.636710 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 00:31:18.641211 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Sep 10 00:31:18.646473 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 00:31:18.651518 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 00:31:18.742939 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 00:31:18.757053 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 00:31:18.760403 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 00:31:18.764621 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 00:31:18.765976 kernel: BTRFS info (device vda6): last unmount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:31:18.786862 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 00:31:18.795882 ignition[927]: INFO : Ignition 2.19.0
Sep 10 00:31:18.795882 ignition[927]: INFO : Stage: mount
Sep 10 00:31:18.797492 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:31:18.797492 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:31:18.797492 ignition[927]: INFO : mount: mount passed
Sep 10 00:31:18.797492 ignition[927]: INFO : Ignition finished successfully
Sep 10 00:31:18.798752 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 00:31:18.805037 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 00:31:18.814176 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:31:18.826970 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Sep 10 00:31:18.829166 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:31:18.829183 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:31:18.829194 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:31:18.831978 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:31:18.833822 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:31:18.857168 ignition[957]: INFO : Ignition 2.19.0
Sep 10 00:31:18.857168 ignition[957]: INFO : Stage: files
Sep 10 00:31:18.858935 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:31:18.858935 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:31:18.858935 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 00:31:18.862410 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 00:31:18.862410 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 00:31:18.862410 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 00:31:18.862410 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 00:31:18.867764 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 00:31:18.867764 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 10 00:31:18.867764 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 10 00:31:18.867764 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 10 00:31:18.867764 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 10 00:31:18.862431 unknown[957]: wrote ssh authorized keys file for user: core
Sep 10 00:31:18.955422 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 10 00:31:19.106149 systemd-networkd[771]: eth0: Gained IPv6LL
Sep 10 00:31:19.526911 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 10 00:31:19.526911 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 00:31:19.530937 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 00:31:19.532654 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:31:19.534415 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:31:19.536038 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:31:19.537739 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:31:19.539369 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:31:19.541138 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:31:19.543032 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:31:19.544871 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:31:19.546813 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:31:19.549676 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:31:19.552027 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:31:19.554074 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 10 00:31:19.896417 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 10 00:31:20.566647 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:31:20.566647 ignition[957]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 10 00:31:20.570314 ignition[957]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 10 00:31:20.570314 ignition[957]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 10 00:31:20.570314 ignition[957]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 10 00:31:20.570314 ignition[957]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 10 00:31:20.570314 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:31:20.570314 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:31:20.570314 ignition[957]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 10 00:31:20.570314 ignition[957]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Sep 10 00:31:20.570314 ignition[957]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:31:20.570314 ignition[957]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:31:20.570314 ignition[957]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Sep 10 00:31:20.570314 ignition[957]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:31:20.612093 ignition[957]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:31:20.619526 ignition[957]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:31:20.621286 ignition[957]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:31:20.621286 ignition[957]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 00:31:20.623980 ignition[957]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 00:31:20.625440 ignition[957]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:31:20.627198 ignition[957]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:31:20.628815 ignition[957]: INFO : files: files passed
Sep 10 00:31:20.629541 ignition[957]: INFO : Ignition finished successfully
Sep 10 00:31:20.632638 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 00:31:20.645090 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 00:31:20.646862 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 00:31:20.648919 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 00:31:20.649069 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 00:31:20.657866 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 00:31:20.661285 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:31:20.661285 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:31:20.664529 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:31:20.671024 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 00:31:20.672838 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 00:31:20.689103 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 00:31:20.715107 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 00:31:20.715270 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 00:31:20.717675 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 00:31:20.719664 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 00:31:20.721732 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 10 00:31:20.732094 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 10 00:31:20.746331 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 00:31:20.762080 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 10 00:31:20.771330 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 10 00:31:20.772774 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 00:31:20.775277 systemd[1]: Stopped target timers.target - Timer Units. Sep 10 00:31:20.777606 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 10 00:31:20.777753 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 00:31:20.780258 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 10 00:31:20.782150 systemd[1]: Stopped target basic.target - Basic System. Sep 10 00:31:20.784461 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 10 00:31:20.786807 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 00:31:20.789157 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 10 00:31:20.791633 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 10 00:31:20.793897 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 00:31:20.796059 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 10 00:31:20.797926 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 10 00:31:20.799999 systemd[1]: Stopped target swap.target - Swaps. Sep 10 00:31:20.801660 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 00:31:20.801806 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 10 00:31:20.803815 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 10 00:31:20.805335 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 00:31:20.807303 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 10 00:31:20.807432 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 00:31:20.809405 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 00:31:20.809526 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 10 00:31:20.811697 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 00:31:20.811822 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 00:31:20.813743 systemd[1]: Stopped target paths.target - Path Units. Sep 10 00:31:20.815434 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 00:31:20.819023 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 00:31:20.820858 systemd[1]: Stopped target slices.target - Slice Units. Sep 10 00:31:20.822828 systemd[1]: Stopped target sockets.target - Socket Units. Sep 10 00:31:20.824549 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 00:31:20.824654 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 00:31:20.826451 systemd[1]: iscsiuio.socket: Deactivated successfully. 
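The op(12) through op(14) preset entries recorded just above correspond to Ignition writing a systemd preset file into the new root and applying it. A rough shell equivalent, assuming Ignition's usual 20-ignition.preset path (the log shows the enable/disable operations but not the file name):

    # Hypothetical preset file mirroring the ops above.
    cat <<'EOF' > /sysroot/etc/systemd/system-preset/20-ignition.preset
    enable prepare-helm.service
    disable coreos-metadata.service
    EOF
    # Apply the presets offline inside the new root.
    systemctl --root=/sysroot preset prepare-helm.service coreos-metadata.service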
Sep 10 00:31:20.826548 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 00:31:20.828773 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 10 00:31:20.828894 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 00:31:20.831328 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 00:31:20.831447 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 00:31:20.850163 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 00:31:20.852131 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 00:31:20.853259 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 00:31:20.853388 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 00:31:20.856131 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 00:31:20.856356 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 00:31:20.863145 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 00:31:20.867542 ignition[1011]: INFO : Ignition 2.19.0 Sep 10 00:31:20.867542 ignition[1011]: INFO : Stage: umount Sep 10 00:31:20.867542 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:31:20.867542 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:31:20.867542 ignition[1011]: INFO : umount: umount passed Sep 10 00:31:20.867542 ignition[1011]: INFO : Ignition finished successfully Sep 10 00:31:20.863465 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 10 00:31:20.868758 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 00:31:20.868929 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 10 00:31:20.871461 systemd[1]: Stopped target network.target - Network. Sep 10 00:31:20.873029 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 10 00:31:20.873101 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 10 00:31:20.875614 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 00:31:20.875667 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 10 00:31:20.877730 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 00:31:20.877787 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 10 00:31:20.879989 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 10 00:31:20.880045 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 10 00:31:20.889603 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 10 00:31:20.894983 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 10 00:31:20.897047 systemd-networkd[771]: eth0: DHCPv6 lease lost Sep 10 00:31:20.899035 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 00:31:20.900494 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 10 00:31:20.901554 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 10 00:31:20.904667 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 10 00:31:20.905718 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 10 00:31:20.909465 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 10 00:31:20.909523 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Sep 10 00:31:20.924051 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 10 00:31:20.924976 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 10 00:31:20.925915 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 00:31:20.928151 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 00:31:20.928209 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:31:20.931769 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 00:31:20.931820 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 10 00:31:20.933974 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 10 00:31:20.934026 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 00:31:20.939612 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 00:31:20.961249 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 00:31:20.961557 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 00:31:20.962824 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 00:31:20.962934 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 10 00:31:20.965333 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 00:31:20.965431 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 00:31:20.967287 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 00:31:20.967372 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 10 00:31:20.970030 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 00:31:20.970090 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 10 00:31:20.970803 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 00:31:20.970917 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 00:31:20.982218 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 10 00:31:20.982676 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 00:31:20.982755 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 00:31:20.983251 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 10 00:31:20.983305 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 00:31:20.987071 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 00:31:20.987162 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 00:31:20.987549 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 00:31:20.987611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 00:31:20.988370 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 00:31:20.988569 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 10 00:31:20.996978 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 00:31:20.997098 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 00:31:21.034703 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Sep 10 00:31:21.034828 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 10 00:31:21.036669 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 00:31:21.038386 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 00:31:21.038446 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 10 00:31:21.047208 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 00:31:21.054723 systemd[1]: Switching root. Sep 10 00:31:21.089852 systemd-journald[193]: Journal stopped Sep 10 00:31:22.259818 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Sep 10 00:31:22.259894 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 00:31:22.259929 kernel: SELinux: policy capability open_perms=1 Sep 10 00:31:22.259941 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 00:31:22.259965 kernel: SELinux: policy capability always_check_network=0 Sep 10 00:31:22.260428 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 00:31:22.260443 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 00:31:22.260461 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 00:31:22.260473 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 00:31:22.260485 kernel: audit: type=1403 audit(1757464281.470:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 10 00:31:22.260506 systemd[1]: Successfully loaded SELinux policy in 41.827ms. Sep 10 00:31:22.260526 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.500ms. Sep 10 00:31:22.260540 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 10 00:31:22.260553 systemd[1]: Detected virtualization kvm. Sep 10 00:31:22.260565 systemd[1]: Detected architecture x86-64. Sep 10 00:31:22.260578 systemd[1]: Detected first boot. Sep 10 00:31:22.260595 systemd[1]: Initializing machine ID from VM UUID. Sep 10 00:31:22.260607 zram_generator::config[1071]: No configuration found. Sep 10 00:31:22.260621 systemd[1]: Populated /etc with preset unit settings. Sep 10 00:31:22.260636 systemd[1]: Queued start job for default target multi-user.target. Sep 10 00:31:22.260648 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 10 00:31:22.260661 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 10 00:31:22.260673 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 10 00:31:22.260685 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 10 00:31:22.260697 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 10 00:31:22.260710 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 10 00:31:22.260723 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 10 00:31:22.260738 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 10 00:31:22.260751 systemd[1]: Created slice user.slice - User and Session Slice. Sep 10 00:31:22.260763 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
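The feature string and SELinux messages in this stretch can be cross-checked from a shell once the system is up; two read-only commands, nothing specific to this VM:

    systemctl --version       # prints the same +PAM +AUDIT +SELINUX ... feature flags
    journalctl -k -g SELinux  # kernel policy-capability lines, as captured above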
Sep 10 00:31:22.260776 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 00:31:22.260789 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 10 00:31:22.260802 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 10 00:31:22.260815 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 10 00:31:22.260828 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 00:31:22.260840 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 10 00:31:22.260859 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 00:31:22.260872 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 10 00:31:22.260884 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 00:31:22.260896 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 00:31:22.260908 systemd[1]: Reached target slices.target - Slice Units. Sep 10 00:31:22.260920 systemd[1]: Reached target swap.target - Swaps. Sep 10 00:31:22.260933 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 10 00:31:22.261041 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 10 00:31:22.261060 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 10 00:31:22.261072 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 10 00:31:22.261084 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 00:31:22.261096 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 00:31:22.261108 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 00:31:22.261120 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 10 00:31:22.261138 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 10 00:31:22.261150 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 10 00:31:22.261163 systemd[1]: Mounting media.mount - External Media Directory... Sep 10 00:31:22.261182 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:31:22.261201 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 10 00:31:22.261214 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 10 00:31:22.261227 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 10 00:31:22.261245 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 10 00:31:22.261257 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:31:22.261270 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 00:31:22.261282 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 10 00:31:22.261294 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:31:22.261311 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 00:31:22.261323 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 10 00:31:22.261335 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 10 00:31:22.261349 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:31:22.261361 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 00:31:22.261374 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 10 00:31:22.261387 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 10 00:31:22.261399 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 00:31:22.261417 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 00:31:22.261429 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 00:31:22.261441 kernel: loop: module loaded Sep 10 00:31:22.261453 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 10 00:31:22.261466 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 00:31:22.261479 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:31:22.261492 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 10 00:31:22.261504 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 10 00:31:22.261517 systemd[1]: Mounted media.mount - External Media Directory. Sep 10 00:31:22.261535 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 10 00:31:22.261547 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 10 00:31:22.261560 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 10 00:31:22.261572 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 00:31:22.261584 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 00:31:22.261608 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 10 00:31:22.261621 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:31:22.261634 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:31:22.261653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:31:22.261667 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:31:22.261679 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:31:22.261692 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:31:22.261705 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 10 00:31:22.261723 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 00:31:22.261755 systemd-journald[1156]: Collecting audit messages is disabled. Sep 10 00:31:22.261778 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 00:31:22.261790 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Sep 10 00:31:22.261803 kernel: ACPI: bus type drm_connector registered Sep 10 00:31:22.261814 systemd-journald[1156]: Journal started Sep 10 00:31:22.261842 systemd-journald[1156]: Runtime Journal (/run/log/journal/d421701082e44967a91ed09f38bb0799) is 6.0M, max 48.4M, 42.3M free. Sep 10 00:31:22.263012 kernel: fuse: init (API version 7.39) Sep 10 00:31:22.266024 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 00:31:22.267288 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:31:22.267527 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 00:31:22.268965 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 00:31:22.269237 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 10 00:31:22.285421 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 00:31:22.294028 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 10 00:31:22.296377 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 10 00:31:22.297485 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 00:31:22.301021 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 10 00:31:22.305512 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 10 00:31:22.306700 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:31:22.312919 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 10 00:31:22.314226 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 00:31:22.316523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 00:31:22.319521 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 10 00:31:22.323899 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 10 00:31:22.330801 systemd-journald[1156]: Time spent on flushing to /var/log/journal/d421701082e44967a91ed09f38bb0799 is 16.699ms for 944 entries. Sep 10 00:31:22.330801 systemd-journald[1156]: System Journal (/var/log/journal/d421701082e44967a91ed09f38bb0799) is 8.0M, max 195.6M, 187.6M free. Sep 10 00:31:22.465171 systemd-journald[1156]: Received client request to flush runtime journal. Sep 10 00:31:22.331156 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 10 00:31:22.333472 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 00:31:22.346092 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 10 00:31:22.355681 udevadm[1214]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 10 00:31:22.445251 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:31:22.456389 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 10 00:31:22.458648 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 10 00:31:22.458765 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. 
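The journald sizes quoted here (6.0M runtime journal in /run, 8.0M system journal in /var) and the flush request are all reachable through journalctl; standard invocations:

    journalctl --disk-usage   # current footprint of runtime plus persistent journals
    journalctl --flush        # what systemd-journal-flush.service runs: migrate /run logs to /var/log/journal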
Sep 10 00:31:22.458780 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Sep 10 00:31:22.465117 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 00:31:22.468600 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 10 00:31:22.477103 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 10 00:31:22.504543 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 10 00:31:22.511121 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 00:31:22.529817 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Sep 10 00:31:22.529840 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Sep 10 00:31:22.536624 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 00:31:23.104909 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 10 00:31:23.116103 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 00:31:23.143499 systemd-udevd[1236]: Using default interface naming scheme 'v255'. Sep 10 00:31:23.160238 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 00:31:23.171221 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 00:31:23.177920 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 10 00:31:23.205229 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 10 00:31:23.206960 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1239) Sep 10 00:31:23.261077 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 10 00:31:23.305969 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 10 00:31:23.307719 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 00:31:23.309768 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 10 00:31:23.310067 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 10 00:31:23.312971 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 10 00:31:23.314181 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 10 00:31:23.336042 kernel: ACPI: button: Power Button [PWRF] Sep 10 00:31:23.340707 systemd-networkd[1240]: lo: Link UP Sep 10 00:31:23.340718 systemd-networkd[1240]: lo: Gained carrier Sep 10 00:31:23.342854 systemd-networkd[1240]: Enumeration completed Sep 10 00:31:23.343292 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 00:31:23.343297 systemd-networkd[1240]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 00:31:23.343681 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 00:31:23.344926 systemd-networkd[1240]: eth0: Link UP Sep 10 00:31:23.344934 systemd-networkd[1240]: eth0: Gained carrier Sep 10 00:31:23.344956 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 00:31:23.352117 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
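eth0 is picked up by the catch-all zz-default.network shipped in /usr/lib/systemd/network. Its match/DHCP shape is roughly the following; treat this as a sketch of the stock Flatcar file, not a verbatim copy:

    cat /usr/lib/systemd/network/zz-default.network
    # Expected shape (assumed, not taken from this log):
    #   [Match]
    #   Name=*
    #   [Network]
    #   DHCP=yes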
Sep 10 00:31:23.384029 systemd-networkd[1240]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 00:31:23.397970 kernel: mousedev: PS/2 mouse device common for all mice Sep 10 00:31:23.437429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 00:31:23.451137 kernel: kvm_amd: TSC scaling supported Sep 10 00:31:23.451186 kernel: kvm_amd: Nested Virtualization enabled Sep 10 00:31:23.451200 kernel: kvm_amd: Nested Paging enabled Sep 10 00:31:23.451212 kernel: kvm_amd: LBR virtualization supported Sep 10 00:31:23.452163 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 10 00:31:23.452179 kernel: kvm_amd: Virtual GIF supported Sep 10 00:31:23.475019 kernel: EDAC MC: Ver: 3.0.0 Sep 10 00:31:23.505633 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 10 00:31:23.529834 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 00:31:23.537111 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 10 00:31:23.548998 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:31:23.591415 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 10 00:31:23.592917 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 00:31:23.605077 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 10 00:31:23.611594 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:31:23.652909 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 10 00:31:23.654475 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 10 00:31:23.655724 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 00:31:23.655752 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 00:31:23.656757 systemd[1]: Reached target machines.target - Containers. Sep 10 00:31:23.658835 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 10 00:31:23.668073 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 00:31:23.670509 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 10 00:31:23.671668 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:31:23.672840 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 10 00:31:23.675352 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 10 00:31:23.679115 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 10 00:31:23.681305 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 00:31:23.692195 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 00:31:23.694962 kernel: loop0: detected capacity change from 0 to 221472 Sep 10 00:31:23.702637 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
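Once the DHCPv4 lease above lands, the same state is visible interactively; read-only checks:

    networkctl status eth0     # shows the 10.0.0.21/16 address and 10.0.0.1 gateway from the lease
    ip -4 addr show dev eth0   # kernel's view of the configured address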
Sep 10 00:31:23.703508 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 10 00:31:23.715985 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 00:31:23.743976 kernel: loop1: detected capacity change from 0 to 140768 Sep 10 00:31:23.794976 kernel: loop2: detected capacity change from 0 to 142488 Sep 10 00:31:23.836973 kernel: loop3: detected capacity change from 0 to 221472 Sep 10 00:31:23.846970 kernel: loop4: detected capacity change from 0 to 140768 Sep 10 00:31:23.857385 kernel: loop5: detected capacity change from 0 to 142488 Sep 10 00:31:23.867108 (sd-merge)[1306]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 10 00:31:23.867860 (sd-merge)[1306]: Merged extensions into '/usr'. Sep 10 00:31:23.917861 systemd[1]: Reloading requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)... Sep 10 00:31:23.917887 systemd[1]: Reloading... Sep 10 00:31:24.000351 zram_generator::config[1334]: No configuration found. Sep 10 00:31:24.077171 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 00:31:24.162321 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:31:24.228359 systemd[1]: Reloading finished in 309 ms. Sep 10 00:31:24.249913 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 10 00:31:24.251629 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 10 00:31:24.269086 systemd[1]: Starting ensure-sysext.service... Sep 10 00:31:24.271509 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 00:31:24.278269 systemd[1]: Reloading requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)... Sep 10 00:31:24.278288 systemd[1]: Reloading... Sep 10 00:31:24.336969 zram_generator::config[1406]: No configuration found. Sep 10 00:31:24.352050 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 00:31:24.352456 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 10 00:31:24.353530 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 00:31:24.353838 systemd-tmpfiles[1379]: ACLs are not supported, ignoring. Sep 10 00:31:24.353928 systemd-tmpfiles[1379]: ACLs are not supported, ignoring. Sep 10 00:31:24.357414 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 00:31:24.357427 systemd-tmpfiles[1379]: Skipping /boot Sep 10 00:31:24.370421 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 00:31:24.370436 systemd-tmpfiles[1379]: Skipping /boot Sep 10 00:31:24.467273 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:31:24.535067 systemd[1]: Reloading finished in 256 ms. Sep 10 00:31:24.556390 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 00:31:24.574110 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
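The '(sd-merge)' lines are systemd-sysext overlaying the three extension images onto /usr; the kubernetes image is the one the Ignition files stage linked into /etc/extensions earlier. The merge can be inspected or redone by hand with the standard tool:

    systemd-sysext status    # lists merged extensions (containerd-flatcar, docker-flatcar, kubernetes)
    systemd-sysext refresh   # unmerge and remerge after adding or removing an image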
Sep 10 00:31:24.576541 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 10 00:31:24.579033 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 10 00:31:24.583137 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 00:31:24.587303 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 10 00:31:24.595631 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:31:24.595815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:31:24.598266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:31:24.603210 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:31:24.606114 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:31:24.607436 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:31:24.607673 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:31:24.610600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:31:24.611368 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:31:24.618652 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 10 00:31:24.621257 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:31:24.621490 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:31:24.623449 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:31:24.623867 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:31:24.626330 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 10 00:31:24.635864 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:31:24.636550 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:31:24.639528 augenrules[1487]: No rules Sep 10 00:31:24.644197 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:31:24.649210 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 00:31:24.657186 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:31:24.663265 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:31:24.664438 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:31:24.666711 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 10 00:31:24.667736 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:31:24.669573 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 10 00:31:24.671462 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Sep 10 00:31:24.673284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:31:24.674249 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:31:24.675960 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:31:24.676192 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 00:31:24.677718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:31:24.677936 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:31:24.679625 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:31:24.679869 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:31:24.685357 systemd[1]: Finished ensure-sysext.service. Sep 10 00:31:24.691156 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 10 00:31:24.694410 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:31:24.694526 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 00:31:24.698940 systemd-resolved[1456]: Positive Trust Anchors: Sep 10 00:31:24.699087 systemd-resolved[1456]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 00:31:24.699120 systemd-resolved[1456]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 00:31:24.703171 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 10 00:31:24.703241 systemd-resolved[1456]: Defaulting to hostname 'linux'. Sep 10 00:31:24.704284 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:31:24.705488 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 00:31:24.706656 systemd[1]: Reached target network.target - Network. Sep 10 00:31:24.707545 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 00:31:24.765112 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 00:31:25.292352 systemd-resolved[1456]: Clock change detected. Flushing caches. Sep 10 00:31:25.292391 systemd-timesyncd[1514]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 00:31:25.292450 systemd-timesyncd[1514]: Initial clock synchronization to Wed 2025-09-10 00:31:25.292294 UTC. Sep 10 00:31:25.293075 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 00:31:25.294205 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 00:31:25.295465 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
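systemd-resolved falls back to hostname 'linux' here, and timesyncd steps the clock against 10.0.0.1 (hence the 'Clock change detected' cache flush). Both subsystems expose that state directly:

    resolvectl status            # configured DNS, trust anchors, per-link settings
    timedatectl timesync-status  # shows the 10.0.0.1 server and the last synchronization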
Sep 10 00:31:25.296697 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 00:31:25.297925 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 00:31:25.297951 systemd[1]: Reached target paths.target - Path Units. Sep 10 00:31:25.298838 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 00:31:25.300028 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 00:31:25.301203 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 00:31:25.302425 systemd[1]: Reached target timers.target - Timer Units. Sep 10 00:31:25.304108 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 00:31:25.307248 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 00:31:25.309656 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 00:31:25.318570 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 00:31:25.319672 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 00:31:25.320624 systemd[1]: Reached target basic.target - Basic System. Sep 10 00:31:25.321703 systemd[1]: System is tainted: cgroupsv1 Sep 10 00:31:25.321747 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 00:31:25.321770 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 00:31:25.323068 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 00:31:25.325312 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 00:31:25.327366 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 00:31:25.330577 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 00:31:25.330645 systemd-networkd[1240]: eth0: Gained IPv6LL Sep 10 00:31:25.331636 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 00:31:25.335462 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 00:31:25.339877 jq[1520]: false Sep 10 00:31:25.340372 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 00:31:25.345808 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 00:31:25.350260 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 00:31:25.360070 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 00:31:25.362808 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 00:31:25.364526 systemd[1]: Starting update-engine.service - Update Engine... 
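The 'System is tainted: cgroupsv1' note ties back to the /etc/flatcar-cgroupv1 marker written during the Ignition files stage, which keeps this machine on the legacy cgroup hierarchy. Quick ways to confirm which hierarchy is active:

    stat -fc %T /sys/fs/cgroup   # 'tmpfs' on legacy cgroup v1, 'cgroup2fs' on the unified hierarchy
    test -e /etc/flatcar-cgroupv1 && echo "cgroup v1 marker present"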
Sep 10 00:31:25.366500 extend-filesystems[1521]: Found loop3 Sep 10 00:31:25.366500 extend-filesystems[1521]: Found loop4 Sep 10 00:31:25.366500 extend-filesystems[1521]: Found loop5 Sep 10 00:31:25.366500 extend-filesystems[1521]: Found sr0 Sep 10 00:31:25.366500 extend-filesystems[1521]: Found vda Sep 10 00:31:25.366500 extend-filesystems[1521]: Found vda1 Sep 10 00:31:25.366500 extend-filesystems[1521]: Found vda2 Sep 10 00:31:25.366500 extend-filesystems[1521]: Found vda3 Sep 10 00:31:25.366500 extend-filesystems[1521]: Found usr Sep 10 00:31:25.366500 extend-filesystems[1521]: Found vda4 Sep 10 00:31:25.366500 extend-filesystems[1521]: Found vda6 Sep 10 00:31:25.366500 extend-filesystems[1521]: Found vda7 Sep 10 00:31:25.366500 extend-filesystems[1521]: Found vda9 Sep 10 00:31:25.366500 extend-filesystems[1521]: Checking size of /dev/vda9 Sep 10 00:31:25.418299 extend-filesystems[1521]: Resized partition /dev/vda9 Sep 10 00:31:25.377634 dbus-daemon[1519]: [system] SELinux support is enabled Sep 10 00:31:25.433466 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 00:31:25.433503 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1247) Sep 10 00:31:25.367482 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 00:31:25.433630 extend-filesystems[1560]: resize2fs 1.47.1 (20-May-2024) Sep 10 00:31:25.373800 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 00:31:25.436166 jq[1539]: true Sep 10 00:31:25.378322 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 00:31:25.437316 update_engine[1537]: I20250910 00:31:25.408907 1537 main.cc:92] Flatcar Update Engine starting Sep 10 00:31:25.437316 update_engine[1537]: I20250910 00:31:25.410171 1537 update_check_scheduler.cc:74] Next update check in 6m44s Sep 10 00:31:25.384319 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 00:31:25.384681 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 00:31:25.494016 tar[1548]: linux-amd64/helm Sep 10 00:31:25.385028 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 00:31:25.494421 jq[1552]: true Sep 10 00:31:25.386038 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 00:31:25.399901 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 00:31:25.400228 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 00:31:25.425891 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 00:31:25.490074 (ntainerd)[1553]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 00:31:25.493686 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 00:31:25.498169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:31:25.506567 systemd-logind[1531]: Watching system buttons on /dev/input/event1 (Power Button) Sep 10 00:31:25.506596 systemd-logind[1531]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 10 00:31:25.508862 systemd-logind[1531]: New seat seat0. Sep 10 00:31:25.510736 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Sep 10 00:31:25.511886 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 00:31:25.511916 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 00:31:25.513335 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 00:31:25.513352 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 00:31:25.515670 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 00:31:25.554975 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 00:31:25.586392 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 00:31:25.587214 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 00:31:25.602336 systemd[1]: Started update-engine.service - Update Engine. Sep 10 00:31:25.607204 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 00:31:25.608798 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 00:31:25.620665 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 00:31:25.648776 sshd_keygen[1543]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 00:31:25.673893 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 00:31:25.685989 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 00:31:25.688518 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 00:31:25.700822 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 00:31:25.701165 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 00:31:25.704208 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 00:31:25.779223 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 00:31:25.784564 locksmithd[1602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 00:31:25.796385 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 00:31:25.798665 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 10 00:31:25.800126 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 00:31:25.999056 extend-filesystems[1560]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 00:31:25.999056 extend-filesystems[1560]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 00:31:25.999056 extend-filesystems[1560]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 00:31:26.006036 extend-filesystems[1521]: Resized filesystem in /dev/vda9 Sep 10 00:31:26.006887 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 00:31:26.007454 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 00:31:26.007832 bash[1603]: Updated "/home/core/.ssh/authorized_keys" Sep 10 00:31:26.009722 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 00:31:26.014467 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
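extend-filesystems grows the mounted root ext4 online from 553472 to 1864699 4k blocks. The equivalent manual step, assuming the underlying partition has already been enlarged:

    resize2fs /dev/vda9   # online grow of the mounted ext4 root to fill vda9
    df -h /               # confirm the new size (1864699 x 4k, about 7.1G)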
Sep 10 00:31:26.077514 containerd[1553]: time="2025-09-10T00:31:26.077381815Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 10 00:31:26.105762 containerd[1553]: time="2025-09-10T00:31:26.105699781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:31:26.107934 containerd[1553]: time="2025-09-10T00:31:26.107888387Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:31:26.107934 containerd[1553]: time="2025-09-10T00:31:26.107918403Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 10 00:31:26.107934 containerd[1553]: time="2025-09-10T00:31:26.107933962Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 10 00:31:26.108223 containerd[1553]: time="2025-09-10T00:31:26.108196735Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 10 00:31:26.108223 containerd[1553]: time="2025-09-10T00:31:26.108220890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 10 00:31:26.108353 containerd[1553]: time="2025-09-10T00:31:26.108333451Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:31:26.108375 containerd[1553]: time="2025-09-10T00:31:26.108352137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:31:26.108678 containerd[1553]: time="2025-09-10T00:31:26.108649444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:31:26.108678 containerd[1553]: time="2025-09-10T00:31:26.108671065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 10 00:31:26.108728 containerd[1553]: time="2025-09-10T00:31:26.108686163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:31:26.108728 containerd[1553]: time="2025-09-10T00:31:26.108696162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 10 00:31:26.108868 containerd[1553]: time="2025-09-10T00:31:26.108851473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:31:26.109193 containerd[1553]: time="2025-09-10T00:31:26.109165693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:31:26.109430 containerd[1553]: time="2025-09-10T00:31:26.109398559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:31:26.109430 containerd[1553]: time="2025-09-10T00:31:26.109419438Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 10 00:31:26.109549 containerd[1553]: time="2025-09-10T00:31:26.109533042Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 10 00:31:26.109624 containerd[1553]: time="2025-09-10T00:31:26.109606630Z" level=info msg="metadata content store policy set" policy=shared Sep 10 00:31:26.114633 containerd[1553]: time="2025-09-10T00:31:26.114576592Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 10 00:31:26.114633 containerd[1553]: time="2025-09-10T00:31:26.114649058Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 10 00:31:26.114813 containerd[1553]: time="2025-09-10T00:31:26.114668645Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 10 00:31:26.114813 containerd[1553]: time="2025-09-10T00:31:26.114683152Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 10 00:31:26.114813 containerd[1553]: time="2025-09-10T00:31:26.114699472Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 10 00:31:26.114878 containerd[1553]: time="2025-09-10T00:31:26.114866005Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 10 00:31:26.115449 containerd[1553]: time="2025-09-10T00:31:26.115394626Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 10 00:31:26.115664 containerd[1553]: time="2025-09-10T00:31:26.115637732Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 10 00:31:26.115705 containerd[1553]: time="2025-09-10T00:31:26.115662719Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 10 00:31:26.115705 containerd[1553]: time="2025-09-10T00:31:26.115678549Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 10 00:31:26.115705 containerd[1553]: time="2025-09-10T00:31:26.115694078Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 10 00:31:26.115758 containerd[1553]: time="2025-09-10T00:31:26.115708325Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 10 00:31:26.115758 containerd[1553]: time="2025-09-10T00:31:26.115721860Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 10 00:31:26.115758 containerd[1553]: time="2025-09-10T00:31:26.115737349Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 10 00:31:26.115758 containerd[1553]: time="2025-09-10T00:31:26.115755714Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 10 00:31:26.115834 containerd[1553]: time="2025-09-10T00:31:26.115770551Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 10 00:31:26.115834 containerd[1553]: time="2025-09-10T00:31:26.115787443Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 10 00:31:26.115834 containerd[1553]: time="2025-09-10T00:31:26.115801099Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 10 00:31:26.115834 containerd[1553]: time="2025-09-10T00:31:26.115830203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.115902 containerd[1553]: time="2025-09-10T00:31:26.115844741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.115902 containerd[1553]: time="2025-09-10T00:31:26.115860450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.115902 containerd[1553]: time="2025-09-10T00:31:26.115873464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.115902 containerd[1553]: time="2025-09-10T00:31:26.115885707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.115902 containerd[1553]: time="2025-09-10T00:31:26.115898141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.116001 containerd[1553]: time="2025-09-10T00:31:26.115910304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.116001 containerd[1553]: time="2025-09-10T00:31:26.115923829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.116001 containerd[1553]: time="2025-09-10T00:31:26.115936974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.116001 containerd[1553]: time="2025-09-10T00:31:26.115952533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.116001 containerd[1553]: time="2025-09-10T00:31:26.115964746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.116001 containerd[1553]: time="2025-09-10T00:31:26.115979974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.116001 containerd[1553]: time="2025-09-10T00:31:26.115992468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.116122 containerd[1553]: time="2025-09-10T00:31:26.116009930Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 10 00:31:26.116122 containerd[1553]: time="2025-09-10T00:31:26.116044014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.116122 containerd[1553]: time="2025-09-10T00:31:26.116055836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 10 00:31:26.116122 containerd[1553]: time="2025-09-10T00:31:26.116066867Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 10 00:31:26.116191 containerd[1553]: time="2025-09-10T00:31:26.116130737Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 10 00:31:26.116191 containerd[1553]: time="2025-09-10T00:31:26.116149432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 10 00:31:26.116191 containerd[1553]: time="2025-09-10T00:31:26.116160092Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 10 00:31:26.116191 containerd[1553]: time="2025-09-10T00:31:26.116171143Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 10 00:31:26.116191 containerd[1553]: time="2025-09-10T00:31:26.116181923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.116304 containerd[1553]: time="2025-09-10T00:31:26.116199646Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 10 00:31:26.116304 containerd[1553]: time="2025-09-10T00:31:26.116210296Z" level=info msg="NRI interface is disabled by configuration." Sep 10 00:31:26.116304 containerd[1553]: time="2025-09-10T00:31:26.116220575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 10 00:31:26.116649 containerd[1553]: time="2025-09-10T00:31:26.116571574Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 10 00:31:26.116789 containerd[1553]: time="2025-09-10T00:31:26.116661242Z" level=info msg="Connect containerd service" Sep 10 00:31:26.116789 containerd[1553]: time="2025-09-10T00:31:26.116739689Z" level=info msg="using legacy CRI server" Sep 10 00:31:26.116789 containerd[1553]: time="2025-09-10T00:31:26.116751982Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 00:31:26.116908 containerd[1553]: time="2025-09-10T00:31:26.116889119Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 10 00:31:26.117983 containerd[1553]: time="2025-09-10T00:31:26.117777366Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 00:31:26.118066 containerd[1553]: time="2025-09-10T00:31:26.117942896Z" level=info msg="Start subscribing containerd event" Sep 10 00:31:26.118066 containerd[1553]: time="2025-09-10T00:31:26.118028797Z" level=info msg="Start recovering state" Sep 10 00:31:26.118104 containerd[1553]: time="2025-09-10T00:31:26.118097696Z" level=info msg="Start event monitor" Sep 10 00:31:26.118128 containerd[1553]: time="2025-09-10T00:31:26.118118696Z" level=info msg="Start snapshots syncer" Sep 10 00:31:26.118147 containerd[1553]: time="2025-09-10T00:31:26.118131850Z" level=info msg="Start cni network conf syncer for default" Sep 10 00:31:26.118147 containerd[1553]: time="2025-09-10T00:31:26.118141008Z" level=info msg="Start streaming server" Sep 10 00:31:26.118582 containerd[1553]: time="2025-09-10T00:31:26.118498759Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 00:31:26.118707 containerd[1553]: time="2025-09-10T00:31:26.118567257Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 00:31:26.122678 containerd[1553]: time="2025-09-10T00:31:26.120166226Z" level=info msg="containerd successfully booted in 0.045151s" Sep 10 00:31:26.121027 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 00:31:26.391868 tar[1548]: linux-amd64/LICENSE Sep 10 00:31:26.391979 tar[1548]: linux-amd64/README.md Sep 10 00:31:26.405994 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 00:31:27.183900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:31:27.185651 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 00:31:27.186834 systemd[1]: Startup finished in 7.626s (kernel) + 5.230s (userspace) = 12.856s. 
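
containerd reports booting in 0.045151s and serving on /run/containerd/containerd.sock, and the startup totals above add up (7.626 s kernel + 5.230 s userspace = 12.856 s). A minimal connectivity sketch for that socket, assuming it is run as root on this host; it only opens the Unix socket and speaks no gRPC:

import socket

SOCK = "/run/containerd/containerd.sock"  # path from the log above

probe = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    probe.connect(SOCK)
    print("containerd is accepting connections")
except OSError as exc:
    print(f"cannot reach {SOCK}: {exc}")
finally:
    probe.close()
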
Sep 10 00:31:27.209766 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:31:27.930893 kubelet[1656]: E0910 00:31:27.930722 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:31:27.935005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:31:27.935477 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:31:29.283948 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 00:31:29.296515 systemd[1]: Started sshd@0-10.0.0.21:22-10.0.0.1:36326.service - OpenSSH per-connection server daemon (10.0.0.1:36326). Sep 10 00:31:29.342566 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 36326 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:31:29.344482 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:31:29.354303 systemd-logind[1531]: New session 1 of user core. Sep 10 00:31:29.355379 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 00:31:29.364528 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 00:31:29.378848 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 00:31:29.381742 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 00:31:29.407038 (systemd)[1674]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:31:29.515769 systemd[1674]: Queued start job for default target default.target. Sep 10 00:31:29.516313 systemd[1674]: Created slice app.slice - User Application Slice. Sep 10 00:31:29.516342 systemd[1674]: Reached target paths.target - Paths. Sep 10 00:31:29.516362 systemd[1674]: Reached target timers.target - Timers. Sep 10 00:31:29.529363 systemd[1674]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 00:31:29.536853 systemd[1674]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 00:31:29.536975 systemd[1674]: Reached target sockets.target - Sockets. Sep 10 00:31:29.536994 systemd[1674]: Reached target basic.target - Basic System. Sep 10 00:31:29.537060 systemd[1674]: Reached target default.target - Main User Target. Sep 10 00:31:29.537105 systemd[1674]: Startup finished in 122ms. Sep 10 00:31:29.537680 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 00:31:29.539710 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 00:31:29.612644 systemd[1]: Started sshd@1-10.0.0.21:22-10.0.0.1:36330.service - OpenSSH per-connection server daemon (10.0.0.1:36330). Sep 10 00:31:29.648203 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 36330 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:31:29.650058 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:31:29.654578 systemd-logind[1531]: New session 2 of user core. Sep 10 00:31:29.664559 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 10 00:31:29.719385 sshd[1687]: pam_unix(sshd:session): session closed for user core Sep 10 00:31:29.728535 systemd[1]: Started sshd@2-10.0.0.21:22-10.0.0.1:36340.service - OpenSSH per-connection server daemon (10.0.0.1:36340). Sep 10 00:31:29.729164 systemd[1]: sshd@1-10.0.0.21:22-10.0.0.1:36330.service: Deactivated successfully. Sep 10 00:31:29.731941 systemd-logind[1531]: Session 2 logged out. Waiting for processes to exit. Sep 10 00:31:29.733363 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 00:31:29.734548 systemd-logind[1531]: Removed session 2. Sep 10 00:31:29.762834 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 36340 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:31:29.764253 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:31:29.768920 systemd-logind[1531]: New session 3 of user core. Sep 10 00:31:29.778563 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 00:31:29.829120 sshd[1692]: pam_unix(sshd:session): session closed for user core Sep 10 00:31:29.838456 systemd[1]: Started sshd@3-10.0.0.21:22-10.0.0.1:36354.service - OpenSSH per-connection server daemon (10.0.0.1:36354). Sep 10 00:31:29.839020 systemd[1]: sshd@2-10.0.0.21:22-10.0.0.1:36340.service: Deactivated successfully. Sep 10 00:31:29.841357 systemd-logind[1531]: Session 3 logged out. Waiting for processes to exit. Sep 10 00:31:29.842026 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 00:31:29.843484 systemd-logind[1531]: Removed session 3. Sep 10 00:31:29.874302 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 36354 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:31:29.876031 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:31:29.879859 systemd-logind[1531]: New session 4 of user core. Sep 10 00:31:29.889495 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 00:31:29.944624 sshd[1700]: pam_unix(sshd:session): session closed for user core Sep 10 00:31:29.958556 systemd[1]: Started sshd@4-10.0.0.21:22-10.0.0.1:51850.service - OpenSSH per-connection server daemon (10.0.0.1:51850). Sep 10 00:31:29.959106 systemd[1]: sshd@3-10.0.0.21:22-10.0.0.1:36354.service: Deactivated successfully. Sep 10 00:31:29.961884 systemd-logind[1531]: Session 4 logged out. Waiting for processes to exit. Sep 10 00:31:29.962971 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 00:31:29.963943 systemd-logind[1531]: Removed session 4. Sep 10 00:31:29.993891 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 51850 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:31:29.995932 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:31:30.000160 systemd-logind[1531]: New session 5 of user core. Sep 10 00:31:30.010516 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 10 00:31:30.069619 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 00:31:30.069969 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:31:30.092616 sudo[1715]: pam_unix(sudo:session): session closed for user root Sep 10 00:31:30.094502 sshd[1708]: pam_unix(sshd:session): session closed for user core Sep 10 00:31:30.107479 systemd[1]: Started sshd@5-10.0.0.21:22-10.0.0.1:51858.service - OpenSSH per-connection server daemon (10.0.0.1:51858). 
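
Each SSH connection above follows the same lifecycle: "Accepted publickey", a pam_unix session open, logind's "New session N of user core", and eventually the matching close and "Removed session N". A sketch that pairs the logind open/close entries, assuming journal lines in the format shown here:

import re

NEW = re.compile(r"systemd-logind\[\d+\]: New session (\d+) of user (\w+)")
REMOVED = re.compile(r"systemd-logind\[\d+\]: Removed session (\d+)")

def paired_sessions(lines):
    """Yield (session_id, user) for each session seen both opening and closing."""
    open_sessions = {}
    for line in lines:
        if m := NEW.search(line):
            open_sessions[m.group(1)] = m.group(2)
        elif (m := REMOVED.search(line)) and m.group(1) in open_sessions:
            yield m.group(1), open_sessions.pop(m.group(1))
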
Sep 10 00:31:30.107997 systemd[1]: sshd@4-10.0.0.21:22-10.0.0.1:51850.service: Deactivated successfully. Sep 10 00:31:30.109940 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 00:31:30.110568 systemd-logind[1531]: Session 5 logged out. Waiting for processes to exit. Sep 10 00:31:30.111854 systemd-logind[1531]: Removed session 5. Sep 10 00:31:30.141481 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 51858 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:31:30.143023 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:31:30.148210 systemd-logind[1531]: New session 6 of user core. Sep 10 00:31:30.158095 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 10 00:31:30.213875 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 00:31:30.214356 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:31:30.218097 sudo[1725]: pam_unix(sudo:session): session closed for user root Sep 10 00:31:30.224841 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 10 00:31:30.225189 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:31:30.244461 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 10 00:31:30.246317 auditctl[1728]: No rules Sep 10 00:31:30.247629 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 00:31:30.247964 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 10 00:31:30.266701 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 10 00:31:30.302163 augenrules[1747]: No rules Sep 10 00:31:30.304312 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 10 00:31:30.306165 sudo[1724]: pam_unix(sudo:session): session closed for user root Sep 10 00:31:30.308127 sshd[1717]: pam_unix(sshd:session): session closed for user core Sep 10 00:31:30.322490 systemd[1]: Started sshd@6-10.0.0.21:22-10.0.0.1:51862.service - OpenSSH per-connection server daemon (10.0.0.1:51862). Sep 10 00:31:30.322971 systemd[1]: sshd@5-10.0.0.21:22-10.0.0.1:51858.service: Deactivated successfully. Sep 10 00:31:30.325771 systemd-logind[1531]: Session 6 logged out. Waiting for processes to exit. Sep 10 00:31:30.326529 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 00:31:30.327866 systemd-logind[1531]: Removed session 6. Sep 10 00:31:30.358083 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 51862 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:31:30.359539 sshd[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:31:30.363446 systemd-logind[1531]: New session 7 of user core. Sep 10 00:31:30.372514 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 00:31:30.425326 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 00:31:30.425676 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:31:31.155477 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 10 00:31:31.155685 (dockerd)[1778]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 00:31:31.844437 dockerd[1778]: time="2025-09-10T00:31:31.844327617Z" level=info msg="Starting up" Sep 10 00:31:32.477080 dockerd[1778]: time="2025-09-10T00:31:32.476994730Z" level=info msg="Loading containers: start." Sep 10 00:31:32.620259 kernel: Initializing XFRM netlink socket Sep 10 00:31:32.704613 systemd-networkd[1240]: docker0: Link UP Sep 10 00:31:32.733651 dockerd[1778]: time="2025-09-10T00:31:32.733490673Z" level=info msg="Loading containers: done." Sep 10 00:31:32.754598 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck495239550-merged.mount: Deactivated successfully. Sep 10 00:31:32.756429 dockerd[1778]: time="2025-09-10T00:31:32.756374787Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 00:31:32.756556 dockerd[1778]: time="2025-09-10T00:31:32.756537522Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 10 00:31:32.756704 dockerd[1778]: time="2025-09-10T00:31:32.756677996Z" level=info msg="Daemon has completed initialization" Sep 10 00:31:32.802007 dockerd[1778]: time="2025-09-10T00:31:32.801905219Z" level=info msg="API listen on /run/docker.sock" Sep 10 00:31:32.802657 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 00:31:33.864037 containerd[1553]: time="2025-09-10T00:31:33.863979737Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 10 00:31:34.616007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4258811640.mount: Deactivated successfully. 
Sep 10 00:31:35.672530 containerd[1553]: time="2025-09-10T00:31:35.672452195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:35.673250 containerd[1553]: time="2025-09-10T00:31:35.673163709Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 10 00:31:35.674454 containerd[1553]: time="2025-09-10T00:31:35.674399016Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:35.679406 containerd[1553]: time="2025-09-10T00:31:35.679370642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:35.680674 containerd[1553]: time="2025-09-10T00:31:35.680634162Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 1.816592058s" Sep 10 00:31:35.680728 containerd[1553]: time="2025-09-10T00:31:35.680694425Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 10 00:31:35.681586 containerd[1553]: time="2025-09-10T00:31:35.681557243Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 10 00:31:37.084282 containerd[1553]: time="2025-09-10T00:31:37.084207223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:37.085018 containerd[1553]: time="2025-09-10T00:31:37.084939868Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 10 00:31:37.086251 containerd[1553]: time="2025-09-10T00:31:37.086202506Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:37.089587 containerd[1553]: time="2025-09-10T00:31:37.089530989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:37.090548 containerd[1553]: time="2025-09-10T00:31:37.090495038Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.408895686s" Sep 10 00:31:37.090629 containerd[1553]: time="2025-09-10T00:31:37.090552325Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 10 
00:31:37.091213 containerd[1553]: time="2025-09-10T00:31:37.091190352Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 10 00:31:38.185496 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 00:31:38.196430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:31:38.413745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:31:38.418708 (kubelet)[2000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:31:38.531473 kubelet[2000]: E0910 00:31:38.531293 2000 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:31:38.538502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:31:38.538818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:31:38.745090 containerd[1553]: time="2025-09-10T00:31:38.744943460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:38.747644 containerd[1553]: time="2025-09-10T00:31:38.747576108Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 10 00:31:38.750209 containerd[1553]: time="2025-09-10T00:31:38.750157029Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:38.753327 containerd[1553]: time="2025-09-10T00:31:38.753277793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:38.754470 containerd[1553]: time="2025-09-10T00:31:38.754419424Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.663198024s" Sep 10 00:31:38.754470 containerd[1553]: time="2025-09-10T00:31:38.754464990Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 10 00:31:38.755886 containerd[1553]: time="2025-09-10T00:31:38.755820161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 10 00:31:40.224847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2280401557.mount: Deactivated successfully. 
Sep 10 00:31:40.998301 containerd[1553]: time="2025-09-10T00:31:40.998194154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:40.999065 containerd[1553]: time="2025-09-10T00:31:40.999007860Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 10 00:31:41.000348 containerd[1553]: time="2025-09-10T00:31:41.000318399Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:41.002504 containerd[1553]: time="2025-09-10T00:31:41.002459014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:41.003037 containerd[1553]: time="2025-09-10T00:31:41.003010899Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 2.247134212s" Sep 10 00:31:41.003080 containerd[1553]: time="2025-09-10T00:31:41.003047117Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 10 00:31:41.003654 containerd[1553]: time="2025-09-10T00:31:41.003617256Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 00:31:41.793094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858478171.mount: Deactivated successfully. 
Sep 10 00:31:42.586772 containerd[1553]: time="2025-09-10T00:31:42.586692904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:42.587463 containerd[1553]: time="2025-09-10T00:31:42.587389811Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 10 00:31:42.588797 containerd[1553]: time="2025-09-10T00:31:42.588709827Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:42.592206 containerd[1553]: time="2025-09-10T00:31:42.592155300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:42.594016 containerd[1553]: time="2025-09-10T00:31:42.593953322Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.590288116s" Sep 10 00:31:42.594016 containerd[1553]: time="2025-09-10T00:31:42.594001963Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 10 00:31:42.594835 containerd[1553]: time="2025-09-10T00:31:42.594706705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 00:31:43.158959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2058060707.mount: Deactivated successfully. 
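
Each pull above pairs a "bytes read" figure with a reported duration, which gives the effective registry throughput. Computing it from three of the values logged above:

# (bytes read, reported seconds) for three of the pulls above
pulls = {
    "kube-apiserver:v1.31.12": (28_079_631, 1.816592058),
    "kube-proxy:v1.31.12":     (30_384_255, 2.247134212),
    "coredns:v1.11.3":         (18_565_241, 1.590288116),
}
for image, (nbytes, secs) in pulls.items():
    print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s")
# -> roughly 15.5, 13.5 and 11.7 MB/s respectively
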
Sep 10 00:31:43.164202 containerd[1553]: time="2025-09-10T00:31:43.164163343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:43.164913 containerd[1553]: time="2025-09-10T00:31:43.164865901Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 10 00:31:43.165924 containerd[1553]: time="2025-09-10T00:31:43.165878520Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:43.168126 containerd[1553]: time="2025-09-10T00:31:43.168092754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:43.168718 containerd[1553]: time="2025-09-10T00:31:43.168684744Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 573.852624ms" Sep 10 00:31:43.168765 containerd[1553]: time="2025-09-10T00:31:43.168715221Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 10 00:31:43.169206 containerd[1553]: time="2025-09-10T00:31:43.169174783Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 10 00:31:43.700553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543933774.mount: Deactivated successfully. Sep 10 00:31:47.363500 containerd[1553]: time="2025-09-10T00:31:47.363404549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:47.372574 containerd[1553]: time="2025-09-10T00:31:47.372472518Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 10 00:31:47.378692 containerd[1553]: time="2025-09-10T00:31:47.378605542Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:47.395566 containerd[1553]: time="2025-09-10T00:31:47.395509539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:31:47.396790 containerd[1553]: time="2025-09-10T00:31:47.396741961Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.227537331s" Sep 10 00:31:47.396790 containerd[1553]: time="2025-09-10T00:31:47.396785242Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 10 00:31:48.789020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 10 00:31:48.800384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:31:48.979479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:31:48.985332 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:31:49.031506 kubelet[2163]: E0910 00:31:49.031364 2163 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:31:49.036576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:31:49.036939 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:31:50.343879 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:31:50.353465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:31:50.376903 systemd[1]: Reloading requested from client PID 2180 ('systemctl') (unit session-7.scope)... Sep 10 00:31:50.376918 systemd[1]: Reloading... Sep 10 00:31:50.448260 zram_generator::config[2219]: No configuration found. Sep 10 00:31:50.873831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:31:50.960058 systemd[1]: Reloading finished in 582 ms. Sep 10 00:31:51.011568 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 00:31:51.011715 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 00:31:51.012221 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:31:51.014828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:31:51.187282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:31:51.193065 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:31:51.240209 kubelet[2279]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:31:51.240209 kubelet[2279]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 10 00:31:51.240209 kubelet[2279]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
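
At this point the kubelet has failed three times with the same error: /var/lib/kubelet/config.yaml does not exist yet. On a node like this one the file is normally produced during cluster bootstrap (e.g. by kubeadm) rather than by hand, after which the start above gets past config loading. Purely as an illustration of what the missing file is, not this node's actual configuration, a minimal KubeletConfiguration could be written like this; the field values mirror ones visible elsewhere in this log (cgroupfs driver, static pod path, client CA bundle path):

from pathlib import Path

# Illustrative only -- not this node's real file. Values mirror details
# seen elsewhere in this log (cgroup driver, static pod path, client CA).
MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
staticPodPath: /etc/kubernetes/manifests
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
"""

Path("/var/lib/kubelet/config.yaml").write_text(MINIMAL_KUBELET_CONFIG)
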
Sep 10 00:31:51.240715 kubelet[2279]: I0910 00:31:51.240284 2279 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:31:51.492684 kubelet[2279]: I0910 00:31:51.492505 2279 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 00:31:51.492684 kubelet[2279]: I0910 00:31:51.492546 2279 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:31:51.492870 kubelet[2279]: I0910 00:31:51.492800 2279 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 00:31:51.512678 kubelet[2279]: E0910 00:31:51.512629 2279 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:31:51.513382 kubelet[2279]: I0910 00:31:51.513353 2279 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:31:51.520252 kubelet[2279]: E0910 00:31:51.520182 2279 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:31:51.520252 kubelet[2279]: I0910 00:31:51.520223 2279 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:31:51.528724 kubelet[2279]: I0910 00:31:51.528667 2279 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 00:31:51.529985 kubelet[2279]: I0910 00:31:51.529956 2279 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 00:31:51.530206 kubelet[2279]: I0910 00:31:51.530156 2279 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:31:51.530439 kubelet[2279]: I0910 00:31:51.530200 2279 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 10 00:31:51.530543 kubelet[2279]: I0910 00:31:51.530466 2279 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:31:51.530543 kubelet[2279]: I0910 00:31:51.530476 2279 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 00:31:51.530655 kubelet[2279]: I0910 00:31:51.530633 2279 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:31:51.533485 kubelet[2279]: I0910 00:31:51.533452 2279 kubelet.go:408] "Attempting to sync node with API server" Sep 10 00:31:51.533528 kubelet[2279]: I0910 00:31:51.533486 2279 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:31:51.533557 kubelet[2279]: I0910 00:31:51.533548 2279 kubelet.go:314] "Adding apiserver pod source" Sep 10 00:31:51.533598 kubelet[2279]: I0910 00:31:51.533579 2279 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:31:51.535757 kubelet[2279]: W0910 00:31:51.535685 2279 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 10 00:31:51.535805 kubelet[2279]: E0910 00:31:51.535764 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:31:51.536533 kubelet[2279]: I0910 00:31:51.536493 2279 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 10 00:31:51.536867 kubelet[2279]: W0910 00:31:51.536825 2279 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 10 00:31:51.536919 kubelet[2279]: E0910 00:31:51.536872 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:31:51.537061 kubelet[2279]: I0910 00:31:51.537023 2279 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:31:51.537860 kubelet[2279]: W0910 00:31:51.537813 2279 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 10 00:31:51.540031 kubelet[2279]: I0910 00:31:51.540007 2279 server.go:1274] "Started kubelet" Sep 10 00:31:51.541395 kubelet[2279]: I0910 00:31:51.540436 2279 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:31:51.541395 kubelet[2279]: I0910 00:31:51.540678 2279 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:31:51.541395 kubelet[2279]: I0910 00:31:51.540832 2279 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:31:51.542248 kubelet[2279]: I0910 00:31:51.541665 2279 server.go:449] "Adding debug handlers to kubelet server" Sep 10 00:31:51.542248 kubelet[2279]: I0910 00:31:51.541875 2279 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:31:51.544153 kubelet[2279]: I0910 00:31:51.544124 2279 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:31:51.545013 kubelet[2279]: I0910 00:31:51.544976 2279 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 00:31:51.545110 kubelet[2279]: I0910 00:31:51.545093 2279 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 00:31:51.545188 kubelet[2279]: I0910 00:31:51.545172 2279 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:31:51.545709 kubelet[2279]: W0910 00:31:51.545608 2279 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 10 00:31:51.545709 kubelet[2279]: E0910 00:31:51.545654 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" 
logger="UnhandledError" Sep 10 00:31:51.547905 kubelet[2279]: E0910 00:31:51.547851 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:51.547987 kubelet[2279]: E0910 00:31:51.547949 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="200ms" Sep 10 00:31:51.548061 kubelet[2279]: E0910 00:31:51.548039 2279 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:31:51.548117 kubelet[2279]: I0910 00:31:51.548097 2279 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:31:51.548117 kubelet[2279]: I0910 00:31:51.548108 2279 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:31:51.548207 kubelet[2279]: I0910 00:31:51.548175 2279 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:31:51.548729 kubelet[2279]: E0910 00:31:51.547334 2279 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c47adac4487a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:31:51.539976314 +0000 UTC m=+0.341878467,LastTimestamp:2025-09-10 00:31:51.539976314 +0000 UTC m=+0.341878467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:31:51.569756 kubelet[2279]: I0910 00:31:51.569711 2279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 00:31:51.571814 kubelet[2279]: I0910 00:31:51.571728 2279 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 00:31:51.571814 kubelet[2279]: I0910 00:31:51.571775 2279 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 00:31:51.572196 kubelet[2279]: I0910 00:31:51.572082 2279 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 00:31:51.572196 kubelet[2279]: E0910 00:31:51.572141 2279 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:31:51.573707 kubelet[2279]: W0910 00:31:51.573617 2279 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 10 00:31:51.573821 kubelet[2279]: E0910 00:31:51.573800 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:31:51.578104 kubelet[2279]: I0910 00:31:51.578088 2279 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 00:31:51.578189 kubelet[2279]: I0910 00:31:51.578161 2279 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 00:31:51.578189 kubelet[2279]: I0910 00:31:51.578185 2279 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:31:51.648758 kubelet[2279]: E0910 00:31:51.648657 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:51.672872 kubelet[2279]: E0910 00:31:51.672820 2279 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:31:51.748819 kubelet[2279]: E0910 00:31:51.748624 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="400ms" Sep 10 00:31:51.749597 kubelet[2279]: E0910 00:31:51.749549 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:51.850184 kubelet[2279]: E0910 00:31:51.850123 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:51.873424 kubelet[2279]: E0910 00:31:51.873356 2279 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:31:51.951189 kubelet[2279]: E0910 00:31:51.951119 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:52.051610 kubelet[2279]: E0910 00:31:52.051406 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:52.107993 kubelet[2279]: I0910 00:31:52.107940 2279 policy_none.go:49] "None policy: Start" Sep 10 00:31:52.109017 kubelet[2279]: I0910 00:31:52.108978 2279 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 00:31:52.109017 kubelet[2279]: I0910 00:31:52.109003 2279 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:31:52.120205 kubelet[2279]: I0910 00:31:52.120169 2279 manager.go:513] "Failed to read data 
from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:31:52.120428 kubelet[2279]: I0910 00:31:52.120404 2279 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:31:52.120459 kubelet[2279]: I0910 00:31:52.120424 2279 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:31:52.121224 kubelet[2279]: I0910 00:31:52.121185 2279 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:31:52.122277 kubelet[2279]: E0910 00:31:52.122260 2279 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 00:31:52.150081 kubelet[2279]: E0910 00:31:52.150039 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="800ms" Sep 10 00:31:52.222462 kubelet[2279]: I0910 00:31:52.222425 2279 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:31:52.222794 kubelet[2279]: E0910 00:31:52.222756 2279 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Sep 10 00:31:52.349717 kubelet[2279]: I0910 00:31:52.349603 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:52.349717 kubelet[2279]: I0910 00:31:52.349635 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:52.349717 kubelet[2279]: I0910 00:31:52.349652 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:31:52.349717 kubelet[2279]: I0910 00:31:52.349683 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07a2c04fcad7841f3214c4637713f0e0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"07a2c04fcad7841f3214c4637713f0e0\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:31:52.349717 kubelet[2279]: I0910 00:31:52.349699 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07a2c04fcad7841f3214c4637713f0e0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"07a2c04fcad7841f3214c4637713f0e0\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:31:52.350214 kubelet[2279]: I0910 00:31:52.349720 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07a2c04fcad7841f3214c4637713f0e0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"07a2c04fcad7841f3214c4637713f0e0\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:31:52.350214 kubelet[2279]: I0910 00:31:52.349758 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:52.350214 kubelet[2279]: I0910 00:31:52.349785 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:52.350214 kubelet[2279]: I0910 00:31:52.349804 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:52.426697 kubelet[2279]: I0910 00:31:52.426649 2279 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:31:52.427109 kubelet[2279]: E0910 00:31:52.427079 2279 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Sep 10 00:31:52.522088 kubelet[2279]: W0910 00:31:52.522060 2279 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 10 00:31:52.522177 kubelet[2279]: E0910 00:31:52.522100 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:31:52.580757 kubelet[2279]: E0910 00:31:52.580719 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:52.581186 kubelet[2279]: E0910 00:31:52.581165 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:52.581415 containerd[1553]: time="2025-09-10T00:31:52.581356917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:07a2c04fcad7841f3214c4637713f0e0,Namespace:kube-system,Attempt:0,}" Sep 10 00:31:52.581855 containerd[1553]: time="2025-09-10T00:31:52.581491639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 10 00:31:52.582845 
kubelet[2279]: E0910 00:31:52.582826 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:52.583133 containerd[1553]: time="2025-09-10T00:31:52.583102721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 10 00:31:52.629868 kubelet[2279]: W0910 00:31:52.629705 2279 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 10 00:31:52.629868 kubelet[2279]: E0910 00:31:52.629775 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:31:52.651665 kubelet[2279]: E0910 00:31:52.651558 2279 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c47adac4487a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:31:51.539976314 +0000 UTC m=+0.341878467,LastTimestamp:2025-09-10 00:31:51.539976314 +0000 UTC m=+0.341878467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:31:52.829505 kubelet[2279]: I0910 00:31:52.829444 2279 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:31:52.829796 kubelet[2279]: E0910 00:31:52.829757 2279 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Sep 10 00:31:52.889912 kubelet[2279]: W0910 00:31:52.889742 2279 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 10 00:31:52.889912 kubelet[2279]: E0910 00:31:52.889820 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:31:52.950773 kubelet[2279]: E0910 00:31:52.950700 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="1.6s" Sep 10 00:31:53.075442 kubelet[2279]: W0910 00:31:53.075353 2279 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 10 00:31:53.075621 kubelet[2279]: E0910 00:31:53.075448 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:31:53.181559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019382939.mount: Deactivated successfully. Sep 10 00:31:53.187063 containerd[1553]: time="2025-09-10T00:31:53.186989741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:31:53.187905 containerd[1553]: time="2025-09-10T00:31:53.187866595Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:31:53.188728 containerd[1553]: time="2025-09-10T00:31:53.188686804Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 10 00:31:53.189655 containerd[1553]: time="2025-09-10T00:31:53.189616878Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:31:53.190459 containerd[1553]: time="2025-09-10T00:31:53.190430224Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 10 00:31:53.191171 containerd[1553]: time="2025-09-10T00:31:53.191137641Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 10 00:31:53.192135 containerd[1553]: time="2025-09-10T00:31:53.192098032Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:31:53.196419 containerd[1553]: time="2025-09-10T00:31:53.196379293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:31:53.197402 containerd[1553]: time="2025-09-10T00:31:53.197358319Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 614.203009ms" Sep 10 00:31:53.198684 containerd[1553]: time="2025-09-10T00:31:53.198654781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
617.110633ms" Sep 10 00:31:53.200417 containerd[1553]: time="2025-09-10T00:31:53.200377122Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 618.915679ms" Sep 10 00:31:53.387839 containerd[1553]: time="2025-09-10T00:31:53.385677903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:31:53.387839 containerd[1553]: time="2025-09-10T00:31:53.385769976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:31:53.387839 containerd[1553]: time="2025-09-10T00:31:53.385781387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:31:53.387839 containerd[1553]: time="2025-09-10T00:31:53.385888598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:31:53.399749 containerd[1553]: time="2025-09-10T00:31:53.399593975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:31:53.400256 containerd[1553]: time="2025-09-10T00:31:53.399765918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:31:53.400256 containerd[1553]: time="2025-09-10T00:31:53.399783331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:31:53.400764 containerd[1553]: time="2025-09-10T00:31:53.400678109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:31:53.415833 containerd[1553]: time="2025-09-10T00:31:53.415435770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:31:53.415833 containerd[1553]: time="2025-09-10T00:31:53.415547129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:31:53.415833 containerd[1553]: time="2025-09-10T00:31:53.415564472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:31:53.415833 containerd[1553]: time="2025-09-10T00:31:53.415704775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:31:53.488879 containerd[1553]: time="2025-09-10T00:31:53.488699472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"20fff677d34390b638df7cbd4c6c392229008cb1262aa37510b1c42f7b303ba3\"" Sep 10 00:31:53.488995 containerd[1553]: time="2025-09-10T00:31:53.488744637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bf6f3573b3c5ede5fb4003aa25cb3f4bc8ddf62b405d75ab0426dd0991d9b89\"" Sep 10 00:31:53.490591 kubelet[2279]: E0910 00:31:53.490537 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:53.490966 kubelet[2279]: E0910 00:31:53.490866 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:53.493127 containerd[1553]: time="2025-09-10T00:31:53.493099866Z" level=info msg="CreateContainer within sandbox \"9bf6f3573b3c5ede5fb4003aa25cb3f4bc8ddf62b405d75ab0426dd0991d9b89\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 00:31:53.494217 containerd[1553]: time="2025-09-10T00:31:53.494180873Z" level=info msg="CreateContainer within sandbox \"20fff677d34390b638df7cbd4c6c392229008cb1262aa37510b1c42f7b303ba3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 00:31:53.512593 containerd[1553]: time="2025-09-10T00:31:53.512539309Z" level=info msg="CreateContainer within sandbox \"9bf6f3573b3c5ede5fb4003aa25cb3f4bc8ddf62b405d75ab0426dd0991d9b89\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"42cd8814772e1db436e85836cfd2bf5426ca2100d9c6b64e6e43cd83c72b35ff\"" Sep 10 00:31:53.513804 containerd[1553]: time="2025-09-10T00:31:53.513760569Z" level=info msg="StartContainer for \"42cd8814772e1db436e85836cfd2bf5426ca2100d9c6b64e6e43cd83c72b35ff\"" Sep 10 00:31:53.515508 containerd[1553]: time="2025-09-10T00:31:53.515481026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:07a2c04fcad7841f3214c4637713f0e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"77a13a9426249c4df62f85c645a4d5341d6e1793e595cf2e60892f57247a419e\"" Sep 10 00:31:53.517098 kubelet[2279]: E0910 00:31:53.516323 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:53.518375 containerd[1553]: time="2025-09-10T00:31:53.518334509Z" level=info msg="CreateContainer within sandbox \"20fff677d34390b638df7cbd4c6c392229008cb1262aa37510b1c42f7b303ba3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1d9919f1dc9857933d19714f3552cba4a0768951031469c67e11d115e1f5ecdd\"" Sep 10 00:31:53.518422 containerd[1553]: time="2025-09-10T00:31:53.518344287Z" level=info msg="CreateContainer within sandbox \"77a13a9426249c4df62f85c645a4d5341d6e1793e595cf2e60892f57247a419e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 00:31:53.518787 containerd[1553]: time="2025-09-10T00:31:53.518751922Z" level=info msg="StartContainer for 
\"1d9919f1dc9857933d19714f3552cba4a0768951031469c67e11d115e1f5ecdd\"" Sep 10 00:31:53.537226 containerd[1553]: time="2025-09-10T00:31:53.537091151Z" level=info msg="CreateContainer within sandbox \"77a13a9426249c4df62f85c645a4d5341d6e1793e595cf2e60892f57247a419e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"68b78c9a320d70c8d125e71d95b4deb81b68f6b41f1eb093b2e0556061511533\"" Sep 10 00:31:53.539393 containerd[1553]: time="2025-09-10T00:31:53.539338015Z" level=info msg="StartContainer for \"68b78c9a320d70c8d125e71d95b4deb81b68f6b41f1eb093b2e0556061511533\"" Sep 10 00:31:53.591090 kubelet[2279]: E0910 00:31:53.591051 2279 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:31:53.624312 containerd[1553]: time="2025-09-10T00:31:53.621061105Z" level=info msg="StartContainer for \"42cd8814772e1db436e85836cfd2bf5426ca2100d9c6b64e6e43cd83c72b35ff\" returns successfully" Sep 10 00:31:53.624312 containerd[1553]: time="2025-09-10T00:31:53.623193104Z" level=info msg="StartContainer for \"68b78c9a320d70c8d125e71d95b4deb81b68f6b41f1eb093b2e0556061511533\" returns successfully" Sep 10 00:31:53.630272 containerd[1553]: time="2025-09-10T00:31:53.629652980Z" level=info msg="StartContainer for \"1d9919f1dc9857933d19714f3552cba4a0768951031469c67e11d115e1f5ecdd\" returns successfully" Sep 10 00:31:53.631827 kubelet[2279]: I0910 00:31:53.631810 2279 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:31:53.632335 kubelet[2279]: E0910 00:31:53.632295 2279 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Sep 10 00:31:54.611935 kubelet[2279]: E0910 00:31:54.611871 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:54.612818 kubelet[2279]: E0910 00:31:54.612748 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:54.616868 kubelet[2279]: E0910 00:31:54.616811 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:55.245126 kubelet[2279]: I0910 00:31:55.245079 2279 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:31:55.570503 kubelet[2279]: E0910 00:31:55.570344 2279 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 00:31:55.617604 kubelet[2279]: E0910 00:31:55.617544 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:55.617604 kubelet[2279]: E0910 00:31:55.617559 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 10 00:31:55.618094 kubelet[2279]: E0910 00:31:55.617756 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:55.772445 kubelet[2279]: I0910 00:31:55.772389 2279 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:31:56.538652 kubelet[2279]: I0910 00:31:56.538605 2279 apiserver.go:52] "Watching apiserver" Sep 10 00:31:56.546093 kubelet[2279]: I0910 00:31:56.546054 2279 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:31:56.627146 kubelet[2279]: E0910 00:31:56.627102 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:57.619082 kubelet[2279]: E0910 00:31:57.619030 2279 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:57.636722 systemd[1]: Reloading requested from client PID 2552 ('systemctl') (unit session-7.scope)... Sep 10 00:31:57.636742 systemd[1]: Reloading... Sep 10 00:31:57.756700 zram_generator::config[2591]: No configuration found. Sep 10 00:31:57.880561 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:31:57.980196 systemd[1]: Reloading finished in 343 ms. Sep 10 00:31:58.014876 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:31:58.022653 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:31:58.023104 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:31:58.034426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:31:58.250824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:31:58.256162 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:31:58.293789 kubelet[2646]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:31:58.293789 kubelet[2646]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 10 00:31:58.293789 kubelet[2646]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 00:31:58.294292 kubelet[2646]: I0910 00:31:58.293840 2646 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:31:58.300623 kubelet[2646]: I0910 00:31:58.300584 2646 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 00:31:58.300623 kubelet[2646]: I0910 00:31:58.300609 2646 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:31:58.300851 kubelet[2646]: I0910 00:31:58.300828 2646 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 00:31:58.302041 kubelet[2646]: I0910 00:31:58.302017 2646 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 00:31:58.304108 kubelet[2646]: I0910 00:31:58.303782 2646 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:31:58.307683 kubelet[2646]: E0910 00:31:58.307652 2646 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:31:58.307683 kubelet[2646]: I0910 00:31:58.307677 2646 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:31:58.313410 kubelet[2646]: I0910 00:31:58.313363 2646 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 00:31:58.313948 kubelet[2646]: I0910 00:31:58.313918 2646 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 00:31:58.314137 kubelet[2646]: I0910 00:31:58.314088 2646 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:31:58.314345 kubelet[2646]: I0910 00:31:58.314129 2646 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 10 00:31:58.314442 kubelet[2646]: I0910 00:31:58.314347 2646 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:31:58.314442 kubelet[2646]: I0910 00:31:58.314374 2646 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 00:31:58.314442 kubelet[2646]: I0910 00:31:58.314407 2646 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:31:58.314560 kubelet[2646]: I0910 00:31:58.314533 2646 kubelet.go:408] "Attempting to sync node with API server" Sep 10 00:31:58.314560 kubelet[2646]: I0910 00:31:58.314550 2646 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:31:58.314627 kubelet[2646]: I0910 00:31:58.314590 2646 kubelet.go:314] "Adding apiserver pod source" Sep 10 00:31:58.314627 kubelet[2646]: I0910 00:31:58.314602 2646 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:31:58.315306 kubelet[2646]: I0910 00:31:58.315276 2646 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 10 00:31:58.316269 kubelet[2646]: I0910 00:31:58.315628 2646 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:31:58.316269 kubelet[2646]: I0910 00:31:58.316064 2646 server.go:1274] "Started kubelet" Sep 10 00:31:58.316269 kubelet[2646]: I0910 00:31:58.316258 2646 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:31:58.316531 kubelet[2646]: I0910 00:31:58.316383 2646 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:31:58.317123 kubelet[2646]: I0910 00:31:58.316693 2646 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:31:58.319323 kubelet[2646]: I0910 00:31:58.318898 2646 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:31:58.322384 kubelet[2646]: I0910 00:31:58.320506 2646 server.go:449] "Adding 
debug handlers to kubelet server" Sep 10 00:31:58.322384 kubelet[2646]: I0910 00:31:58.322363 2646 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:31:58.326422 kubelet[2646]: I0910 00:31:58.326393 2646 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 00:31:58.326530 kubelet[2646]: I0910 00:31:58.326505 2646 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 00:31:58.326690 kubelet[2646]: I0910 00:31:58.326669 2646 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:31:58.327817 kubelet[2646]: E0910 00:31:58.327796 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:31:58.332824 kubelet[2646]: I0910 00:31:58.332795 2646 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:31:58.332824 kubelet[2646]: I0910 00:31:58.332819 2646 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:31:58.332951 kubelet[2646]: I0910 00:31:58.332924 2646 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:31:58.333332 kubelet[2646]: E0910 00:31:58.333313 2646 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:31:58.333800 kubelet[2646]: I0910 00:31:58.333779 2646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 00:31:58.335258 kubelet[2646]: I0910 00:31:58.335223 2646 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 00:31:58.335330 kubelet[2646]: I0910 00:31:58.335320 2646 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 00:31:58.335413 kubelet[2646]: I0910 00:31:58.335402 2646 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 00:31:58.335515 kubelet[2646]: E0910 00:31:58.335492 2646 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:31:58.381204 kubelet[2646]: I0910 00:31:58.381163 2646 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 00:31:58.381204 kubelet[2646]: I0910 00:31:58.381189 2646 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 00:31:58.381204 kubelet[2646]: I0910 00:31:58.381214 2646 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:31:58.381425 kubelet[2646]: I0910 00:31:58.381409 2646 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 00:31:58.381453 kubelet[2646]: I0910 00:31:58.381424 2646 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 00:31:58.381453 kubelet[2646]: I0910 00:31:58.381444 2646 policy_none.go:49] "None policy: Start" Sep 10 00:31:58.382100 kubelet[2646]: I0910 00:31:58.382057 2646 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 00:31:58.382100 kubelet[2646]: I0910 00:31:58.382085 2646 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:31:58.382285 kubelet[2646]: I0910 00:31:58.382270 2646 state_mem.go:75] "Updated machine memory state" Sep 10 00:31:58.383755 kubelet[2646]: I0910 00:31:58.383726 2646 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:31:58.383939 kubelet[2646]: I0910 00:31:58.383918 2646 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:31:58.383985 kubelet[2646]: I0910 00:31:58.383935 2646 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:31:58.384722 kubelet[2646]: I0910 00:31:58.384535 2646 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:31:58.490336 kubelet[2646]: I0910 00:31:58.490287 2646 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:31:58.498988 kubelet[2646]: E0910 00:31:58.498928 2646 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:31:58.500682 kubelet[2646]: I0910 00:31:58.500633 2646 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 10 00:31:58.500820 kubelet[2646]: I0910 00:31:58.500750 2646 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:31:58.528446 kubelet[2646]: I0910 00:31:58.528281 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:58.528446 kubelet[2646]: I0910 00:31:58.528331 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:58.528446 kubelet[2646]: I0910 00:31:58.528357 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:58.528446 kubelet[2646]: I0910 00:31:58.528382 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:31:58.528446 kubelet[2646]: I0910 00:31:58.528400 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:58.528707 kubelet[2646]: I0910 00:31:58.528412 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:31:58.528707 kubelet[2646]: I0910 00:31:58.528425 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07a2c04fcad7841f3214c4637713f0e0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"07a2c04fcad7841f3214c4637713f0e0\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:31:58.528707 kubelet[2646]: I0910 00:31:58.528444 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07a2c04fcad7841f3214c4637713f0e0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"07a2c04fcad7841f3214c4637713f0e0\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:31:58.528707 kubelet[2646]: I0910 00:31:58.528493 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07a2c04fcad7841f3214c4637713f0e0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"07a2c04fcad7841f3214c4637713f0e0\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:31:58.797395 kubelet[2646]: E0910 00:31:58.797195 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:58.797395 kubelet[2646]: E0910 00:31:58.797315 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:58.799314 kubelet[2646]: E0910 00:31:58.799270 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:59.315586 kubelet[2646]: I0910 00:31:59.315525 2646 apiserver.go:52] "Watching apiserver" Sep 10 00:31:59.327060 kubelet[2646]: I0910 00:31:59.327006 2646 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:31:59.351131 kubelet[2646]: E0910 00:31:59.351085 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:59.351775 kubelet[2646]: E0910 00:31:59.351751 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:59.352083 kubelet[2646]: E0910 00:31:59.352060 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:31:59.500597 kubelet[2646]: I0910 00:31:59.500277 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.500254405 podStartE2EDuration="1.500254405s" podCreationTimestamp="2025-09-10 00:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:31:59.499984144 +0000 UTC m=+1.239082153" watchObservedRunningTime="2025-09-10 00:31:59.500254405 +0000 UTC m=+1.239352404" Sep 10 00:31:59.552963 kubelet[2646]: I0910 00:31:59.552620 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.552593257 podStartE2EDuration="3.552593257s" podCreationTimestamp="2025-09-10 00:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:31:59.55241809 +0000 UTC m=+1.291516089" watchObservedRunningTime="2025-09-10 00:31:59.552593257 +0000 UTC m=+1.291691257" Sep 10 00:31:59.555310 kubelet[2646]: I0910 00:31:59.555268 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.555253572 podStartE2EDuration="1.555253572s" podCreationTimestamp="2025-09-10 00:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:31:59.542181872 +0000 UTC m=+1.281279871" watchObservedRunningTime="2025-09-10 00:31:59.555253572 +0000 UTC m=+1.294351581" Sep 10 00:32:00.352884 kubelet[2646]: E0910 00:32:00.352842 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:03.236866 kubelet[2646]: I0910 00:32:03.236822 2646 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 00:32:03.237494 kubelet[2646]: I0910 00:32:03.237372 2646 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 00:32:03.237535 containerd[1553]: time="2025-09-10T00:32:03.237168615Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 10 00:32:03.713195 kubelet[2646]: E0910 00:32:03.713029 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:04.264743 kubelet[2646]: I0910 00:32:04.264702 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/692b8c8f-1c5c-4bbb-a1df-1e6c1d82ffb5-kube-proxy\") pod \"kube-proxy-jxbld\" (UID: \"692b8c8f-1c5c-4bbb-a1df-1e6c1d82ffb5\") " pod="kube-system/kube-proxy-jxbld" Sep 10 00:32:04.264743 kubelet[2646]: I0910 00:32:04.264736 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/692b8c8f-1c5c-4bbb-a1df-1e6c1d82ffb5-xtables-lock\") pod \"kube-proxy-jxbld\" (UID: \"692b8c8f-1c5c-4bbb-a1df-1e6c1d82ffb5\") " pod="kube-system/kube-proxy-jxbld" Sep 10 00:32:04.264743 kubelet[2646]: I0910 00:32:04.264756 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/692b8c8f-1c5c-4bbb-a1df-1e6c1d82ffb5-lib-modules\") pod \"kube-proxy-jxbld\" (UID: \"692b8c8f-1c5c-4bbb-a1df-1e6c1d82ffb5\") " pod="kube-system/kube-proxy-jxbld" Sep 10 00:32:04.265354 kubelet[2646]: I0910 00:32:04.264774 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5csbh\" (UniqueName: \"kubernetes.io/projected/692b8c8f-1c5c-4bbb-a1df-1e6c1d82ffb5-kube-api-access-5csbh\") pod \"kube-proxy-jxbld\" (UID: \"692b8c8f-1c5c-4bbb-a1df-1e6c1d82ffb5\") " pod="kube-system/kube-proxy-jxbld" Sep 10 00:32:04.362682 kubelet[2646]: E0910 00:32:04.362615 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:04.365165 kubelet[2646]: I0910 00:32:04.365119 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7e9c693a-c7f7-42f4-9002-016b3bdd299c-var-lib-calico\") pod \"tigera-operator-58fc44c59b-d9x75\" (UID: \"7e9c693a-c7f7-42f4-9002-016b3bdd299c\") " pod="tigera-operator/tigera-operator-58fc44c59b-d9x75" Sep 10 00:32:04.365260 kubelet[2646]: I0910 00:32:04.365174 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46lx4\" (UniqueName: \"kubernetes.io/projected/7e9c693a-c7f7-42f4-9002-016b3bdd299c-kube-api-access-46lx4\") pod \"tigera-operator-58fc44c59b-d9x75\" (UID: \"7e9c693a-c7f7-42f4-9002-016b3bdd299c\") " pod="tigera-operator/tigera-operator-58fc44c59b-d9x75" Sep 10 00:32:04.471835 kubelet[2646]: E0910 00:32:04.471795 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:04.525584 kubelet[2646]: E0910 00:32:04.525419 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:04.526459 containerd[1553]: time="2025-09-10T00:32:04.526270162Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-jxbld,Uid:692b8c8f-1c5c-4bbb-a1df-1e6c1d82ffb5,Namespace:kube-system,Attempt:0,}" Sep 10 00:32:04.552657 containerd[1553]: time="2025-09-10T00:32:04.552530502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:32:04.552657 containerd[1553]: time="2025-09-10T00:32:04.552597710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:32:04.552657 containerd[1553]: time="2025-09-10T00:32:04.552610595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:04.552869 containerd[1553]: time="2025-09-10T00:32:04.552733740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:04.594683 containerd[1553]: time="2025-09-10T00:32:04.594621616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxbld,Uid:692b8c8f-1c5c-4bbb-a1df-1e6c1d82ffb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3354f77bf8ff796359a5f30d774ccb328fd85801089aff6463e9d60718e89b81\"" Sep 10 00:32:04.595702 kubelet[2646]: E0910 00:32:04.595663 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:04.598972 containerd[1553]: time="2025-09-10T00:32:04.598940686Z" level=info msg="CreateContainer within sandbox \"3354f77bf8ff796359a5f30d774ccb328fd85801089aff6463e9d60718e89b81\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 00:32:04.616080 containerd[1553]: time="2025-09-10T00:32:04.616016031Z" level=info msg="CreateContainer within sandbox \"3354f77bf8ff796359a5f30d774ccb328fd85801089aff6463e9d60718e89b81\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bfe0ccfa14b9a9037817bc8df8ddb0eea29838ea4bc96b4e63320af5f886c487\"" Sep 10 00:32:04.618037 containerd[1553]: time="2025-09-10T00:32:04.616778559Z" level=info msg="StartContainer for \"bfe0ccfa14b9a9037817bc8df8ddb0eea29838ea4bc96b4e63320af5f886c487\"" Sep 10 00:32:04.634275 containerd[1553]: time="2025-09-10T00:32:04.634210896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-d9x75,Uid:7e9c693a-c7f7-42f4-9002-016b3bdd299c,Namespace:tigera-operator,Attempt:0,}" Sep 10 00:32:04.676876 containerd[1553]: time="2025-09-10T00:32:04.675771856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:32:04.676876 containerd[1553]: time="2025-09-10T00:32:04.676846923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:32:04.676876 containerd[1553]: time="2025-09-10T00:32:04.676866630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:04.677056 containerd[1553]: time="2025-09-10T00:32:04.677010335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:04.689383 containerd[1553]: time="2025-09-10T00:32:04.689333732Z" level=info msg="StartContainer for \"bfe0ccfa14b9a9037817bc8df8ddb0eea29838ea4bc96b4e63320af5f886c487\" returns successfully" Sep 10 00:32:04.743830 containerd[1553]: time="2025-09-10T00:32:04.743759464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-d9x75,Uid:7e9c693a-c7f7-42f4-9002-016b3bdd299c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6236a2f1d989637f630bbe85528ab0e4ce89b38c377567c57ad5c981a5e423a7\"" Sep 10 00:32:04.747296 containerd[1553]: time="2025-09-10T00:32:04.747203671Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 10 00:32:05.369276 kubelet[2646]: E0910 00:32:05.369215 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:05.369922 kubelet[2646]: E0910 00:32:05.369899 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:05.370334 kubelet[2646]: E0910 00:32:05.370270 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:06.175397 kubelet[2646]: E0910 00:32:06.175346 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:06.187104 kubelet[2646]: I0910 00:32:06.186976 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jxbld" podStartSLOduration=2.186948382 podStartE2EDuration="2.186948382s" podCreationTimestamp="2025-09-10 00:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:32:05.388210071 +0000 UTC m=+7.127308070" watchObservedRunningTime="2025-09-10 00:32:06.186948382 +0000 UTC m=+7.926046391" Sep 10 00:32:06.317430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846419608.mount: Deactivated successfully. 
Sep 10 00:32:06.370554 kubelet[2646]: E0910 00:32:06.370519 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:06.714866 containerd[1553]: time="2025-09-10T00:32:06.714804990Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:06.715702 containerd[1553]: time="2025-09-10T00:32:06.715623271Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 10 00:32:06.717000 containerd[1553]: time="2025-09-10T00:32:06.716977706Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:06.719315 containerd[1553]: time="2025-09-10T00:32:06.719254019Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:06.719915 containerd[1553]: time="2025-09-10T00:32:06.719884001Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.972626747s" Sep 10 00:32:06.719954 containerd[1553]: time="2025-09-10T00:32:06.719916743Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 10 00:32:06.722329 containerd[1553]: time="2025-09-10T00:32:06.722302525Z" level=info msg="CreateContainer within sandbox \"6236a2f1d989637f630bbe85528ab0e4ce89b38c377567c57ad5c981a5e423a7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 10 00:32:06.733850 containerd[1553]: time="2025-09-10T00:32:06.733798168Z" level=info msg="CreateContainer within sandbox \"6236a2f1d989637f630bbe85528ab0e4ce89b38c377567c57ad5c981a5e423a7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1213c1332d68c34c49228766776d08f11b0a2987ed5f6d0862b46db4b2b7d58f\"" Sep 10 00:32:06.734358 containerd[1553]: time="2025-09-10T00:32:06.734324332Z" level=info msg="StartContainer for \"1213c1332d68c34c49228766776d08f11b0a2987ed5f6d0862b46db4b2b7d58f\"" Sep 10 00:32:06.792565 containerd[1553]: time="2025-09-10T00:32:06.792503254Z" level=info msg="StartContainer for \"1213c1332d68c34c49228766776d08f11b0a2987ed5f6d0862b46db4b2b7d58f\" returns successfully" Sep 10 00:32:07.383613 kubelet[2646]: I0910 00:32:07.383545 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-d9x75" podStartSLOduration=1.4080110719999999 podStartE2EDuration="3.383528864s" podCreationTimestamp="2025-09-10 00:32:04 +0000 UTC" firstStartedPulling="2025-09-10 00:32:04.745256837 +0000 UTC m=+6.484354837" lastFinishedPulling="2025-09-10 00:32:06.72077463 +0000 UTC m=+8.459872629" observedRunningTime="2025-09-10 00:32:07.382678253 +0000 UTC m=+9.121776252" watchObservedRunningTime="2025-09-10 00:32:07.383528864 +0000 UTC m=+9.122626853" Sep 10 00:32:08.949137 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-1213c1332d68c34c49228766776d08f11b0a2987ed5f6d0862b46db4b2b7d58f-rootfs.mount: Deactivated successfully. Sep 10 00:32:09.257980 containerd[1553]: time="2025-09-10T00:32:09.253513897Z" level=info msg="shim disconnected" id=1213c1332d68c34c49228766776d08f11b0a2987ed5f6d0862b46db4b2b7d58f namespace=k8s.io Sep 10 00:32:09.257980 containerd[1553]: time="2025-09-10T00:32:09.257890171Z" level=warning msg="cleaning up after shim disconnected" id=1213c1332d68c34c49228766776d08f11b0a2987ed5f6d0862b46db4b2b7d58f namespace=k8s.io Sep 10 00:32:09.257980 containerd[1553]: time="2025-09-10T00:32:09.257905209Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:32:09.381724 kubelet[2646]: I0910 00:32:09.381673 2646 scope.go:117] "RemoveContainer" containerID="1213c1332d68c34c49228766776d08f11b0a2987ed5f6d0862b46db4b2b7d58f" Sep 10 00:32:09.386945 containerd[1553]: time="2025-09-10T00:32:09.386830480Z" level=info msg="CreateContainer within sandbox \"6236a2f1d989637f630bbe85528ab0e4ce89b38c377567c57ad5c981a5e423a7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 10 00:32:09.406101 containerd[1553]: time="2025-09-10T00:32:09.406012900Z" level=info msg="CreateContainer within sandbox \"6236a2f1d989637f630bbe85528ab0e4ce89b38c377567c57ad5c981a5e423a7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"37abfe1ca88e7ddabb42669ae1c8c20212f75ac09adde4c6b3fdf444a2426092\"" Sep 10 00:32:09.408393 containerd[1553]: time="2025-09-10T00:32:09.407516891Z" level=info msg="StartContainer for \"37abfe1ca88e7ddabb42669ae1c8c20212f75ac09adde4c6b3fdf444a2426092\"" Sep 10 00:32:09.483526 containerd[1553]: time="2025-09-10T00:32:09.483450657Z" level=info msg="StartContainer for \"37abfe1ca88e7ddabb42669ae1c8c20212f75ac09adde4c6b3fdf444a2426092\" returns successfully" Sep 10 00:32:10.368462 update_engine[1537]: I20250910 00:32:10.368346 1537 update_attempter.cc:509] Updating boot flags... Sep 10 00:32:10.404278 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3096) Sep 10 00:32:10.440476 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3096) Sep 10 00:32:12.099771 sudo[1760]: pam_unix(sudo:session): session closed for user root Sep 10 00:32:12.102586 sshd[1753]: pam_unix(sshd:session): session closed for user core Sep 10 00:32:12.107302 systemd[1]: sshd@6-10.0.0.21:22-10.0.0.1:51862.service: Deactivated successfully. Sep 10 00:32:12.110327 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 00:32:12.110332 systemd-logind[1531]: Session 7 logged out. Waiting for processes to exit. Sep 10 00:32:12.112005 systemd-logind[1531]: Removed session 7. 
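The two pod_startup_latency_tracker entries above also let the tracker's arithmetic be checked directly. Reading the field relationships off the logged values themselves: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). For the tigera-operator pod: 3.383528864s − (00:32:06.72077463 − 00:32:04.745256837) = 3.383528864 − 1.975517793 ≈ 1.408011072s, matching the logged value. A short sketch of that subtraction, with the timestamps copied verbatim from the log:

```go
package main

import (
	"fmt"
	"time"
)

// mustParse reads timestamps in the format the kubelet log uses.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-10 00:32:04 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2025-09-10 00:32:04.745256837 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-09-10 00:32:06.72077463 +0000 UTC")   // lastFinishedPulling
	running := mustParse("2025-09-10 00:32:07.383528864 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration excludes pull time
	fmt.Println(e2e, slo)                // 3.383528864s 1.408011071s
}
```

Running it prints 3.383528864s and 1.408011071s; the logged 1.4080110719999999 is the same value with floating-point noise.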
Sep 10 00:32:15.131428 kubelet[2646]: I0910 00:32:15.131372 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvnm9\" (UniqueName: \"kubernetes.io/projected/a164c341-6765-4743-acab-80038b03fc0a-kube-api-access-mvnm9\") pod \"calico-typha-66c8554686-7cb45\" (UID: \"a164c341-6765-4743-acab-80038b03fc0a\") " pod="calico-system/calico-typha-66c8554686-7cb45" Sep 10 00:32:15.131428 kubelet[2646]: I0910 00:32:15.131425 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a164c341-6765-4743-acab-80038b03fc0a-tigera-ca-bundle\") pod \"calico-typha-66c8554686-7cb45\" (UID: \"a164c341-6765-4743-acab-80038b03fc0a\") " pod="calico-system/calico-typha-66c8554686-7cb45" Sep 10 00:32:15.131969 kubelet[2646]: I0910 00:32:15.131442 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a164c341-6765-4743-acab-80038b03fc0a-typha-certs\") pod \"calico-typha-66c8554686-7cb45\" (UID: \"a164c341-6765-4743-acab-80038b03fc0a\") " pod="calico-system/calico-typha-66c8554686-7cb45" Sep 10 00:32:15.287038 kubelet[2646]: E0910 00:32:15.286987 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:15.287710 containerd[1553]: time="2025-09-10T00:32:15.287661992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66c8554686-7cb45,Uid:a164c341-6765-4743-acab-80038b03fc0a,Namespace:calico-system,Attempt:0,}" Sep 10 00:32:15.314571 containerd[1553]: time="2025-09-10T00:32:15.314450854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:32:15.314571 containerd[1553]: time="2025-09-10T00:32:15.314510336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:32:15.314571 containerd[1553]: time="2025-09-10T00:32:15.314521928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:15.314836 containerd[1553]: time="2025-09-10T00:32:15.314622639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:15.417531 containerd[1553]: time="2025-09-10T00:32:15.417386287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66c8554686-7cb45,Uid:a164c341-6765-4743-acab-80038b03fc0a,Namespace:calico-system,Attempt:0,} returns sandbox id \"dbc674d77dba70467957f258800db7327c677b7868f12fe36cba4bd330a2a5b3\"" Sep 10 00:32:15.422016 kubelet[2646]: E0910 00:32:15.421990 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:15.428969 containerd[1553]: time="2025-09-10T00:32:15.428938232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 00:32:15.534296 kubelet[2646]: I0910 00:32:15.533779 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b07d9b29-7a0e-48fb-9c0d-076fcef37e77-cni-log-dir\") pod \"calico-node-dgd87\" (UID: \"b07d9b29-7a0e-48fb-9c0d-076fcef37e77\") " pod="calico-system/calico-node-dgd87" Sep 10 00:32:15.534296 kubelet[2646]: I0910 00:32:15.533837 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b07d9b29-7a0e-48fb-9c0d-076fcef37e77-var-lib-calico\") pod \"calico-node-dgd87\" (UID: \"b07d9b29-7a0e-48fb-9c0d-076fcef37e77\") " pod="calico-system/calico-node-dgd87" Sep 10 00:32:15.534296 kubelet[2646]: I0910 00:32:15.533859 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b07d9b29-7a0e-48fb-9c0d-076fcef37e77-flexvol-driver-host\") pod \"calico-node-dgd87\" (UID: \"b07d9b29-7a0e-48fb-9c0d-076fcef37e77\") " pod="calico-system/calico-node-dgd87" Sep 10 00:32:15.534296 kubelet[2646]: I0910 00:32:15.533878 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b07d9b29-7a0e-48fb-9c0d-076fcef37e77-policysync\") pod \"calico-node-dgd87\" (UID: \"b07d9b29-7a0e-48fb-9c0d-076fcef37e77\") " pod="calico-system/calico-node-dgd87" Sep 10 00:32:15.534296 kubelet[2646]: I0910 00:32:15.533892 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b07d9b29-7a0e-48fb-9c0d-076fcef37e77-cni-net-dir\") pod \"calico-node-dgd87\" (UID: \"b07d9b29-7a0e-48fb-9c0d-076fcef37e77\") " pod="calico-system/calico-node-dgd87" Sep 10 00:32:15.534630 kubelet[2646]: I0910 00:32:15.533907 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b07d9b29-7a0e-48fb-9c0d-076fcef37e77-cni-bin-dir\") pod \"calico-node-dgd87\" (UID: \"b07d9b29-7a0e-48fb-9c0d-076fcef37e77\") " pod="calico-system/calico-node-dgd87" Sep 10 00:32:15.534630 kubelet[2646]: I0910 00:32:15.533920 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b07d9b29-7a0e-48fb-9c0d-076fcef37e77-lib-modules\") pod \"calico-node-dgd87\" (UID: \"b07d9b29-7a0e-48fb-9c0d-076fcef37e77\") " pod="calico-system/calico-node-dgd87" Sep 10 00:32:15.534630 kubelet[2646]: I0910 00:32:15.533934 2646 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b07d9b29-7a0e-48fb-9c0d-076fcef37e77-var-run-calico\") pod \"calico-node-dgd87\" (UID: \"b07d9b29-7a0e-48fb-9c0d-076fcef37e77\") " pod="calico-system/calico-node-dgd87" Sep 10 00:32:15.534630 kubelet[2646]: I0910 00:32:15.534038 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b07d9b29-7a0e-48fb-9c0d-076fcef37e77-tigera-ca-bundle\") pod \"calico-node-dgd87\" (UID: \"b07d9b29-7a0e-48fb-9c0d-076fcef37e77\") " pod="calico-system/calico-node-dgd87" Sep 10 00:32:15.534630 kubelet[2646]: I0910 00:32:15.534098 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwxlb\" (UniqueName: \"kubernetes.io/projected/b07d9b29-7a0e-48fb-9c0d-076fcef37e77-kube-api-access-wwxlb\") pod \"calico-node-dgd87\" (UID: \"b07d9b29-7a0e-48fb-9c0d-076fcef37e77\") " pod="calico-system/calico-node-dgd87" Sep 10 00:32:15.534806 kubelet[2646]: I0910 00:32:15.534164 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b07d9b29-7a0e-48fb-9c0d-076fcef37e77-node-certs\") pod \"calico-node-dgd87\" (UID: \"b07d9b29-7a0e-48fb-9c0d-076fcef37e77\") " pod="calico-system/calico-node-dgd87" Sep 10 00:32:15.534806 kubelet[2646]: I0910 00:32:15.534190 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b07d9b29-7a0e-48fb-9c0d-076fcef37e77-xtables-lock\") pod \"calico-node-dgd87\" (UID: \"b07d9b29-7a0e-48fb-9c0d-076fcef37e77\") " pod="calico-system/calico-node-dgd87" Sep 10 00:32:15.640007 kubelet[2646]: E0910 00:32:15.639788 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.640007 kubelet[2646]: W0910 00:32:15.639821 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.640007 kubelet[2646]: E0910 00:32:15.639852 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.640706 kubelet[2646]: E0910 00:32:15.640664 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.640706 kubelet[2646]: W0910 00:32:15.640704 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.640807 kubelet[2646]: E0910 00:32:15.640728 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:32:15.646291 kubelet[2646]: E0910 00:32:15.646253 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.646435 kubelet[2646]: W0910 00:32:15.646301 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.646435 kubelet[2646]: E0910 00:32:15.646331 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.655928 kubelet[2646]: E0910 00:32:15.655822 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lkztn" podUID="da4aa882-2a3b-4ce6-a838-c6e29f20e7da" Sep 10 00:32:15.659733 containerd[1553]: time="2025-09-10T00:32:15.659672204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dgd87,Uid:b07d9b29-7a0e-48fb-9c0d-076fcef37e77,Namespace:calico-system,Attempt:0,}" Sep 10 00:32:15.689754 containerd[1553]: time="2025-09-10T00:32:15.689482797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:32:15.689754 containerd[1553]: time="2025-09-10T00:32:15.689540476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:32:15.689754 containerd[1553]: time="2025-09-10T00:32:15.689566706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:15.689987 containerd[1553]: time="2025-09-10T00:32:15.689750283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:15.738815 kubelet[2646]: E0910 00:32:15.738768 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.738815 kubelet[2646]: W0910 00:32:15.738799 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.739105 kubelet[2646]: E0910 00:32:15.738879 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.739572 kubelet[2646]: E0910 00:32:15.739202 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.739572 kubelet[2646]: W0910 00:32:15.739262 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.739572 kubelet[2646]: E0910 00:32:15.739279 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:32:15.746560 containerd[1553]: time="2025-09-10T00:32:15.746495043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dgd87,Uid:b07d9b29-7a0e-48fb-9c0d-076fcef37e77,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca4d0a9927e1793d56741ff2d76d875d15b9204b529e892bb69d5bd759491c28\"" Sep 10 00:32:15.746679 kubelet[2646]: E0910 00:32:15.746623 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.746679 kubelet[2646]: W0910 00:32:15.746649 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.746679 kubelet[2646]: E0910 00:32:15.746662 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.747733 kubelet[2646]: E0910 00:32:15.747716 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.747733 kubelet[2646]: W0910 00:32:15.747730 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.747799 kubelet[2646]: E0910 00:32:15.747744 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.748049 kubelet[2646]: E0910 00:32:15.748033 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.748049 kubelet[2646]: W0910 00:32:15.748047 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.748119 kubelet[2646]: E0910 00:32:15.748058 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.836948 kubelet[2646]: E0910 00:32:15.836910 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.836948 kubelet[2646]: W0910 00:32:15.836935 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.837290 kubelet[2646]: E0910 00:32:15.836959 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:32:15.837290 kubelet[2646]: I0910 00:32:15.836999 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9s4l\" (UniqueName: \"kubernetes.io/projected/da4aa882-2a3b-4ce6-a838-c6e29f20e7da-kube-api-access-h9s4l\") pod \"csi-node-driver-lkztn\" (UID: \"da4aa882-2a3b-4ce6-a838-c6e29f20e7da\") " pod="calico-system/csi-node-driver-lkztn" Sep 10 00:32:15.837356 kubelet[2646]: E0910 00:32:15.837342 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.837356 kubelet[2646]: W0910 00:32:15.837355 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.837401 kubelet[2646]: E0910 00:32:15.837371 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.837401 kubelet[2646]: I0910 00:32:15.837387 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/da4aa882-2a3b-4ce6-a838-c6e29f20e7da-registration-dir\") pod \"csi-node-driver-lkztn\" (UID: \"da4aa882-2a3b-4ce6-a838-c6e29f20e7da\") " pod="calico-system/csi-node-driver-lkztn" Sep 10 00:32:15.837707 kubelet[2646]: E0910 00:32:15.837680 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.837707 kubelet[2646]: W0910 00:32:15.837705 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.837767 kubelet[2646]: E0910 00:32:15.837732 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.837940 kubelet[2646]: E0910 00:32:15.837922 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.837940 kubelet[2646]: W0910 00:32:15.837934 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.837995 kubelet[2646]: E0910 00:32:15.837946 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.838159 kubelet[2646]: E0910 00:32:15.838143 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.838159 kubelet[2646]: W0910 00:32:15.838154 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.838212 kubelet[2646]: E0910 00:32:15.838166 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:32:15.838212 kubelet[2646]: I0910 00:32:15.838190 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/da4aa882-2a3b-4ce6-a838-c6e29f20e7da-varrun\") pod \"csi-node-driver-lkztn\" (UID: \"da4aa882-2a3b-4ce6-a838-c6e29f20e7da\") " pod="calico-system/csi-node-driver-lkztn" Sep 10 00:32:15.838426 kubelet[2646]: E0910 00:32:15.838407 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.838426 kubelet[2646]: W0910 00:32:15.838423 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.838498 kubelet[2646]: E0910 00:32:15.838440 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.838664 kubelet[2646]: E0910 00:32:15.838650 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.838664 kubelet[2646]: W0910 00:32:15.838660 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.838731 kubelet[2646]: E0910 00:32:15.838676 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.838886 kubelet[2646]: E0910 00:32:15.838871 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.838886 kubelet[2646]: W0910 00:32:15.838882 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.838974 kubelet[2646]: E0910 00:32:15.838894 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.838974 kubelet[2646]: I0910 00:32:15.838910 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/da4aa882-2a3b-4ce6-a838-c6e29f20e7da-socket-dir\") pod \"csi-node-driver-lkztn\" (UID: \"da4aa882-2a3b-4ce6-a838-c6e29f20e7da\") " pod="calico-system/csi-node-driver-lkztn" Sep 10 00:32:15.839167 kubelet[2646]: E0910 00:32:15.839145 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.839167 kubelet[2646]: W0910 00:32:15.839161 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.839217 kubelet[2646]: E0910 00:32:15.839177 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:32:15.839217 kubelet[2646]: I0910 00:32:15.839195 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da4aa882-2a3b-4ce6-a838-c6e29f20e7da-kubelet-dir\") pod \"csi-node-driver-lkztn\" (UID: \"da4aa882-2a3b-4ce6-a838-c6e29f20e7da\") " pod="calico-system/csi-node-driver-lkztn" Sep 10 00:32:15.839463 kubelet[2646]: E0910 00:32:15.839445 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.839463 kubelet[2646]: W0910 00:32:15.839461 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.839521 kubelet[2646]: E0910 00:32:15.839478 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.839769 kubelet[2646]: E0910 00:32:15.839753 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.839769 kubelet[2646]: W0910 00:32:15.839765 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.839839 kubelet[2646]: E0910 00:32:15.839780 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.840027 kubelet[2646]: E0910 00:32:15.840008 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.840051 kubelet[2646]: W0910 00:32:15.840025 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.840051 kubelet[2646]: E0910 00:32:15.840043 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.840280 kubelet[2646]: E0910 00:32:15.840252 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.840280 kubelet[2646]: W0910 00:32:15.840265 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.840280 kubelet[2646]: E0910 00:32:15.840278 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:32:15.947587 kubelet[2646]: E0910 00:32:15.947572 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.947587 kubelet[2646]: W0910 00:32:15.947583 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.947654 kubelet[2646]: E0910 00:32:15.947595 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.947831 kubelet[2646]: E0910 00:32:15.947817 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.947831 kubelet[2646]: W0910 00:32:15.947828 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.947880 kubelet[2646]: E0910 00:32:15.947836 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:15.977770 kubelet[2646]: E0910 00:32:15.977729 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:15.977770 kubelet[2646]: W0910 00:32:15.977750 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:15.977770 kubelet[2646]: E0910 00:32:15.977771 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:16.954530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2609888725.mount: Deactivated successfully. 
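The block above is kubelet's FlexVolume prober at work: on each probe pass it executes every binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument init and expects a JSON status object on stdout. Here the nodeagent~uds directory exists but its uds executable does not, so the call produces empty output, and unmarshalling an empty string is exactly what yields "unexpected end of JSON input" — probe noise, not a mount failure. A minimal stub that would satisfy the probe is sketched below; the Go struct and file layout are illustrative assumptions, and only the init verb plus the status/capabilities JSON envelope come from the FlexVolume convention.

// flexvol-init-stub: minimal FlexVolume driver stub answering the kubelet
// "init" probe with the JSON status object that driver-call.go unmarshals.
// A sketch only; a real driver must also implement mount/unmount verbs.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the FlexVolume status envelope kubelet expects.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 || os.Args[1] != "init" {
		// Even unsupported calls must return well-formed JSON; empty
		// output is what triggers "unexpected end of JSON input".
		json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
		os.Exit(1)
	}
	out := driverStatus{
		Status: "Success",
		// attach=false tells kubelet this driver has no attach/detach phase.
		Capabilities: map[string]bool{"attach": false},
	}
	if err := json.NewEncoder(os.Stdout).Encode(out); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Built and installed as the missing nodeagent~uds/uds executable, a driver like this would make the probe parse cleanly and silence the repeated triplets.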
Sep 10 00:32:17.339312 kubelet[2646]: E0910 00:32:17.339251 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lkztn" podUID="da4aa882-2a3b-4ce6-a838-c6e29f20e7da" Sep 10 00:32:17.674360 containerd[1553]: time="2025-09-10T00:32:17.674217877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:17.674944 containerd[1553]: time="2025-09-10T00:32:17.674902893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 10 00:32:17.676043 containerd[1553]: time="2025-09-10T00:32:17.676011129Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:17.680498 containerd[1553]: time="2025-09-10T00:32:17.680284161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:17.680916 containerd[1553]: time="2025-09-10T00:32:17.680880830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.251904705s" Sep 10 00:32:17.680916 containerd[1553]: time="2025-09-10T00:32:17.680912280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 10 00:32:17.681906 containerd[1553]: time="2025-09-10T00:32:17.681877535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 10 00:32:17.695977 containerd[1553]: time="2025-09-10T00:32:17.695910690Z" level=info msg="CreateContainer within sandbox \"dbc674d77dba70467957f258800db7327c677b7868f12fe36cba4bd330a2a5b3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 10 00:32:17.709068 containerd[1553]: time="2025-09-10T00:32:17.709030436Z" level=info msg="CreateContainer within sandbox \"dbc674d77dba70467957f258800db7327c677b7868f12fe36cba4bd330a2a5b3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9f8bff1844ef9745591b2c2543930af3f20c5843a05eea8b94284035f61cbf2d\"" Sep 10 00:32:17.709384 containerd[1553]: time="2025-09-10T00:32:17.709357065Z" level=info msg="StartContainer for \"9f8bff1844ef9745591b2c2543930af3f20c5843a05eea8b94284035f61cbf2d\"" Sep 10 00:32:17.782894 containerd[1553]: time="2025-09-10T00:32:17.782842867Z" level=info msg="StartContainer for \"9f8bff1844ef9745591b2c2543930af3f20c5843a05eea8b94284035f61cbf2d\" returns successfully" Sep 10 00:32:18.437474 kubelet[2646]: E0910 00:32:18.437434 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:18.469359 kubelet[2646]: E0910 00:32:18.469312 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Sep 10 00:32:18.469359 kubelet[2646]: W0910 00:32:18.469340 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:18.469522 kubelet[2646]: E0910 00:32:18.469366 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" [... the same three FlexVolume init probe messages repeat verbatim through 00:32:18.563 ...] Sep 10 00:32:18.564627 kubelet[2646]: E0910 00:32:18.564605 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:18.564627 kubelet[2646]: W0910 00:32:18.564620 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:18.564687 kubelet[2646]: E0910 00:32:18.564631 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:32:18.637200 kubelet[2646]: I0910 00:32:18.637001 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66c8554686-7cb45" podStartSLOduration=2.377940332 podStartE2EDuration="4.636979036s" podCreationTimestamp="2025-09-10 00:32:14 +0000 UTC" firstStartedPulling="2025-09-10 00:32:15.422689108 +0000 UTC m=+17.161787107" lastFinishedPulling="2025-09-10 00:32:17.681727812 +0000 UTC m=+19.420825811" observedRunningTime="2025-09-10 00:32:18.635357771 +0000 UTC m=+20.374455780" watchObservedRunningTime="2025-09-10 00:32:18.636979036 +0000 UTC m=+20.376077035" Sep 10 00:32:19.336782 kubelet[2646]: E0910 00:32:19.336714 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lkztn" podUID="da4aa882-2a3b-4ce6-a838-c6e29f20e7da" Sep 10 00:32:19.439472 kubelet[2646]: E0910 00:32:19.439076 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:19.456213 containerd[1553]: time="2025-09-10T00:32:19.456144893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:19.456927 containerd[1553]: time="2025-09-10T00:32:19.456878690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 10 00:32:19.457751 containerd[1553]: time="2025-09-10T00:32:19.457714690Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:19.459658 containerd[1553]: time="2025-09-10T00:32:19.459617897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:19.460216 containerd[1553]: time="2025-09-10T00:32:19.460180459Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.778271634s" Sep 10 00:32:19.460272 containerd[1553]: time="2025-09-10T00:32:19.460214764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 10 00:32:19.463343 containerd[1553]: time="2025-09-10T00:32:19.463314752Z" level=info msg="CreateContainer within sandbox \"ca4d0a9927e1793d56741ff2d76d875d15b9204b529e892bb69d5bd759491c28\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 00:32:19.479443 kubelet[2646]: E0910 00:32:19.479396 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:19.479443 kubelet[2646]: W0910 00:32:19.479423 2646 driver-call.go:149] FlexVolume: driver call 
failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:19.479626 kubelet[2646]: E0910 00:32:19.479452 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:19.479673 containerd[1553]: time="2025-09-10T00:32:19.479508151Z" level=info msg="CreateContainer within sandbox \"ca4d0a9927e1793d56741ff2d76d875d15b9204b529e892bb69d5bd759491c28\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"68ef2cfaff6277ac517e3f096a02774fdc3151e8ec43272b5f5388f352a7d763\"" Sep 10 00:32:19.479847 kubelet[2646]: E0910 00:32:19.479814 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:19.479917 kubelet[2646]: W0910 00:32:19.479845 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:19.479917 kubelet[2646]: E0910 00:32:19.479877 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:19.480197 kubelet[2646]: E0910 00:32:19.480176 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:19.480197 kubelet[2646]: W0910 00:32:19.480192 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:19.480197 kubelet[2646]: E0910 00:32:19.480207 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:19.480426 containerd[1553]: time="2025-09-10T00:32:19.480220227Z" level=info msg="StartContainer for \"68ef2cfaff6277ac517e3f096a02774fdc3151e8ec43272b5f5388f352a7d763\"" Sep 10 00:32:19.480507 kubelet[2646]: E0910 00:32:19.480487 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:19.480507 kubelet[2646]: W0910 00:32:19.480503 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:19.480611 kubelet[2646]: E0910 00:32:19.480516 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... the same three FlexVolume init probe messages repeat verbatim through 00:32:19.483 ...] Sep 10 00:32:19.483280 kubelet[2646]: E0910 00:32:19.483256 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:19.483280 kubelet[2646]: W0910 00:32:19.483272 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:19.483280 kubelet[2646]: E0910 00:32:19.483281 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:32:19.483494 kubelet[2646]: E0910 00:32:19.483473 2646 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:32:19.483494 kubelet[2646]: W0910 00:32:19.483481 2646 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:32:19.483494 kubelet[2646]: E0910 00:32:19.483489 2646 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:32:19.543550 containerd[1553]: time="2025-09-10T00:32:19.543509301Z" level=info msg="StartContainer for \"68ef2cfaff6277ac517e3f096a02774fdc3151e8ec43272b5f5388f352a7d763\" returns successfully" Sep 10 00:32:19.688292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68ef2cfaff6277ac517e3f096a02774fdc3151e8ec43272b5f5388f352a7d763-rootfs.mount: Deactivated successfully. Sep 10 00:32:19.925876 containerd[1553]: time="2025-09-10T00:32:19.925802057Z" level=info msg="shim disconnected" id=68ef2cfaff6277ac517e3f096a02774fdc3151e8ec43272b5f5388f352a7d763 namespace=k8s.io Sep 10 00:32:19.925876 containerd[1553]: time="2025-09-10T00:32:19.925862701Z" level=warning msg="cleaning up after shim disconnected" id=68ef2cfaff6277ac517e3f096a02774fdc3151e8ec43272b5f5388f352a7d763 namespace=k8s.io Sep 10 00:32:19.925876 containerd[1553]: time="2025-09-10T00:32:19.925872189Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:32:20.444223 kubelet[2646]: E0910 00:32:20.444165 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:20.445037 containerd[1553]: time="2025-09-10T00:32:20.444999948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 00:32:21.336488 kubelet[2646]: E0910 00:32:21.336401 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lkztn" podUID="da4aa882-2a3b-4ce6-a838-c6e29f20e7da" Sep 10 00:32:23.335907 kubelet[2646]: E0910 00:32:23.335846 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lkztn" podUID="da4aa882-2a3b-4ce6-a838-c6e29f20e7da" Sep 10 00:32:24.538520 containerd[1553]: time="2025-09-10T00:32:24.538452706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:24.539306 containerd[1553]: time="2025-09-10T00:32:24.539252454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 10 00:32:24.540332 containerd[1553]: time="2025-09-10T00:32:24.540296132Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:24.542592 containerd[1553]: time="2025-09-10T00:32:24.542554862Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:24.543383 containerd[1553]: time="2025-09-10T00:32:24.543349871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.098309618s" Sep 10 00:32:24.543383 containerd[1553]: time="2025-09-10T00:32:24.543382433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 10 00:32:24.545295 containerd[1553]: time="2025-09-10T00:32:24.545269111Z" level=info msg="CreateContainer within sandbox \"ca4d0a9927e1793d56741ff2d76d875d15b9204b529e892bb69d5bd759491c28\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 00:32:24.563754 containerd[1553]: time="2025-09-10T00:32:24.563705157Z" level=info msg="CreateContainer within sandbox \"ca4d0a9927e1793d56741ff2d76d875d15b9204b529e892bb69d5bd759491c28\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7a90057efabd50ceb6d64674b1a5518a3848e453cc93aeec49561379f3832df8\"" Sep 10 00:32:24.564377 containerd[1553]: time="2025-09-10T00:32:24.564352307Z" level=info msg="StartContainer for \"7a90057efabd50ceb6d64674b1a5518a3848e453cc93aeec49561379f3832df8\"" Sep 10 00:32:24.659210 containerd[1553]: time="2025-09-10T00:32:24.659156158Z" level=info msg="StartContainer for \"7a90057efabd50ceb6d64674b1a5518a3848e453cc93aeec49561379f3832df8\" returns successfully" Sep 10 00:32:25.336311 kubelet[2646]: E0910 00:32:25.336250 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lkztn" podUID="da4aa882-2a3b-4ce6-a838-c6e29f20e7da" Sep 10 00:32:26.046457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a90057efabd50ceb6d64674b1a5518a3848e453cc93aeec49561379f3832df8-rootfs.mount: Deactivated successfully. 
Sep 10 00:32:26.051558 containerd[1553]: time="2025-09-10T00:32:26.051458577Z" level=info msg="shim disconnected" id=7a90057efabd50ceb6d64674b1a5518a3848e453cc93aeec49561379f3832df8 namespace=k8s.io Sep 10 00:32:26.051558 containerd[1553]: time="2025-09-10T00:32:26.051533349Z" level=warning msg="cleaning up after shim disconnected" id=7a90057efabd50ceb6d64674b1a5518a3848e453cc93aeec49561379f3832df8 namespace=k8s.io Sep 10 00:32:26.051558 containerd[1553]: time="2025-09-10T00:32:26.051542826Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:32:26.105247 kubelet[2646]: I0910 00:32:26.105180 2646 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 10 00:32:26.313391 kubelet[2646]: I0910 00:32:26.313335 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhp6h\" (UniqueName: \"kubernetes.io/projected/43830e0e-c802-4dc2-b581-750fc8258059-kube-api-access-rhp6h\") pod \"whisker-7dcf94495f-gb68x\" (UID: \"43830e0e-c802-4dc2-b581-750fc8258059\") " pod="calico-system/whisker-7dcf94495f-gb68x" Sep 10 00:32:26.313391 kubelet[2646]: I0910 00:32:26.313388 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/444dd844-3e2d-443a-aa9b-1761b39df54b-goldmane-ca-bundle\") pod \"goldmane-7988f88666-cwk66\" (UID: \"444dd844-3e2d-443a-aa9b-1761b39df54b\") " pod="calico-system/goldmane-7988f88666-cwk66" Sep 10 00:32:26.313581 kubelet[2646]: I0910 00:32:26.313408 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp6vv\" (UniqueName: \"kubernetes.io/projected/444dd844-3e2d-443a-aa9b-1761b39df54b-kube-api-access-xp6vv\") pod \"goldmane-7988f88666-cwk66\" (UID: \"444dd844-3e2d-443a-aa9b-1761b39df54b\") " pod="calico-system/goldmane-7988f88666-cwk66" Sep 10 00:32:26.313581 kubelet[2646]: I0910 00:32:26.313426 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3dd0117-a752-4081-b55b-d575ef8b3051-config-volume\") pod \"coredns-7c65d6cfc9-9gf45\" (UID: \"a3dd0117-a752-4081-b55b-d575ef8b3051\") " pod="kube-system/coredns-7c65d6cfc9-9gf45" Sep 10 00:32:26.313581 kubelet[2646]: I0910 00:32:26.313505 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ac80b016-97cf-437e-8b81-8fe6c2942f59-calico-apiserver-certs\") pod \"calico-apiserver-954df557d-d5768\" (UID: \"ac80b016-97cf-437e-8b81-8fe6c2942f59\") " pod="calico-apiserver/calico-apiserver-954df557d-d5768" Sep 10 00:32:26.313581 kubelet[2646]: I0910 00:32:26.313553 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08e6a60c-fbd0-4ac8-87e2-26029f752560-config-volume\") pod \"coredns-7c65d6cfc9-5kznd\" (UID: \"08e6a60c-fbd0-4ac8-87e2-26029f752560\") " pod="kube-system/coredns-7c65d6cfc9-5kznd" Sep 10 00:32:26.313681 kubelet[2646]: I0910 00:32:26.313620 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6r6l\" (UniqueName: \"kubernetes.io/projected/ac80b016-97cf-437e-8b81-8fe6c2942f59-kube-api-access-c6r6l\") pod \"calico-apiserver-954df557d-d5768\" (UID: \"ac80b016-97cf-437e-8b81-8fe6c2942f59\") " 
pod="calico-apiserver/calico-apiserver-954df557d-d5768" Sep 10 00:32:26.313681 kubelet[2646]: I0910 00:32:26.313656 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444dd844-3e2d-443a-aa9b-1761b39df54b-config\") pod \"goldmane-7988f88666-cwk66\" (UID: \"444dd844-3e2d-443a-aa9b-1761b39df54b\") " pod="calico-system/goldmane-7988f88666-cwk66" Sep 10 00:32:26.313729 kubelet[2646]: I0910 00:32:26.313687 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e74294c4-7f70-425e-bbde-a3f523072902-tigera-ca-bundle\") pod \"calico-kube-controllers-b94b4bd57-2vk8t\" (UID: \"e74294c4-7f70-425e-bbde-a3f523072902\") " pod="calico-system/calico-kube-controllers-b94b4bd57-2vk8t" Sep 10 00:32:26.313729 kubelet[2646]: I0910 00:32:26.313708 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bjwv\" (UniqueName: \"kubernetes.io/projected/a3dd0117-a752-4081-b55b-d575ef8b3051-kube-api-access-8bjwv\") pod \"coredns-7c65d6cfc9-9gf45\" (UID: \"a3dd0117-a752-4081-b55b-d575ef8b3051\") " pod="kube-system/coredns-7c65d6cfc9-9gf45" Sep 10 00:32:26.313776 kubelet[2646]: I0910 00:32:26.313737 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/444dd844-3e2d-443a-aa9b-1761b39df54b-goldmane-key-pair\") pod \"goldmane-7988f88666-cwk66\" (UID: \"444dd844-3e2d-443a-aa9b-1761b39df54b\") " pod="calico-system/goldmane-7988f88666-cwk66" Sep 10 00:32:26.313838 kubelet[2646]: I0910 00:32:26.313815 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0b9b71be-4940-4c7a-8e7a-6616e525febf-calico-apiserver-certs\") pod \"calico-apiserver-954df557d-7jgtm\" (UID: \"0b9b71be-4940-4c7a-8e7a-6616e525febf\") " pod="calico-apiserver/calico-apiserver-954df557d-7jgtm" Sep 10 00:32:26.313867 kubelet[2646]: I0910 00:32:26.313853 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jchx\" (UniqueName: \"kubernetes.io/projected/0b9b71be-4940-4c7a-8e7a-6616e525febf-kube-api-access-7jchx\") pod \"calico-apiserver-954df557d-7jgtm\" (UID: \"0b9b71be-4940-4c7a-8e7a-6616e525febf\") " pod="calico-apiserver/calico-apiserver-954df557d-7jgtm" Sep 10 00:32:26.313892 kubelet[2646]: I0910 00:32:26.313879 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xbqx\" (UniqueName: \"kubernetes.io/projected/e74294c4-7f70-425e-bbde-a3f523072902-kube-api-access-5xbqx\") pod \"calico-kube-controllers-b94b4bd57-2vk8t\" (UID: \"e74294c4-7f70-425e-bbde-a3f523072902\") " pod="calico-system/calico-kube-controllers-b94b4bd57-2vk8t" Sep 10 00:32:26.313980 kubelet[2646]: I0910 00:32:26.313950 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfkfw\" (UniqueName: \"kubernetes.io/projected/08e6a60c-fbd0-4ac8-87e2-26029f752560-kube-api-access-rfkfw\") pod \"coredns-7c65d6cfc9-5kznd\" (UID: \"08e6a60c-fbd0-4ac8-87e2-26029f752560\") " pod="kube-system/coredns-7c65d6cfc9-5kznd" Sep 10 00:32:26.314015 kubelet[2646]: I0910 00:32:26.313991 2646 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/43830e0e-c802-4dc2-b581-750fc8258059-whisker-backend-key-pair\") pod \"whisker-7dcf94495f-gb68x\" (UID: \"43830e0e-c802-4dc2-b581-750fc8258059\") " pod="calico-system/whisker-7dcf94495f-gb68x" Sep 10 00:32:26.314040 kubelet[2646]: I0910 00:32:26.314016 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43830e0e-c802-4dc2-b581-750fc8258059-whisker-ca-bundle\") pod \"whisker-7dcf94495f-gb68x\" (UID: \"43830e0e-c802-4dc2-b581-750fc8258059\") " pod="calico-system/whisker-7dcf94495f-gb68x" Sep 10 00:32:26.461713 containerd[1553]: time="2025-09-10T00:32:26.461675563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 00:32:26.522263 kubelet[2646]: E0910 00:32:26.522163 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:26.525271 containerd[1553]: time="2025-09-10T00:32:26.525176065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9gf45,Uid:a3dd0117-a752-4081-b55b-d575ef8b3051,Namespace:kube-system,Attempt:0,}" Sep 10 00:32:26.528670 containerd[1553]: time="2025-09-10T00:32:26.528631197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-954df557d-d5768,Uid:ac80b016-97cf-437e-8b81-8fe6c2942f59,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:32:26.535449 containerd[1553]: time="2025-09-10T00:32:26.535389523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-cwk66,Uid:444dd844-3e2d-443a-aa9b-1761b39df54b,Namespace:calico-system,Attempt:0,}" Sep 10 00:32:26.535874 containerd[1553]: time="2025-09-10T00:32:26.535788716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dcf94495f-gb68x,Uid:43830e0e-c802-4dc2-b581-750fc8258059,Namespace:calico-system,Attempt:0,}" Sep 10 00:32:26.535939 containerd[1553]: time="2025-09-10T00:32:26.535788606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b94b4bd57-2vk8t,Uid:e74294c4-7f70-425e-bbde-a3f523072902,Namespace:calico-system,Attempt:0,}" Sep 10 00:32:26.539148 kubelet[2646]: E0910 00:32:26.539100 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:26.539482 containerd[1553]: time="2025-09-10T00:32:26.539433354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5kznd,Uid:08e6a60c-fbd0-4ac8-87e2-26029f752560,Namespace:kube-system,Attempt:0,}" Sep 10 00:32:26.540985 containerd[1553]: time="2025-09-10T00:32:26.540958579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-954df557d-7jgtm,Uid:0b9b71be-4940-4c7a-8e7a-6616e525febf,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:32:26.811577 containerd[1553]: time="2025-09-10T00:32:26.811410506Z" level=error msg="Failed to destroy network for sandbox \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.815754 containerd[1553]: time="2025-09-10T00:32:26.814648268Z" 
level=error msg="encountered an error cleaning up failed sandbox \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.815754 containerd[1553]: time="2025-09-10T00:32:26.814709554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-954df557d-d5768,Uid:ac80b016-97cf-437e-8b81-8fe6c2942f59,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.822522 containerd[1553]: time="2025-09-10T00:32:26.822356114Z" level=error msg="Failed to destroy network for sandbox \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.823044 containerd[1553]: time="2025-09-10T00:32:26.822855225Z" level=error msg="encountered an error cleaning up failed sandbox \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.823044 containerd[1553]: time="2025-09-10T00:32:26.822901201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9gf45,Uid:a3dd0117-a752-4081-b55b-d575ef8b3051,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.835411 kubelet[2646]: E0910 00:32:26.835347 2646 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.835624 kubelet[2646]: E0910 00:32:26.835352 2646 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.835624 kubelet[2646]: E0910 00:32:26.835542 2646 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-954df557d-d5768" Sep 10 00:32:26.835624 kubelet[2646]: E0910 00:32:26.835576 2646 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-954df557d-d5768" Sep 10 00:32:26.835745 kubelet[2646]: E0910 00:32:26.835636 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-954df557d-d5768_calico-apiserver(ac80b016-97cf-437e-8b81-8fe6c2942f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-954df557d-d5768_calico-apiserver(ac80b016-97cf-437e-8b81-8fe6c2942f59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-954df557d-d5768" podUID="ac80b016-97cf-437e-8b81-8fe6c2942f59" Sep 10 00:32:26.835844 kubelet[2646]: E0910 00:32:26.835466 2646 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9gf45" Sep 10 00:32:26.835937 kubelet[2646]: E0910 00:32:26.835910 2646 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9gf45" Sep 10 00:32:26.836073 kubelet[2646]: E0910 00:32:26.836041 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9gf45_kube-system(a3dd0117-a752-4081-b55b-d575ef8b3051)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9gf45_kube-system(a3dd0117-a752-4081-b55b-d575ef8b3051)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-9gf45" podUID="a3dd0117-a752-4081-b55b-d575ef8b3051" Sep 10 00:32:26.844050 containerd[1553]: time="2025-09-10T00:32:26.843968765Z" level=error msg="Failed to destroy network for sandbox \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.844534 containerd[1553]: time="2025-09-10T00:32:26.844500197Z" level=error msg="encountered an error cleaning up failed sandbox \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.844586 containerd[1553]: time="2025-09-10T00:32:26.844554289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5kznd,Uid:08e6a60c-fbd0-4ac8-87e2-26029f752560,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.845087 kubelet[2646]: E0910 00:32:26.844764 2646 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.845087 kubelet[2646]: E0910 00:32:26.844823 2646 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5kznd" Sep 10 00:32:26.845087 kubelet[2646]: E0910 00:32:26.844843 2646 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5kznd" Sep 10 00:32:26.845209 kubelet[2646]: E0910 00:32:26.844883 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-5kznd_kube-system(08e6a60c-fbd0-4ac8-87e2-26029f752560)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-5kznd_kube-system(08e6a60c-fbd0-4ac8-87e2-26029f752560)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-5kznd" podUID="08e6a60c-fbd0-4ac8-87e2-26029f752560" Sep 10 00:32:26.846960 containerd[1553]: time="2025-09-10T00:32:26.846902143Z" level=error msg="Failed to destroy network for sandbox \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.847736 containerd[1553]: time="2025-09-10T00:32:26.847629324Z" level=error msg="encountered an error cleaning up failed sandbox \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.847736 containerd[1553]: time="2025-09-10T00:32:26.847687122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-cwk66,Uid:444dd844-3e2d-443a-aa9b-1761b39df54b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.848173 kubelet[2646]: E0910 00:32:26.848119 2646 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.848266 kubelet[2646]: E0910 00:32:26.848173 2646 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-cwk66" Sep 10 00:32:26.848266 kubelet[2646]: E0910 00:32:26.848194 2646 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-cwk66" Sep 10 00:32:26.849389 kubelet[2646]: E0910 00:32:26.849262 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-cwk66_calico-system(444dd844-3e2d-443a-aa9b-1761b39df54b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-cwk66_calico-system(444dd844-3e2d-443a-aa9b-1761b39df54b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-cwk66" podUID="444dd844-3e2d-443a-aa9b-1761b39df54b" Sep 10 00:32:26.856267 containerd[1553]: time="2025-09-10T00:32:26.856204464Z" level=error msg="Failed to destroy network for sandbox 
\"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.856781 containerd[1553]: time="2025-09-10T00:32:26.856756144Z" level=error msg="encountered an error cleaning up failed sandbox \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.857327 containerd[1553]: time="2025-09-10T00:32:26.857205411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dcf94495f-gb68x,Uid:43830e0e-c802-4dc2-b581-750fc8258059,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.857513 kubelet[2646]: E0910 00:32:26.857441 2646 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.857922 kubelet[2646]: E0910 00:32:26.857521 2646 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dcf94495f-gb68x" Sep 10 00:32:26.857922 kubelet[2646]: E0910 00:32:26.857551 2646 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dcf94495f-gb68x" Sep 10 00:32:26.857922 kubelet[2646]: E0910 00:32:26.857590 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7dcf94495f-gb68x_calico-system(43830e0e-c802-4dc2-b581-750fc8258059)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7dcf94495f-gb68x_calico-system(43830e0e-c802-4dc2-b581-750fc8258059)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7dcf94495f-gb68x" podUID="43830e0e-c802-4dc2-b581-750fc8258059" Sep 10 00:32:26.865110 containerd[1553]: 
time="2025-09-10T00:32:26.865044074Z" level=error msg="Failed to destroy network for sandbox \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.865856 containerd[1553]: time="2025-09-10T00:32:26.865790209Z" level=error msg="encountered an error cleaning up failed sandbox \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.865952 containerd[1553]: time="2025-09-10T00:32:26.865929902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b94b4bd57-2vk8t,Uid:e74294c4-7f70-425e-bbde-a3f523072902,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.867068 kubelet[2646]: E0910 00:32:26.866290 2646 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.867068 kubelet[2646]: E0910 00:32:26.866356 2646 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b94b4bd57-2vk8t" Sep 10 00:32:26.867068 kubelet[2646]: E0910 00:32:26.866375 2646 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b94b4bd57-2vk8t" Sep 10 00:32:26.867206 kubelet[2646]: E0910 00:32:26.866411 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b94b4bd57-2vk8t_calico-system(e74294c4-7f70-425e-bbde-a3f523072902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b94b4bd57-2vk8t_calico-system(e74294c4-7f70-425e-bbde-a3f523072902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-b94b4bd57-2vk8t" podUID="e74294c4-7f70-425e-bbde-a3f523072902" Sep 10 00:32:26.873422 containerd[1553]: time="2025-09-10T00:32:26.873372229Z" level=error msg="Failed to destroy network for sandbox \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.873852 containerd[1553]: time="2025-09-10T00:32:26.873814862Z" level=error msg="encountered an error cleaning up failed sandbox \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.873893 containerd[1553]: time="2025-09-10T00:32:26.873869796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-954df557d-7jgtm,Uid:0b9b71be-4940-4c7a-8e7a-6616e525febf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.874065 kubelet[2646]: E0910 00:32:26.874033 2646 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:26.874108 kubelet[2646]: E0910 00:32:26.874074 2646 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-954df557d-7jgtm" Sep 10 00:32:26.874108 kubelet[2646]: E0910 00:32:26.874091 2646 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-954df557d-7jgtm" Sep 10 00:32:26.874157 kubelet[2646]: E0910 00:32:26.874130 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-954df557d-7jgtm_calico-apiserver(0b9b71be-4940-4c7a-8e7a-6616e525febf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-954df557d-7jgtm_calico-apiserver(0b9b71be-4940-4c7a-8e7a-6616e525febf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-954df557d-7jgtm" podUID="0b9b71be-4940-4c7a-8e7a-6616e525febf" Sep 10 00:32:27.339313 containerd[1553]: time="2025-09-10T00:32:27.339258861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lkztn,Uid:da4aa882-2a3b-4ce6-a838-c6e29f20e7da,Namespace:calico-system,Attempt:0,}" Sep 10 00:32:27.460528 kubelet[2646]: I0910 00:32:27.460471 2646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Sep 10 00:32:27.462312 kubelet[2646]: I0910 00:32:27.462288 2646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Sep 10 00:32:27.463575 kubelet[2646]: I0910 00:32:27.463529 2646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Sep 10 00:32:27.464894 kubelet[2646]: I0910 00:32:27.464873 2646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Sep 10 00:32:27.488056 containerd[1553]: time="2025-09-10T00:32:27.487994953Z" level=info msg="StopPodSandbox for \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\"" Sep 10 00:32:27.490299 containerd[1553]: time="2025-09-10T00:32:27.490257325Z" level=info msg="StopPodSandbox for \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\"" Sep 10 00:32:27.491344 containerd[1553]: time="2025-09-10T00:32:27.491311091Z" level=info msg="Ensure that sandbox cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974 in task-service has been cleanup successfully" Sep 10 00:32:27.491645 containerd[1553]: time="2025-09-10T00:32:27.491615894Z" level=info msg="Ensure that sandbox bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc in task-service has been cleanup successfully" Sep 10 00:32:27.492854 containerd[1553]: time="2025-09-10T00:32:27.492603496Z" level=info msg="StopPodSandbox for \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\"" Sep 10 00:32:27.493103 containerd[1553]: time="2025-09-10T00:32:27.493081146Z" level=info msg="Ensure that sandbox 3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea in task-service has been cleanup successfully" Sep 10 00:32:27.497303 containerd[1553]: time="2025-09-10T00:32:27.497267262Z" level=info msg="StopPodSandbox for \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\"" Sep 10 00:32:27.497463 containerd[1553]: time="2025-09-10T00:32:27.497441821Z" level=info msg="Ensure that sandbox fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6 in task-service has been cleanup successfully" Sep 10 00:32:27.498161 kubelet[2646]: I0910 00:32:27.498142 2646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Sep 10 00:32:27.498759 containerd[1553]: time="2025-09-10T00:32:27.498737232Z" level=info msg="StopPodSandbox for \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\"" Sep 10 00:32:27.499576 kubelet[2646]: I0910 00:32:27.499195 2646 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Sep 10 00:32:27.500722 containerd[1553]: time="2025-09-10T00:32:27.500700941Z" level=info msg="Ensure that sandbox 6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4 in task-service has been cleanup successfully" Sep 10 00:32:27.500801 containerd[1553]: time="2025-09-10T00:32:27.499672183Z" level=info msg="StopPodSandbox for \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\"" Sep 10 00:32:27.501022 containerd[1553]: time="2025-09-10T00:32:27.501003522Z" level=info msg="Ensure that sandbox 05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b in task-service has been cleanup successfully" Sep 10 00:32:27.502174 kubelet[2646]: I0910 00:32:27.502155 2646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Sep 10 00:32:27.502881 containerd[1553]: time="2025-09-10T00:32:27.502859348Z" level=info msg="StopPodSandbox for \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\"" Sep 10 00:32:27.503075 containerd[1553]: time="2025-09-10T00:32:27.503057471Z" level=info msg="Ensure that sandbox 421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c in task-service has been cleanup successfully" Sep 10 00:32:27.555456 containerd[1553]: time="2025-09-10T00:32:27.555393072Z" level=error msg="StopPodSandbox for \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\" failed" error="failed to destroy network for sandbox \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:27.556116 kubelet[2646]: E0910 00:32:27.555906 2646 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Sep 10 00:32:27.556116 kubelet[2646]: E0910 00:32:27.555981 2646 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6"} Sep 10 00:32:27.556116 kubelet[2646]: E0910 00:32:27.556045 2646 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"444dd844-3e2d-443a-aa9b-1761b39df54b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:32:27.556116 kubelet[2646]: E0910 00:32:27.556076 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"444dd844-3e2d-443a-aa9b-1761b39df54b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-cwk66" podUID="444dd844-3e2d-443a-aa9b-1761b39df54b" Sep 10 00:32:27.562967 containerd[1553]: time="2025-09-10T00:32:27.562905887Z" level=error msg="StopPodSandbox for \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\" failed" error="failed to destroy network for sandbox \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:27.563609 kubelet[2646]: E0910 00:32:27.563426 2646 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Sep 10 00:32:27.563609 kubelet[2646]: E0910 00:32:27.563496 2646 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea"} Sep 10 00:32:27.563609 kubelet[2646]: E0910 00:32:27.563537 2646 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b9b71be-4940-4c7a-8e7a-6616e525febf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:32:27.563609 kubelet[2646]: E0910 00:32:27.563566 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b9b71be-4940-4c7a-8e7a-6616e525febf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-954df557d-7jgtm" podUID="0b9b71be-4940-4c7a-8e7a-6616e525febf" Sep 10 00:32:27.566504 containerd[1553]: time="2025-09-10T00:32:27.566444143Z" level=error msg="StopPodSandbox for \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\" failed" error="failed to destroy network for sandbox \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:27.566711 kubelet[2646]: E0910 00:32:27.566669 2646 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Sep 10 00:32:27.566757 kubelet[2646]: E0910 00:32:27.566724 2646 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b"} Sep 10 00:32:27.566784 kubelet[2646]: E0910 00:32:27.566768 2646 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e74294c4-7f70-425e-bbde-a3f523072902\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:32:27.566839 kubelet[2646]: E0910 00:32:27.566793 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e74294c4-7f70-425e-bbde-a3f523072902\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b94b4bd57-2vk8t" podUID="e74294c4-7f70-425e-bbde-a3f523072902" Sep 10 00:32:27.573817 containerd[1553]: time="2025-09-10T00:32:27.573687991Z" level=error msg="StopPodSandbox for \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\" failed" error="failed to destroy network for sandbox \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:27.573941 kubelet[2646]: E0910 00:32:27.573893 2646 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Sep 10 00:32:27.573994 kubelet[2646]: E0910 00:32:27.573941 2646 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc"} Sep 10 00:32:27.573994 kubelet[2646]: E0910 00:32:27.573980 2646 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a3dd0117-a752-4081-b55b-d575ef8b3051\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:32:27.574080 kubelet[2646]: E0910 00:32:27.574002 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"a3dd0117-a752-4081-b55b-d575ef8b3051\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-9gf45" podUID="a3dd0117-a752-4081-b55b-d575ef8b3051" Sep 10 00:32:27.578444 containerd[1553]: time="2025-09-10T00:32:27.578371084Z" level=error msg="StopPodSandbox for \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\" failed" error="failed to destroy network for sandbox \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:27.578651 kubelet[2646]: E0910 00:32:27.578617 2646 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Sep 10 00:32:27.578706 kubelet[2646]: E0910 00:32:27.578658 2646 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974"} Sep 10 00:32:27.578706 kubelet[2646]: E0910 00:32:27.578683 2646 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"08e6a60c-fbd0-4ac8-87e2-26029f752560\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:32:27.578824 kubelet[2646]: E0910 00:32:27.578704 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"08e6a60c-fbd0-4ac8-87e2-26029f752560\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-5kznd" podUID="08e6a60c-fbd0-4ac8-87e2-26029f752560" Sep 10 00:32:27.579055 containerd[1553]: time="2025-09-10T00:32:27.579026228Z" level=error msg="StopPodSandbox for \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\" failed" error="failed to destroy network for sandbox \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:27.579289 kubelet[2646]: E0910 00:32:27.579228 2646 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Sep 10 00:32:27.579289 kubelet[2646]: E0910 00:32:27.579275 2646 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4"} Sep 10 00:32:27.579289 kubelet[2646]: E0910 00:32:27.579299 2646 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ac80b016-97cf-437e-8b81-8fe6c2942f59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:32:27.579567 kubelet[2646]: E0910 00:32:27.579317 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ac80b016-97cf-437e-8b81-8fe6c2942f59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-954df557d-d5768" podUID="ac80b016-97cf-437e-8b81-8fe6c2942f59" Sep 10 00:32:27.583379 containerd[1553]: time="2025-09-10T00:32:27.583319777Z" level=error msg="StopPodSandbox for \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\" failed" error="failed to destroy network for sandbox \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:27.583581 kubelet[2646]: E0910 00:32:27.583531 2646 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Sep 10 00:32:27.583581 kubelet[2646]: E0910 00:32:27.583573 2646 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c"} Sep 10 00:32:27.583670 kubelet[2646]: E0910 00:32:27.583598 2646 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43830e0e-c802-4dc2-b581-750fc8258059\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:32:27.583670 kubelet[2646]: E0910 00:32:27.583619 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43830e0e-c802-4dc2-b581-750fc8258059\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7dcf94495f-gb68x" podUID="43830e0e-c802-4dc2-b581-750fc8258059" Sep 10 00:32:27.640536 containerd[1553]: time="2025-09-10T00:32:27.640350966Z" level=error msg="Failed to destroy network for sandbox \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:27.640978 containerd[1553]: time="2025-09-10T00:32:27.640924165Z" level=error msg="encountered an error cleaning up failed sandbox \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:27.641029 containerd[1553]: time="2025-09-10T00:32:27.640995309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lkztn,Uid:da4aa882-2a3b-4ce6-a838-c6e29f20e7da,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:27.641358 kubelet[2646]: E0910 00:32:27.641309 2646 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:27.642833 kubelet[2646]: E0910 00:32:27.641547 2646 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lkztn" Sep 10 00:32:27.642833 kubelet[2646]: E0910 00:32:27.641583 2646 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-lkztn" Sep 10 00:32:27.642833 kubelet[2646]: E0910 00:32:27.641650 2646 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lkztn_calico-system(da4aa882-2a3b-4ce6-a838-c6e29f20e7da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lkztn_calico-system(da4aa882-2a3b-4ce6-a838-c6e29f20e7da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lkztn" podUID="da4aa882-2a3b-4ce6-a838-c6e29f20e7da" Sep 10 00:32:27.644010 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde-shm.mount: Deactivated successfully. Sep 10 00:32:28.512445 kubelet[2646]: I0910 00:32:28.512387 2646 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Sep 10 00:32:28.513159 containerd[1553]: time="2025-09-10T00:32:28.513026050Z" level=info msg="StopPodSandbox for \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\"" Sep 10 00:32:28.513578 containerd[1553]: time="2025-09-10T00:32:28.513215909Z" level=info msg="Ensure that sandbox ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde in task-service has been cleanup successfully" Sep 10 00:32:28.544341 containerd[1553]: time="2025-09-10T00:32:28.544292130Z" level=error msg="StopPodSandbox for \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\" failed" error="failed to destroy network for sandbox \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:32:28.544583 kubelet[2646]: E0910 00:32:28.544526 2646 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Sep 10 00:32:28.544583 kubelet[2646]: E0910 00:32:28.544573 2646 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde"} Sep 10 00:32:28.544778 kubelet[2646]: E0910 00:32:28.544610 2646 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da4aa882-2a3b-4ce6-a838-c6e29f20e7da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:32:28.544778 kubelet[2646]: E0910 00:32:28.544633 2646 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"da4aa882-2a3b-4ce6-a838-c6e29f20e7da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lkztn" podUID="da4aa882-2a3b-4ce6-a838-c6e29f20e7da" Sep 10 00:32:32.600445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2122557436.mount: Deactivated successfully. Sep 10 00:32:34.769268 containerd[1553]: time="2025-09-10T00:32:34.766935359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:34.769268 containerd[1553]: time="2025-09-10T00:32:34.769194960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 10 00:32:34.771212 containerd[1553]: time="2025-09-10T00:32:34.771166668Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:34.775413 containerd[1553]: time="2025-09-10T00:32:34.775374935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:34.775718 containerd[1553]: time="2025-09-10T00:32:34.775691720Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.313978866s" Sep 10 00:32:34.775755 containerd[1553]: time="2025-09-10T00:32:34.775721797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 10 00:32:34.787352 systemd[1]: Started sshd@7-10.0.0.21:22-10.0.0.1:38874.service - OpenSSH per-connection server daemon (10.0.0.1:38874). Sep 10 00:32:34.817348 containerd[1553]: time="2025-09-10T00:32:34.816939297Z" level=info msg="CreateContainer within sandbox \"ca4d0a9927e1793d56741ff2d76d875d15b9204b529e892bb69d5bd759491c28\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 00:32:34.891353 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 38874 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:32:34.892963 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:32:34.897212 systemd-logind[1531]: New session 8 of user core. Sep 10 00:32:34.901545 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 10 00:32:35.202799 containerd[1553]: time="2025-09-10T00:32:35.202729172Z" level=info msg="CreateContainer within sandbox \"ca4d0a9927e1793d56741ff2d76d875d15b9204b529e892bb69d5bd759491c28\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"94a87a630f6df55b0433b8c50d5464f40e82f9be1f741bb496ecafc5a498d8fc\"" Sep 10 00:32:35.203348 containerd[1553]: time="2025-09-10T00:32:35.203281852Z" level=info msg="StartContainer for \"94a87a630f6df55b0433b8c50d5464f40e82f9be1f741bb496ecafc5a498d8fc\"" Sep 10 00:32:35.203978 sshd[3969]: pam_unix(sshd:session): session closed for user core Sep 10 00:32:35.208665 systemd[1]: sshd@7-10.0.0.21:22-10.0.0.1:38874.service: Deactivated successfully. Sep 10 00:32:35.211515 systemd-logind[1531]: Session 8 logged out. Waiting for processes to exit. Sep 10 00:32:35.212290 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 00:32:35.214777 systemd-logind[1531]: Removed session 8. Sep 10 00:32:35.294090 containerd[1553]: time="2025-09-10T00:32:35.294036802Z" level=info msg="StartContainer for \"94a87a630f6df55b0433b8c50d5464f40e82f9be1f741bb496ecafc5a498d8fc\" returns successfully" Sep 10 00:32:35.380400 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 00:32:35.380526 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 10 00:32:35.464694 containerd[1553]: time="2025-09-10T00:32:35.464538570Z" level=info msg="StopPodSandbox for \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\"" Sep 10 00:32:35.561568 kubelet[2646]: I0910 00:32:35.560112 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dgd87" podStartSLOduration=1.530878372 podStartE2EDuration="20.560088578s" podCreationTimestamp="2025-09-10 00:32:15 +0000 UTC" firstStartedPulling="2025-09-10 00:32:15.747799082 +0000 UTC m=+17.486897081" lastFinishedPulling="2025-09-10 00:32:34.777009288 +0000 UTC m=+36.516107287" observedRunningTime="2025-09-10 00:32:35.546707974 +0000 UTC m=+37.285805983" watchObservedRunningTime="2025-09-10 00:32:35.560088578 +0000 UTC m=+37.299186577" Sep 10 00:32:35.930182 containerd[1553]: 2025-09-10 00:32:35.572 [INFO][4059] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Sep 10 00:32:35.930182 containerd[1553]: 2025-09-10 00:32:35.573 [INFO][4059] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" iface="eth0" netns="/var/run/netns/cni-56ba0a69-6b95-a727-98bd-62d1029f1ff1" Sep 10 00:32:35.930182 containerd[1553]: 2025-09-10 00:32:35.573 [INFO][4059] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" iface="eth0" netns="/var/run/netns/cni-56ba0a69-6b95-a727-98bd-62d1029f1ff1" Sep 10 00:32:35.930182 containerd[1553]: 2025-09-10 00:32:35.574 [INFO][4059] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" iface="eth0" netns="/var/run/netns/cni-56ba0a69-6b95-a727-98bd-62d1029f1ff1" Sep 10 00:32:35.930182 containerd[1553]: 2025-09-10 00:32:35.574 [INFO][4059] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Sep 10 00:32:35.930182 containerd[1553]: 2025-09-10 00:32:35.574 [INFO][4059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Sep 10 00:32:35.930182 containerd[1553]: 2025-09-10 00:32:35.911 [INFO][4070] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" HandleID="k8s-pod-network.421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Workload="localhost-k8s-whisker--7dcf94495f--gb68x-eth0" Sep 10 00:32:35.930182 containerd[1553]: 2025-09-10 00:32:35.913 [INFO][4070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:35.930182 containerd[1553]: 2025-09-10 00:32:35.913 [INFO][4070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:35.930182 containerd[1553]: 2025-09-10 00:32:35.921 [WARNING][4070] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" HandleID="k8s-pod-network.421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Workload="localhost-k8s-whisker--7dcf94495f--gb68x-eth0" Sep 10 00:32:35.930182 containerd[1553]: 2025-09-10 00:32:35.921 [INFO][4070] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" HandleID="k8s-pod-network.421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Workload="localhost-k8s-whisker--7dcf94495f--gb68x-eth0" Sep 10 00:32:35.930182 containerd[1553]: 2025-09-10 00:32:35.922 [INFO][4070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:35.930182 containerd[1553]: 2025-09-10 00:32:35.926 [INFO][4059] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Sep 10 00:32:35.933408 containerd[1553]: time="2025-09-10T00:32:35.933359769Z" level=info msg="TearDown network for sandbox \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\" successfully" Sep 10 00:32:35.933408 containerd[1553]: time="2025-09-10T00:32:35.933397540Z" level=info msg="StopPodSandbox for \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\" returns successfully" Sep 10 00:32:35.934101 systemd[1]: run-netns-cni\x2d56ba0a69\x2d6b95\x2da727\x2d98bd\x2d62d1029f1ff1.mount: Deactivated successfully. 
Sep 10 00:32:35.941246 kubelet[2646]: I0910 00:32:35.941186 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43830e0e-c802-4dc2-b581-750fc8258059-whisker-ca-bundle\") pod \"43830e0e-c802-4dc2-b581-750fc8258059\" (UID: \"43830e0e-c802-4dc2-b581-750fc8258059\") " Sep 10 00:32:35.941435 kubelet[2646]: I0910 00:32:35.941273 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhp6h\" (UniqueName: \"kubernetes.io/projected/43830e0e-c802-4dc2-b581-750fc8258059-kube-api-access-rhp6h\") pod \"43830e0e-c802-4dc2-b581-750fc8258059\" (UID: \"43830e0e-c802-4dc2-b581-750fc8258059\") " Sep 10 00:32:35.941435 kubelet[2646]: I0910 00:32:35.941297 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/43830e0e-c802-4dc2-b581-750fc8258059-whisker-backend-key-pair\") pod \"43830e0e-c802-4dc2-b581-750fc8258059\" (UID: \"43830e0e-c802-4dc2-b581-750fc8258059\") " Sep 10 00:32:35.941862 kubelet[2646]: I0910 00:32:35.941803 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43830e0e-c802-4dc2-b581-750fc8258059-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "43830e0e-c802-4dc2-b581-750fc8258059" (UID: "43830e0e-c802-4dc2-b581-750fc8258059"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:32:35.945445 kubelet[2646]: I0910 00:32:35.945385 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43830e0e-c802-4dc2-b581-750fc8258059-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "43830e0e-c802-4dc2-b581-750fc8258059" (UID: "43830e0e-c802-4dc2-b581-750fc8258059"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:32:35.946418 kubelet[2646]: I0910 00:32:35.946388 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43830e0e-c802-4dc2-b581-750fc8258059-kube-api-access-rhp6h" (OuterVolumeSpecName: "kube-api-access-rhp6h") pod "43830e0e-c802-4dc2-b581-750fc8258059" (UID: "43830e0e-c802-4dc2-b581-750fc8258059"). InnerVolumeSpecName "kube-api-access-rhp6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:32:35.947905 systemd[1]: var-lib-kubelet-pods-43830e0e\x2dc802\x2d4dc2\x2db581\x2d750fc8258059-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drhp6h.mount: Deactivated successfully. Sep 10 00:32:35.948177 systemd[1]: var-lib-kubelet-pods-43830e0e\x2dc802\x2d4dc2\x2db581\x2d750fc8258059-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 10 00:32:36.041966 kubelet[2646]: I0910 00:32:36.041898 2646 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhp6h\" (UniqueName: \"kubernetes.io/projected/43830e0e-c802-4dc2-b581-750fc8258059-kube-api-access-rhp6h\") on node \"localhost\" DevicePath \"\"" Sep 10 00:32:36.041966 kubelet[2646]: I0910 00:32:36.041942 2646 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/43830e0e-c802-4dc2-b581-750fc8258059-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 00:32:36.041966 kubelet[2646]: I0910 00:32:36.041952 2646 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43830e0e-c802-4dc2-b581-750fc8258059-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 00:32:36.645107 kubelet[2646]: I0910 00:32:36.645053 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkjgm\" (UniqueName: \"kubernetes.io/projected/b64d5c39-0b3f-4599-a079-d3db057fe91a-kube-api-access-mkjgm\") pod \"whisker-ddd44bf4-dbjh4\" (UID: \"b64d5c39-0b3f-4599-a079-d3db057fe91a\") " pod="calico-system/whisker-ddd44bf4-dbjh4" Sep 10 00:32:36.645107 kubelet[2646]: I0910 00:32:36.645096 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b64d5c39-0b3f-4599-a079-d3db057fe91a-whisker-backend-key-pair\") pod \"whisker-ddd44bf4-dbjh4\" (UID: \"b64d5c39-0b3f-4599-a079-d3db057fe91a\") " pod="calico-system/whisker-ddd44bf4-dbjh4" Sep 10 00:32:36.645107 kubelet[2646]: I0910 00:32:36.645118 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b64d5c39-0b3f-4599-a079-d3db057fe91a-whisker-ca-bundle\") pod \"whisker-ddd44bf4-dbjh4\" (UID: \"b64d5c39-0b3f-4599-a079-d3db057fe91a\") " pod="calico-system/whisker-ddd44bf4-dbjh4" Sep 10 00:32:36.893540 containerd[1553]: time="2025-09-10T00:32:36.893478902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ddd44bf4-dbjh4,Uid:b64d5c39-0b3f-4599-a079-d3db057fe91a,Namespace:calico-system,Attempt:0,}" Sep 10 00:32:37.125268 kernel: bpftool[4286]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 10 00:32:37.164910 systemd-networkd[1240]: caliace5da07927: Link UP Sep 10 00:32:37.166339 systemd-networkd[1240]: caliace5da07927: Gained carrier Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:36.977 [INFO][4227] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:36.993 [INFO][4227] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--ddd44bf4--dbjh4-eth0 whisker-ddd44bf4- calico-system b64d5c39-0b3f-4599-a079-d3db057fe91a 951 0 2025-09-10 00:32:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:ddd44bf4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-ddd44bf4-dbjh4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliace5da07927 [] [] }} ContainerID="23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" Namespace="calico-system" Pod="whisker-ddd44bf4-dbjh4" 
WorkloadEndpoint="localhost-k8s-whisker--ddd44bf4--dbjh4-" Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:36.993 [INFO][4227] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" Namespace="calico-system" Pod="whisker-ddd44bf4-dbjh4" WorkloadEndpoint="localhost-k8s-whisker--ddd44bf4--dbjh4-eth0" Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.037 [INFO][4247] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" HandleID="k8s-pod-network.23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" Workload="localhost-k8s-whisker--ddd44bf4--dbjh4-eth0" Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.037 [INFO][4247] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" HandleID="k8s-pod-network.23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" Workload="localhost-k8s-whisker--ddd44bf4--dbjh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-ddd44bf4-dbjh4", "timestamp":"2025-09-10 00:32:37.037066147 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.037 [INFO][4247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.037 [INFO][4247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.037 [INFO][4247] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.045 [INFO][4247] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" host="localhost" Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.109 [INFO][4247] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.114 [INFO][4247] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.116 [INFO][4247] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.119 [INFO][4247] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.119 [INFO][4247] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" host="localhost" Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.120 [INFO][4247] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.145 [INFO][4247] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" host="localhost" Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.151 [INFO][4247] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" host="localhost" Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.151 [INFO][4247] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" host="localhost" Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.151 [INFO][4247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:32:37.183289 containerd[1553]: 2025-09-10 00:32:37.151 [INFO][4247] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" HandleID="k8s-pod-network.23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" Workload="localhost-k8s-whisker--ddd44bf4--dbjh4-eth0" Sep 10 00:32:37.184376 containerd[1553]: 2025-09-10 00:32:37.155 [INFO][4227] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" Namespace="calico-system" Pod="whisker-ddd44bf4-dbjh4" WorkloadEndpoint="localhost-k8s-whisker--ddd44bf4--dbjh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--ddd44bf4--dbjh4-eth0", GenerateName:"whisker-ddd44bf4-", Namespace:"calico-system", SelfLink:"", UID:"b64d5c39-0b3f-4599-a079-d3db057fe91a", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"ddd44bf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-ddd44bf4-dbjh4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliace5da07927", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:37.184376 containerd[1553]: 2025-09-10 00:32:37.155 [INFO][4227] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" Namespace="calico-system" Pod="whisker-ddd44bf4-dbjh4" WorkloadEndpoint="localhost-k8s-whisker--ddd44bf4--dbjh4-eth0" Sep 10 00:32:37.184376 containerd[1553]: 2025-09-10 00:32:37.155 [INFO][4227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliace5da07927 ContainerID="23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" Namespace="calico-system" Pod="whisker-ddd44bf4-dbjh4" WorkloadEndpoint="localhost-k8s-whisker--ddd44bf4--dbjh4-eth0" Sep 10 00:32:37.184376 containerd[1553]: 2025-09-10 00:32:37.165 [INFO][4227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" Namespace="calico-system" Pod="whisker-ddd44bf4-dbjh4" WorkloadEndpoint="localhost-k8s-whisker--ddd44bf4--dbjh4-eth0" Sep 10 00:32:37.184376 containerd[1553]: 2025-09-10 00:32:37.166 [INFO][4227] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" Namespace="calico-system" Pod="whisker-ddd44bf4-dbjh4" WorkloadEndpoint="localhost-k8s-whisker--ddd44bf4--dbjh4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--ddd44bf4--dbjh4-eth0", GenerateName:"whisker-ddd44bf4-", Namespace:"calico-system", SelfLink:"", UID:"b64d5c39-0b3f-4599-a079-d3db057fe91a", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"ddd44bf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b", Pod:"whisker-ddd44bf4-dbjh4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliace5da07927", MAC:"0e:48:82:77:81:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:37.184376 containerd[1553]: 2025-09-10 00:32:37.179 [INFO][4227] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b" Namespace="calico-system" Pod="whisker-ddd44bf4-dbjh4" WorkloadEndpoint="localhost-k8s-whisker--ddd44bf4--dbjh4-eth0" Sep 10 00:32:37.216121 containerd[1553]: time="2025-09-10T00:32:37.216031586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:32:37.216121 containerd[1553]: time="2025-09-10T00:32:37.216082111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:32:37.216121 containerd[1553]: time="2025-09-10T00:32:37.216112327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:37.216553 containerd[1553]: time="2025-09-10T00:32:37.216257300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:37.249352 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:32:37.275999 containerd[1553]: time="2025-09-10T00:32:37.275960156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ddd44bf4-dbjh4,Uid:b64d5c39-0b3f-4599-a079-d3db057fe91a,Namespace:calico-system,Attempt:0,} returns sandbox id \"23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b\"" Sep 10 00:32:37.277664 containerd[1553]: time="2025-09-10T00:32:37.277641747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 00:32:37.391392 systemd-networkd[1240]: vxlan.calico: Link UP Sep 10 00:32:37.391408 systemd-networkd[1240]: vxlan.calico: Gained carrier Sep 10 00:32:38.337627 containerd[1553]: time="2025-09-10T00:32:38.337549389Z" level=info msg="StopPodSandbox for \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\"" Sep 10 00:32:38.341412 kubelet[2646]: I0910 00:32:38.341369 2646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43830e0e-c802-4dc2-b581-750fc8258059" path="/var/lib/kubelet/pods/43830e0e-c802-4dc2-b581-750fc8258059/volumes" Sep 10 00:32:38.423753 containerd[1553]: 2025-09-10 00:32:38.384 [INFO][4424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Sep 10 00:32:38.423753 containerd[1553]: 2025-09-10 00:32:38.384 [INFO][4424] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" iface="eth0" netns="/var/run/netns/cni-5e101a69-5296-f0ba-73ff-91caa1f9d4bf" Sep 10 00:32:38.423753 containerd[1553]: 2025-09-10 00:32:38.384 [INFO][4424] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" iface="eth0" netns="/var/run/netns/cni-5e101a69-5296-f0ba-73ff-91caa1f9d4bf" Sep 10 00:32:38.423753 containerd[1553]: 2025-09-10 00:32:38.385 [INFO][4424] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" iface="eth0" netns="/var/run/netns/cni-5e101a69-5296-f0ba-73ff-91caa1f9d4bf" Sep 10 00:32:38.423753 containerd[1553]: 2025-09-10 00:32:38.385 [INFO][4424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Sep 10 00:32:38.423753 containerd[1553]: 2025-09-10 00:32:38.385 [INFO][4424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Sep 10 00:32:38.423753 containerd[1553]: 2025-09-10 00:32:38.410 [INFO][4433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" HandleID="k8s-pod-network.bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Workload="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:38.423753 containerd[1553]: 2025-09-10 00:32:38.410 [INFO][4433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:38.423753 containerd[1553]: 2025-09-10 00:32:38.410 [INFO][4433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:32:38.423753 containerd[1553]: 2025-09-10 00:32:38.415 [WARNING][4433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" HandleID="k8s-pod-network.bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Workload="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:38.423753 containerd[1553]: 2025-09-10 00:32:38.415 [INFO][4433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" HandleID="k8s-pod-network.bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Workload="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:38.423753 containerd[1553]: 2025-09-10 00:32:38.417 [INFO][4433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:38.423753 containerd[1553]: 2025-09-10 00:32:38.420 [INFO][4424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Sep 10 00:32:38.424217 containerd[1553]: time="2025-09-10T00:32:38.423923746Z" level=info msg="TearDown network for sandbox \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\" successfully" Sep 10 00:32:38.424217 containerd[1553]: time="2025-09-10T00:32:38.423954473Z" level=info msg="StopPodSandbox for \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\" returns successfully" Sep 10 00:32:38.426535 kubelet[2646]: E0910 00:32:38.426505 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:38.426907 systemd[1]: run-netns-cni\x2d5e101a69\x2d5296\x2df0ba\x2d73ff\x2d91caa1f9d4bf.mount: Deactivated successfully. 
Sep 10 00:32:38.427424 containerd[1553]: time="2025-09-10T00:32:38.426916902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9gf45,Uid:a3dd0117-a752-4081-b55b-d575ef8b3051,Namespace:kube-system,Attempt:1,}" Sep 10 00:32:38.532781 systemd-networkd[1240]: calic3e8b23db92: Link UP Sep 10 00:32:38.533371 systemd-networkd[1240]: calic3e8b23db92: Gained carrier Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.474 [INFO][4440] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0 coredns-7c65d6cfc9- kube-system a3dd0117-a752-4081-b55b-d575ef8b3051 967 0 2025-09-10 00:32:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-9gf45 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic3e8b23db92 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9gf45" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9gf45-" Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.474 [INFO][4440] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9gf45" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.500 [INFO][4455] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" HandleID="k8s-pod-network.4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" Workload="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.500 [INFO][4455] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" HandleID="k8s-pod-network.4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" Workload="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324140), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-9gf45", "timestamp":"2025-09-10 00:32:38.500348248 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.500 [INFO][4455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.500 [INFO][4455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.500 [INFO][4455] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.507 [INFO][4455] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" host="localhost" Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.511 [INFO][4455] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.515 [INFO][4455] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.517 [INFO][4455] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.518 [INFO][4455] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.518 [INFO][4455] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" host="localhost" Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.520 [INFO][4455] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485 Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.523 [INFO][4455] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" host="localhost" Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.527 [INFO][4455] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" host="localhost" Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.527 [INFO][4455] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" host="localhost" Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.527 [INFO][4455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:32:38.546912 containerd[1553]: 2025-09-10 00:32:38.527 [INFO][4455] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" HandleID="k8s-pod-network.4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" Workload="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:38.547595 containerd[1553]: 2025-09-10 00:32:38.530 [INFO][4440] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9gf45" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a3dd0117-a752-4081-b55b-d575ef8b3051", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-9gf45", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3e8b23db92", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:38.547595 containerd[1553]: 2025-09-10 00:32:38.530 [INFO][4440] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9gf45" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:38.547595 containerd[1553]: 2025-09-10 00:32:38.530 [INFO][4440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3e8b23db92 ContainerID="4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9gf45" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:38.547595 containerd[1553]: 2025-09-10 00:32:38.533 [INFO][4440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9gf45" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:38.547595 
containerd[1553]: 2025-09-10 00:32:38.534 [INFO][4440] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9gf45" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a3dd0117-a752-4081-b55b-d575ef8b3051", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485", Pod:"coredns-7c65d6cfc9-9gf45", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3e8b23db92", MAC:"02:6d:15:47:d2:5d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:38.547595 containerd[1553]: 2025-09-10 00:32:38.542 [INFO][4440] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9gf45" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:38.568257 containerd[1553]: time="2025-09-10T00:32:38.568080581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:32:38.568257 containerd[1553]: time="2025-09-10T00:32:38.568164188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:32:38.568503 containerd[1553]: time="2025-09-10T00:32:38.568215474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:38.569822 containerd[1553]: time="2025-09-10T00:32:38.569785617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:38.600163 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:32:38.630329 containerd[1553]: time="2025-09-10T00:32:38.630271628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9gf45,Uid:a3dd0117-a752-4081-b55b-d575ef8b3051,Namespace:kube-system,Attempt:1,} returns sandbox id \"4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485\"" Sep 10 00:32:38.631127 kubelet[2646]: E0910 00:32:38.631101 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:38.633923 containerd[1553]: time="2025-09-10T00:32:38.633869050Z" level=info msg="CreateContainer within sandbox \"4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:32:38.657783 containerd[1553]: time="2025-09-10T00:32:38.657721749Z" level=info msg="CreateContainer within sandbox \"4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23e29009ebdb6c6c93f54a9e53765edf0e90376ae5dcccf62a8cd81476743b22\"" Sep 10 00:32:38.658405 containerd[1553]: time="2025-09-10T00:32:38.658358235Z" level=info msg="StartContainer for \"23e29009ebdb6c6c93f54a9e53765edf0e90376ae5dcccf62a8cd81476743b22\"" Sep 10 00:32:38.718068 containerd[1553]: time="2025-09-10T00:32:38.718025739Z" level=info msg="StartContainer for \"23e29009ebdb6c6c93f54a9e53765edf0e90376ae5dcccf62a8cd81476743b22\" returns successfully" Sep 10 00:32:38.802364 systemd-networkd[1240]: vxlan.calico: Gained IPv6LL Sep 10 00:32:38.858515 containerd[1553]: time="2025-09-10T00:32:38.858074171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:38.861491 containerd[1553]: time="2025-09-10T00:32:38.861429168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 10 00:32:38.863294 containerd[1553]: time="2025-09-10T00:32:38.863260310Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:38.865809 containerd[1553]: time="2025-09-10T00:32:38.865773945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:38.866504 containerd[1553]: time="2025-09-10T00:32:38.866456538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.588786498s" Sep 10 00:32:38.866504 containerd[1553]: time="2025-09-10T00:32:38.866487156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 10 00:32:38.868633 containerd[1553]: time="2025-09-10T00:32:38.868575251Z" 
level=info msg="CreateContainer within sandbox \"23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 00:32:38.885980 containerd[1553]: time="2025-09-10T00:32:38.885931447Z" level=info msg="CreateContainer within sandbox \"23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a5bf5b3daa8c5392359e2640f4cc220e0ce1b73b889e494cad716b56bac4ae34\"" Sep 10 00:32:38.887267 containerd[1553]: time="2025-09-10T00:32:38.887207977Z" level=info msg="StartContainer for \"a5bf5b3daa8c5392359e2640f4cc220e0ce1b73b889e494cad716b56bac4ae34\"" Sep 10 00:32:39.002333 containerd[1553]: time="2025-09-10T00:32:39.002286682Z" level=info msg="StartContainer for \"a5bf5b3daa8c5392359e2640f4cc220e0ce1b73b889e494cad716b56bac4ae34\" returns successfully" Sep 10 00:32:39.004472 containerd[1553]: time="2025-09-10T00:32:39.004446661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 00:32:39.056432 systemd-networkd[1240]: caliace5da07927: Gained IPv6LL Sep 10 00:32:39.540863 kubelet[2646]: E0910 00:32:39.540728 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:39.553074 kubelet[2646]: I0910 00:32:39.552999 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9gf45" podStartSLOduration=35.552981797 podStartE2EDuration="35.552981797s" podCreationTimestamp="2025-09-10 00:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:32:39.552799726 +0000 UTC m=+41.291897725" watchObservedRunningTime="2025-09-10 00:32:39.552981797 +0000 UTC m=+41.292079796" Sep 10 00:32:39.632489 systemd-networkd[1240]: calic3e8b23db92: Gained IPv6LL Sep 10 00:32:40.214536 systemd[1]: Started sshd@8-10.0.0.21:22-10.0.0.1:46684.service - OpenSSH per-connection server daemon (10.0.0.1:46684). Sep 10 00:32:40.255798 sshd[4602]: Accepted publickey for core from 10.0.0.1 port 46684 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:32:40.336910 containerd[1553]: time="2025-09-10T00:32:40.336856395Z" level=info msg="StopPodSandbox for \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\"" Sep 10 00:32:40.341683 sshd[4602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:32:40.347256 systemd-logind[1531]: New session 9 of user core. Sep 10 00:32:40.353558 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 00:32:40.426785 containerd[1553]: 2025-09-10 00:32:40.384 [INFO][4615] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Sep 10 00:32:40.426785 containerd[1553]: 2025-09-10 00:32:40.384 [INFO][4615] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" iface="eth0" netns="/var/run/netns/cni-cd121f50-8fed-b115-9ca5-cdf7f10a78ec" Sep 10 00:32:40.426785 containerd[1553]: 2025-09-10 00:32:40.385 [INFO][4615] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" iface="eth0" netns="/var/run/netns/cni-cd121f50-8fed-b115-9ca5-cdf7f10a78ec" Sep 10 00:32:40.426785 containerd[1553]: 2025-09-10 00:32:40.385 [INFO][4615] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" iface="eth0" netns="/var/run/netns/cni-cd121f50-8fed-b115-9ca5-cdf7f10a78ec" Sep 10 00:32:40.426785 containerd[1553]: 2025-09-10 00:32:40.385 [INFO][4615] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Sep 10 00:32:40.426785 containerd[1553]: 2025-09-10 00:32:40.385 [INFO][4615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Sep 10 00:32:40.426785 containerd[1553]: 2025-09-10 00:32:40.412 [INFO][4626] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" HandleID="k8s-pod-network.05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Workload="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:40.426785 containerd[1553]: 2025-09-10 00:32:40.412 [INFO][4626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:40.426785 containerd[1553]: 2025-09-10 00:32:40.412 [INFO][4626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:40.426785 containerd[1553]: 2025-09-10 00:32:40.418 [WARNING][4626] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" HandleID="k8s-pod-network.05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Workload="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:40.426785 containerd[1553]: 2025-09-10 00:32:40.418 [INFO][4626] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" HandleID="k8s-pod-network.05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Workload="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:40.426785 containerd[1553]: 2025-09-10 00:32:40.420 [INFO][4626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:40.426785 containerd[1553]: 2025-09-10 00:32:40.423 [INFO][4615] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Sep 10 00:32:40.430500 containerd[1553]: time="2025-09-10T00:32:40.430448472Z" level=info msg="TearDown network for sandbox \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\" successfully" Sep 10 00:32:40.430590 containerd[1553]: time="2025-09-10T00:32:40.430508455Z" level=info msg="StopPodSandbox for \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\" returns successfully" Sep 10 00:32:40.431324 containerd[1553]: time="2025-09-10T00:32:40.431278191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b94b4bd57-2vk8t,Uid:e74294c4-7f70-425e-bbde-a3f523072902,Namespace:calico-system,Attempt:1,}" Sep 10 00:32:40.432043 systemd[1]: run-netns-cni\x2dcd121f50\x2d8fed\x2db115\x2d9ca5\x2dcdf7f10a78ec.mount: Deactivated successfully. 
Sep 10 00:32:40.506570 sshd[4602]: pam_unix(sshd:session): session closed for user core Sep 10 00:32:40.511740 systemd[1]: sshd@8-10.0.0.21:22-10.0.0.1:46684.service: Deactivated successfully. Sep 10 00:32:40.517846 systemd-logind[1531]: Session 9 logged out. Waiting for processes to exit. Sep 10 00:32:40.519242 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 00:32:40.520670 systemd-logind[1531]: Removed session 9. Sep 10 00:32:40.544133 kubelet[2646]: E0910 00:32:40.544023 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:40.558100 systemd-networkd[1240]: calia324af93cbd: Link UP Sep 10 00:32:40.558390 systemd-networkd[1240]: calia324af93cbd: Gained carrier Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.489 [INFO][4643] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0 calico-kube-controllers-b94b4bd57- calico-system e74294c4-7f70-425e-bbde-a3f523072902 997 0 2025-09-10 00:32:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b94b4bd57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-b94b4bd57-2vk8t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia324af93cbd [] [] }} ContainerID="7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" Namespace="calico-system" Pod="calico-kube-controllers-b94b4bd57-2vk8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-" Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.489 [INFO][4643] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" Namespace="calico-system" Pod="calico-kube-controllers-b94b4bd57-2vk8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.520 [INFO][4658] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" HandleID="k8s-pod-network.7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" Workload="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.521 [INFO][4658] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" HandleID="k8s-pod-network.7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" Workload="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f7d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-b94b4bd57-2vk8t", "timestamp":"2025-09-10 00:32:40.520896051 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.521 [INFO][4658] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.521 [INFO][4658] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.521 [INFO][4658] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.528 [INFO][4658] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" host="localhost" Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.532 [INFO][4658] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.537 [INFO][4658] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.538 [INFO][4658] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.540 [INFO][4658] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.540 [INFO][4658] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" host="localhost" Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.542 [INFO][4658] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480 Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.546 [INFO][4658] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" host="localhost" Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.551 [INFO][4658] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" host="localhost" Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.551 [INFO][4658] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" host="localhost" Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.551 [INFO][4658] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:32:40.574357 containerd[1553]: 2025-09-10 00:32:40.551 [INFO][4658] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" HandleID="k8s-pod-network.7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" Workload="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:40.574923 containerd[1553]: 2025-09-10 00:32:40.555 [INFO][4643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" Namespace="calico-system" Pod="calico-kube-controllers-b94b4bd57-2vk8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0", GenerateName:"calico-kube-controllers-b94b4bd57-", Namespace:"calico-system", SelfLink:"", UID:"e74294c4-7f70-425e-bbde-a3f523072902", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b94b4bd57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-b94b4bd57-2vk8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia324af93cbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:40.574923 containerd[1553]: 2025-09-10 00:32:40.555 [INFO][4643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" Namespace="calico-system" Pod="calico-kube-controllers-b94b4bd57-2vk8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:40.574923 containerd[1553]: 2025-09-10 00:32:40.555 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia324af93cbd ContainerID="7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" Namespace="calico-system" Pod="calico-kube-controllers-b94b4bd57-2vk8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:40.574923 containerd[1553]: 2025-09-10 00:32:40.558 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" Namespace="calico-system" Pod="calico-kube-controllers-b94b4bd57-2vk8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:40.574923 containerd[1553]: 2025-09-10 00:32:40.560 [INFO][4643] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" Namespace="calico-system" Pod="calico-kube-controllers-b94b4bd57-2vk8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0", GenerateName:"calico-kube-controllers-b94b4bd57-", Namespace:"calico-system", SelfLink:"", UID:"e74294c4-7f70-425e-bbde-a3f523072902", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b94b4bd57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480", Pod:"calico-kube-controllers-b94b4bd57-2vk8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia324af93cbd", MAC:"12:68:ca:07:65:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:40.574923 containerd[1553]: 2025-09-10 00:32:40.571 [INFO][4643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480" Namespace="calico-system" Pod="calico-kube-controllers-b94b4bd57-2vk8t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:40.594954 containerd[1553]: time="2025-09-10T00:32:40.594807954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:32:40.594954 containerd[1553]: time="2025-09-10T00:32:40.594909875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:32:40.594954 containerd[1553]: time="2025-09-10T00:32:40.594927579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:40.595497 containerd[1553]: time="2025-09-10T00:32:40.595036793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:40.628196 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:32:40.654981 containerd[1553]: time="2025-09-10T00:32:40.654929190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b94b4bd57-2vk8t,Uid:e74294c4-7f70-425e-bbde-a3f523072902,Namespace:calico-system,Attempt:1,} returns sandbox id \"7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480\"" Sep 10 00:32:41.337307 containerd[1553]: time="2025-09-10T00:32:41.337221269Z" level=info msg="StopPodSandbox for \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\"" Sep 10 00:32:41.432907 systemd[1]: run-containerd-runc-k8s.io-7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480-runc.A5ZbYz.mount: Deactivated successfully. Sep 10 00:32:41.440554 containerd[1553]: 2025-09-10 00:32:41.394 [INFO][4730] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Sep 10 00:32:41.440554 containerd[1553]: 2025-09-10 00:32:41.394 [INFO][4730] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" iface="eth0" netns="/var/run/netns/cni-e650ed9a-e6b7-d139-1420-851a4ca3ac4f" Sep 10 00:32:41.440554 containerd[1553]: 2025-09-10 00:32:41.394 [INFO][4730] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" iface="eth0" netns="/var/run/netns/cni-e650ed9a-e6b7-d139-1420-851a4ca3ac4f" Sep 10 00:32:41.440554 containerd[1553]: 2025-09-10 00:32:41.394 [INFO][4730] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" iface="eth0" netns="/var/run/netns/cni-e650ed9a-e6b7-d139-1420-851a4ca3ac4f" Sep 10 00:32:41.440554 containerd[1553]: 2025-09-10 00:32:41.394 [INFO][4730] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Sep 10 00:32:41.440554 containerd[1553]: 2025-09-10 00:32:41.394 [INFO][4730] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Sep 10 00:32:41.440554 containerd[1553]: 2025-09-10 00:32:41.420 [INFO][4743] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" HandleID="k8s-pod-network.ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Workload="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:41.440554 containerd[1553]: 2025-09-10 00:32:41.420 [INFO][4743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:41.440554 containerd[1553]: 2025-09-10 00:32:41.421 [INFO][4743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:41.440554 containerd[1553]: 2025-09-10 00:32:41.431 [WARNING][4743] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" HandleID="k8s-pod-network.ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Workload="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:41.440554 containerd[1553]: 2025-09-10 00:32:41.431 [INFO][4743] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" HandleID="k8s-pod-network.ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Workload="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:41.440554 containerd[1553]: 2025-09-10 00:32:41.432 [INFO][4743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:41.440554 containerd[1553]: 2025-09-10 00:32:41.436 [INFO][4730] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Sep 10 00:32:41.443397 containerd[1553]: time="2025-09-10T00:32:41.443339354Z" level=info msg="TearDown network for sandbox \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\" successfully" Sep 10 00:32:41.443397 containerd[1553]: time="2025-09-10T00:32:41.443393696Z" level=info msg="StopPodSandbox for \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\" returns successfully" Sep 10 00:32:41.444532 containerd[1553]: time="2025-09-10T00:32:41.444170556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lkztn,Uid:da4aa882-2a3b-4ce6-a838-c6e29f20e7da,Namespace:calico-system,Attempt:1,}" Sep 10 00:32:41.445558 systemd[1]: run-netns-cni\x2de650ed9a\x2de6b7\x2dd139\x2d1420\x2d851a4ca3ac4f.mount: Deactivated successfully. Sep 10 00:32:41.550130 kubelet[2646]: E0910 00:32:41.549611 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:41.601387 systemd-networkd[1240]: calib3fc0530cfa: Link UP Sep 10 00:32:41.603415 systemd-networkd[1240]: calib3fc0530cfa: Gained carrier Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.498 [INFO][4752] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lkztn-eth0 csi-node-driver- calico-system da4aa882-2a3b-4ce6-a838-c6e29f20e7da 1030 0 2025-09-10 00:32:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lkztn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib3fc0530cfa [] [] }} ContainerID="833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" Namespace="calico-system" Pod="csi-node-driver-lkztn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkztn-" Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.498 [INFO][4752] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" Namespace="calico-system" Pod="csi-node-driver-lkztn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.533 [INFO][4766] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" HandleID="k8s-pod-network.833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" Workload="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.533 [INFO][4766] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" HandleID="k8s-pod-network.833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" Workload="localhost-k8s-csi--node--driver--lkztn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lkztn", "timestamp":"2025-09-10 00:32:41.533462072 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.533 [INFO][4766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.533 [INFO][4766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.533 [INFO][4766] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.541 [INFO][4766] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" host="localhost" Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.546 [INFO][4766] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.549 [INFO][4766] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.551 [INFO][4766] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.554 [INFO][4766] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.554 [INFO][4766] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" host="localhost" Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.555 [INFO][4766] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.559 [INFO][4766] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" host="localhost" Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.593 [INFO][4766] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" host="localhost" Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.593 [INFO][4766] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] 
handle="k8s-pod-network.833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" host="localhost" Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.593 [INFO][4766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:41.619969 containerd[1553]: 2025-09-10 00:32:41.593 [INFO][4766] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" HandleID="k8s-pod-network.833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" Workload="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:41.620590 containerd[1553]: 2025-09-10 00:32:41.597 [INFO][4752] cni-plugin/k8s.go 418: Populated endpoint ContainerID="833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" Namespace="calico-system" Pod="csi-node-driver-lkztn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkztn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lkztn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"da4aa882-2a3b-4ce6-a838-c6e29f20e7da", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lkztn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3fc0530cfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:41.620590 containerd[1553]: 2025-09-10 00:32:41.597 [INFO][4752] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" Namespace="calico-system" Pod="csi-node-driver-lkztn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:41.620590 containerd[1553]: 2025-09-10 00:32:41.597 [INFO][4752] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3fc0530cfa ContainerID="833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" Namespace="calico-system" Pod="csi-node-driver-lkztn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:41.620590 containerd[1553]: 2025-09-10 00:32:41.605 [INFO][4752] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" Namespace="calico-system" Pod="csi-node-driver-lkztn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:41.620590 containerd[1553]: 2025-09-10 00:32:41.606 [INFO][4752] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" Namespace="calico-system" Pod="csi-node-driver-lkztn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkztn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lkztn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"da4aa882-2a3b-4ce6-a838-c6e29f20e7da", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b", Pod:"csi-node-driver-lkztn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3fc0530cfa", MAC:"6a:53:8c:80:2d:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:41.620590 containerd[1553]: 2025-09-10 00:32:41.615 [INFO][4752] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b" Namespace="calico-system" Pod="csi-node-driver-lkztn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:41.638499 containerd[1553]: time="2025-09-10T00:32:41.638133338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:32:41.638499 containerd[1553]: time="2025-09-10T00:32:41.638201325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:32:41.638499 containerd[1553]: time="2025-09-10T00:32:41.638216113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:41.638499 containerd[1553]: time="2025-09-10T00:32:41.638394619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:41.670399 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:32:41.686302 containerd[1553]: time="2025-09-10T00:32:41.686260362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lkztn,Uid:da4aa882-2a3b-4ce6-a838-c6e29f20e7da,Namespace:calico-system,Attempt:1,} returns sandbox id \"833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b\"" Sep 10 00:32:41.936437 systemd-networkd[1240]: calia324af93cbd: Gained IPv6LL Sep 10 00:32:42.113461 containerd[1553]: time="2025-09-10T00:32:42.113409165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:42.114290 containerd[1553]: time="2025-09-10T00:32:42.114210851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 10 00:32:42.115513 containerd[1553]: time="2025-09-10T00:32:42.115473283Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:42.117753 containerd[1553]: time="2025-09-10T00:32:42.117718802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:42.118415 containerd[1553]: time="2025-09-10T00:32:42.118379413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.113867119s" Sep 10 00:32:42.118463 containerd[1553]: time="2025-09-10T00:32:42.118419619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 10 00:32:42.119660 containerd[1553]: time="2025-09-10T00:32:42.119465785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 00:32:42.120740 containerd[1553]: time="2025-09-10T00:32:42.120714621Z" level=info msg="CreateContainer within sandbox \"23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 00:32:42.135797 containerd[1553]: time="2025-09-10T00:32:42.135755500Z" level=info msg="CreateContainer within sandbox \"23c4934aeb185e4e7f979de0214089a88c440738e2a7f2bc107c89bf43adce4b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"22827270f3571b77b13c8712e28e8871320d5b4d5d155253833d86b5b0558f83\"" Sep 10 00:32:42.136334 containerd[1553]: time="2025-09-10T00:32:42.136293812Z" level=info msg="StartContainer for \"22827270f3571b77b13c8712e28e8871320d5b4d5d155253833d86b5b0558f83\"" Sep 10 00:32:42.337050 containerd[1553]: time="2025-09-10T00:32:42.337007147Z" level=info msg="StopPodSandbox for \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\"" Sep 10 00:32:42.338159 containerd[1553]: time="2025-09-10T00:32:42.338040458Z" level=info msg="StopPodSandbox 
for \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\"" Sep 10 00:32:42.338711 containerd[1553]: time="2025-09-10T00:32:42.338550116Z" level=info msg="StopPodSandbox for \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\"" Sep 10 00:32:42.415225 containerd[1553]: time="2025-09-10T00:32:42.415165065Z" level=info msg="StartContainer for \"22827270f3571b77b13c8712e28e8871320d5b4d5d155253833d86b5b0558f83\" returns successfully" Sep 10 00:32:42.434003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3560467748.mount: Deactivated successfully. Sep 10 00:32:42.654364 kubelet[2646]: I0910 00:32:42.654191 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-ddd44bf4-dbjh4" podStartSLOduration=1.812043759 podStartE2EDuration="6.654169865s" podCreationTimestamp="2025-09-10 00:32:36 +0000 UTC" firstStartedPulling="2025-09-10 00:32:37.277095009 +0000 UTC m=+39.016193008" lastFinishedPulling="2025-09-10 00:32:42.119221115 +0000 UTC m=+43.858319114" observedRunningTime="2025-09-10 00:32:42.654050591 +0000 UTC m=+44.393148590" watchObservedRunningTime="2025-09-10 00:32:42.654169865 +0000 UTC m=+44.393267854" Sep 10 00:32:42.702343 containerd[1553]: 2025-09-10 00:32:42.626 [INFO][4898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Sep 10 00:32:42.702343 containerd[1553]: 2025-09-10 00:32:42.626 [INFO][4898] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" iface="eth0" netns="/var/run/netns/cni-292efb09-614b-2bce-243b-cbc937751587" Sep 10 00:32:42.702343 containerd[1553]: 2025-09-10 00:32:42.626 [INFO][4898] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" iface="eth0" netns="/var/run/netns/cni-292efb09-614b-2bce-243b-cbc937751587" Sep 10 00:32:42.702343 containerd[1553]: 2025-09-10 00:32:42.627 [INFO][4898] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" iface="eth0" netns="/var/run/netns/cni-292efb09-614b-2bce-243b-cbc937751587" Sep 10 00:32:42.702343 containerd[1553]: 2025-09-10 00:32:42.627 [INFO][4898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Sep 10 00:32:42.702343 containerd[1553]: 2025-09-10 00:32:42.627 [INFO][4898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Sep 10 00:32:42.702343 containerd[1553]: 2025-09-10 00:32:42.657 [INFO][4923] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" HandleID="k8s-pod-network.6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Workload="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:42.702343 containerd[1553]: 2025-09-10 00:32:42.663 [INFO][4923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:42.702343 containerd[1553]: 2025-09-10 00:32:42.663 [INFO][4923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:42.702343 containerd[1553]: 2025-09-10 00:32:42.688 [WARNING][4923] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" HandleID="k8s-pod-network.6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Workload="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:42.702343 containerd[1553]: 2025-09-10 00:32:42.688 [INFO][4923] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" HandleID="k8s-pod-network.6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Workload="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:42.702343 containerd[1553]: 2025-09-10 00:32:42.692 [INFO][4923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:42.702343 containerd[1553]: 2025-09-10 00:32:42.697 [INFO][4898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Sep 10 00:32:42.705423 containerd[1553]: time="2025-09-10T00:32:42.702557348Z" level=info msg="TearDown network for sandbox \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\" successfully" Sep 10 00:32:42.705423 containerd[1553]: time="2025-09-10T00:32:42.702591593Z" level=info msg="StopPodSandbox for \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\" returns successfully" Sep 10 00:32:42.707687 containerd[1553]: time="2025-09-10T00:32:42.707658903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-954df557d-d5768,Uid:ac80b016-97cf-437e-8b81-8fe6c2942f59,Namespace:calico-apiserver,Attempt:1,}" Sep 10 00:32:42.708403 systemd[1]: run-netns-cni\x2d292efb09\x2d614b\x2d2bce\x2d243b\x2dcbc937751587.mount: Deactivated successfully. Sep 10 00:32:42.710684 containerd[1553]: 2025-09-10 00:32:42.624 [INFO][4899] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Sep 10 00:32:42.710684 containerd[1553]: 2025-09-10 00:32:42.624 [INFO][4899] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" iface="eth0" netns="/var/run/netns/cni-fbb06606-73d5-39ab-8fac-d1d2388927c1" Sep 10 00:32:42.710684 containerd[1553]: 2025-09-10 00:32:42.624 [INFO][4899] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" iface="eth0" netns="/var/run/netns/cni-fbb06606-73d5-39ab-8fac-d1d2388927c1" Sep 10 00:32:42.710684 containerd[1553]: 2025-09-10 00:32:42.625 [INFO][4899] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" iface="eth0" netns="/var/run/netns/cni-fbb06606-73d5-39ab-8fac-d1d2388927c1" Sep 10 00:32:42.710684 containerd[1553]: 2025-09-10 00:32:42.625 [INFO][4899] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Sep 10 00:32:42.710684 containerd[1553]: 2025-09-10 00:32:42.625 [INFO][4899] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Sep 10 00:32:42.710684 containerd[1553]: 2025-09-10 00:32:42.668 [INFO][4921] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" HandleID="k8s-pod-network.fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Workload="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:32:42.710684 containerd[1553]: 2025-09-10 00:32:42.668 [INFO][4921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:42.710684 containerd[1553]: 2025-09-10 00:32:42.692 [INFO][4921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:42.710684 containerd[1553]: 2025-09-10 00:32:42.698 [WARNING][4921] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" HandleID="k8s-pod-network.fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Workload="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:32:42.710684 containerd[1553]: 2025-09-10 00:32:42.698 [INFO][4921] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" HandleID="k8s-pod-network.fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Workload="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:32:42.710684 containerd[1553]: 2025-09-10 00:32:42.700 [INFO][4921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:42.710684 containerd[1553]: 2025-09-10 00:32:42.703 [INFO][4899] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Sep 10 00:32:42.712359 containerd[1553]: time="2025-09-10T00:32:42.712319911Z" level=info msg="TearDown network for sandbox \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\" successfully" Sep 10 00:32:42.712359 containerd[1553]: time="2025-09-10T00:32:42.712357351Z" level=info msg="StopPodSandbox for \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\" returns successfully" Sep 10 00:32:42.713893 systemd[1]: run-netns-cni\x2dfbb06606\x2d73d5\x2d39ab\x2d8fac\x2dd1d2388927c1.mount: Deactivated successfully. Sep 10 00:32:42.714359 containerd[1553]: time="2025-09-10T00:32:42.714326491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-cwk66,Uid:444dd844-3e2d-443a-aa9b-1761b39df54b,Namespace:calico-system,Attempt:1,}" Sep 10 00:32:42.718215 containerd[1553]: 2025-09-10 00:32:42.626 [INFO][4897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Sep 10 00:32:42.718215 containerd[1553]: 2025-09-10 00:32:42.627 [INFO][4897] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" iface="eth0" netns="/var/run/netns/cni-6bd226da-978b-6d93-0105-8f5b267ad2b2" Sep 10 00:32:42.718215 containerd[1553]: 2025-09-10 00:32:42.628 [INFO][4897] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" iface="eth0" netns="/var/run/netns/cni-6bd226da-978b-6d93-0105-8f5b267ad2b2" Sep 10 00:32:42.718215 containerd[1553]: 2025-09-10 00:32:42.629 [INFO][4897] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" iface="eth0" netns="/var/run/netns/cni-6bd226da-978b-6d93-0105-8f5b267ad2b2" Sep 10 00:32:42.718215 containerd[1553]: 2025-09-10 00:32:42.630 [INFO][4897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Sep 10 00:32:42.718215 containerd[1553]: 2025-09-10 00:32:42.630 [INFO][4897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Sep 10 00:32:42.718215 containerd[1553]: 2025-09-10 00:32:42.701 [INFO][4934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" HandleID="k8s-pod-network.cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Workload="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:42.718215 containerd[1553]: 2025-09-10 00:32:42.701 [INFO][4934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:42.718215 containerd[1553]: 2025-09-10 00:32:42.701 [INFO][4934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:42.718215 containerd[1553]: 2025-09-10 00:32:42.707 [WARNING][4934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" HandleID="k8s-pod-network.cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Workload="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:42.718215 containerd[1553]: 2025-09-10 00:32:42.707 [INFO][4934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" HandleID="k8s-pod-network.cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Workload="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:42.718215 containerd[1553]: 2025-09-10 00:32:42.709 [INFO][4934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:42.718215 containerd[1553]: 2025-09-10 00:32:42.715 [INFO][4897] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Sep 10 00:32:42.718690 containerd[1553]: time="2025-09-10T00:32:42.718436413Z" level=info msg="TearDown network for sandbox \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\" successfully" Sep 10 00:32:42.718690 containerd[1553]: time="2025-09-10T00:32:42.718458725Z" level=info msg="StopPodSandbox for \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\" returns successfully" Sep 10 00:32:42.718852 kubelet[2646]: E0910 00:32:42.718824 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:42.719545 containerd[1553]: time="2025-09-10T00:32:42.719481537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5kznd,Uid:08e6a60c-fbd0-4ac8-87e2-26029f752560,Namespace:kube-system,Attempt:1,}" Sep 10 00:32:42.722122 systemd[1]: run-netns-cni\x2d6bd226da\x2d978b\x2d6d93\x2d0105\x2d8f5b267ad2b2.mount: Deactivated successfully. Sep 10 00:32:42.857637 systemd-networkd[1240]: cali6916278af54: Link UP Sep 10 00:32:42.858834 systemd-networkd[1240]: cali6916278af54: Gained carrier Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.789 [INFO][4977] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0 coredns-7c65d6cfc9- kube-system 08e6a60c-fbd0-4ac8-87e2-26029f752560 1042 0 2025-09-10 00:32:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-5kznd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6916278af54 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5kznd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5kznd-" Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.789 [INFO][4977] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5kznd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.819 [INFO][5004] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" HandleID="k8s-pod-network.e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" Workload="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.819 [INFO][5004] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" HandleID="k8s-pod-network.e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" Workload="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-5kznd", "timestamp":"2025-09-10 00:32:42.819010058 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.819 [INFO][5004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.819 [INFO][5004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.819 [INFO][5004] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.825 [INFO][5004] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" host="localhost" Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.830 [INFO][5004] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.834 [INFO][5004] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.835 [INFO][5004] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.837 [INFO][5004] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.837 [INFO][5004] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" host="localhost" Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.838 [INFO][5004] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620 Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.846 [INFO][5004] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" host="localhost" Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.850 [INFO][5004] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" host="localhost" Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.850 [INFO][5004] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" host="localhost" Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.850 [INFO][5004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
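[Editor's note] The StopPodSandbox teardowns earlier in this stretch (ae1f6a1a…, 6253316…, fc6344c…, cd4eafc…) all show the same release pattern: free addresses by handleID first, tolerate the WARNING when that handle no longer exists, then fall back to releasing by workload ID. A sketch of that fallback follows, with an invented in-memory store standing in for the Calico datastore.

```go
// Sketch of the release-by-handle-with-workload-fallback pattern above.
package main

import "fmt"

// ipamStore is an illustrative stand-in for the datastore the plugin consults.
type ipamStore struct {
	byHandle   map[string][]string // handleID -> assigned IPs
	byWorkload map[string][]string // workload endpoint name -> assigned IPs
}

func (s *ipamStore) releaseByHandle(handleID string) bool {
	if _, ok := s.byHandle[handleID]; !ok {
		// ipam_plugin.go 429: "Asked to release address but it doesn't
		// exist. Ignoring" - not an error, just the signal to fall back.
		return false
	}
	delete(s.byHandle, handleID) // ipam_plugin.go 412
	return true
}

func (s *ipamStore) releaseByWorkload(workload string) {
	delete(s.byWorkload, workload) // ipam_plugin.go 440
}

func release(s *ipamStore, handleID, workload string) {
	if !s.releaseByHandle(handleID) {
		s.releaseByWorkload(workload)
	}
}

func main() {
	s := &ipamStore{byHandle: map[string][]string{}, byWorkload: map[string][]string{}}
	release(s, "k8s-pod-network.ae1f6a1a44a7...", "localhost-k8s-csi--node--driver--lkztn-eth0")
	fmt.Println("teardown processing complete") // cni-plugin/k8s.go 653
}
```

The fallback keeps DEL idempotent: a repeated teardown of an already-released endpoint logs the WARNING and still "returns successfully", as seen above.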
Sep 10 00:32:42.873254 containerd[1553]: 2025-09-10 00:32:42.850 [INFO][5004] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" HandleID="k8s-pod-network.e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" Workload="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:42.873852 containerd[1553]: 2025-09-10 00:32:42.854 [INFO][4977] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5kznd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"08e6a60c-fbd0-4ac8-87e2-26029f752560", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-5kznd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6916278af54", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:42.873852 containerd[1553]: 2025-09-10 00:32:42.854 [INFO][4977] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5kznd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:42.873852 containerd[1553]: 2025-09-10 00:32:42.854 [INFO][4977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6916278af54 ContainerID="e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5kznd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:42.873852 containerd[1553]: 2025-09-10 00:32:42.858 [INFO][4977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5kznd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:42.873852 
containerd[1553]: 2025-09-10 00:32:42.859 [INFO][4977] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5kznd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"08e6a60c-fbd0-4ac8-87e2-26029f752560", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620", Pod:"coredns-7c65d6cfc9-5kznd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6916278af54", MAC:"b2:7a:46:58:93:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:42.873852 containerd[1553]: 2025-09-10 00:32:42.869 [INFO][4977] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5kznd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:42.891793 containerd[1553]: time="2025-09-10T00:32:42.891463528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:32:42.891793 containerd[1553]: time="2025-09-10T00:32:42.891513012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:32:42.891793 containerd[1553]: time="2025-09-10T00:32:42.891522390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:42.891793 containerd[1553]: time="2025-09-10T00:32:42.891603742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:42.917488 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:32:42.950870 containerd[1553]: time="2025-09-10T00:32:42.950812558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5kznd,Uid:08e6a60c-fbd0-4ac8-87e2-26029f752560,Namespace:kube-system,Attempt:1,} returns sandbox id \"e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620\"" Sep 10 00:32:42.951762 kubelet[2646]: E0910 00:32:42.951735 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:42.953614 containerd[1553]: time="2025-09-10T00:32:42.953569338Z" level=info msg="CreateContainer within sandbox \"e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:32:42.962658 systemd-networkd[1240]: cali568b9aa7b6f: Link UP Sep 10 00:32:42.963846 systemd-networkd[1240]: cali568b9aa7b6f: Gained carrier Sep 10 00:32:42.981258 containerd[1553]: time="2025-09-10T00:32:42.977846652Z" level=info msg="CreateContainer within sandbox \"e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9978c7a2bc21a530e0478514480cb7164cf7422d3a55b9c809498c46b9a1478f\"" Sep 10 00:32:42.996282 containerd[1553]: time="2025-09-10T00:32:42.992904313Z" level=info msg="StartContainer for \"9978c7a2bc21a530e0478514480cb7164cf7422d3a55b9c809498c46b9a1478f\"" Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.788 [INFO][4966] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--cwk66-eth0 goldmane-7988f88666- calico-system 444dd844-3e2d-443a-aa9b-1761b39df54b 1041 0 2025-09-10 00:32:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-cwk66 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali568b9aa7b6f [] [] }} ContainerID="eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" Namespace="calico-system" Pod="goldmane-7988f88666-cwk66" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cwk66-" Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.788 [INFO][4966] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" Namespace="calico-system" Pod="goldmane-7988f88666-cwk66" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.824 [INFO][4998] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" HandleID="k8s-pod-network.eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" Workload="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.824 [INFO][4998] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" 
HandleID="k8s-pod-network.eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" Workload="localhost-k8s-goldmane--7988f88666--cwk66-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7940), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-cwk66", "timestamp":"2025-09-10 00:32:42.824042273 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.824 [INFO][4998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.850 [INFO][4998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.850 [INFO][4998] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.926 [INFO][4998] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" host="localhost" Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.932 [INFO][4998] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.935 [INFO][4998] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.937 [INFO][4998] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.943 [INFO][4998] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.943 [INFO][4998] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" host="localhost" Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.944 [INFO][4998] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00 Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.948 [INFO][4998] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" host="localhost" Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.955 [INFO][4998] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" host="localhost" Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.955 [INFO][4998] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" host="localhost" Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.955 [INFO][4998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
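[Editor's note] Each "Setting the host side veth name" entry above shows the plugin's interface-naming scheme: a "cali" prefix plus a short hex digest, truncated to the kernel's 15-byte interface-name limit (calia324af93cbd, calib3fc0530cfa, cali6916278af54, cali568b9aa7b6f). The exact hash input is an implementation detail of the CNI plugin, so treat this sketch as shape only.

```go
// Sketch of the "cali"+digest veth naming seen above; the hash input
// used here is an assumption, not the plugin's real derivation.
package main

import (
	"crypto/sha1"
	"fmt"
)

func hostVethName(endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	// 4-char prefix + 11 hex chars = 15 bytes = IFNAMSIZ-1.
	return fmt.Sprintf("cali%x", sum[:])[:15]
}

func main() {
	fmt.Println(hostVethName("localhost-k8s-csi--node--driver--lkztn-eth0"))
}
```

Deriving the name from the endpoint identity makes it stable across retries, which is why the same workload keeps the same caliXXXX interface through teardown and re-add.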
Sep 10 00:32:43.007413 containerd[1553]: 2025-09-10 00:32:42.955 [INFO][4998] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" HandleID="k8s-pod-network.eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" Workload="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:32:43.008060 containerd[1553]: 2025-09-10 00:32:42.960 [INFO][4966] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" Namespace="calico-system" Pod="goldmane-7988f88666-cwk66" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cwk66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--cwk66-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"444dd844-3e2d-443a-aa9b-1761b39df54b", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-cwk66", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali568b9aa7b6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:43.008060 containerd[1553]: 2025-09-10 00:32:42.960 [INFO][4966] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" Namespace="calico-system" Pod="goldmane-7988f88666-cwk66" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:32:43.008060 containerd[1553]: 2025-09-10 00:32:42.960 [INFO][4966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali568b9aa7b6f ContainerID="eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" Namespace="calico-system" Pod="goldmane-7988f88666-cwk66" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:32:43.008060 containerd[1553]: 2025-09-10 00:32:42.964 [INFO][4966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" Namespace="calico-system" Pod="goldmane-7988f88666-cwk66" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:32:43.008060 containerd[1553]: 2025-09-10 00:32:42.964 [INFO][4966] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" Namespace="calico-system" Pod="goldmane-7988f88666-cwk66" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cwk66-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--cwk66-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"444dd844-3e2d-443a-aa9b-1761b39df54b", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00", Pod:"goldmane-7988f88666-cwk66", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali568b9aa7b6f", MAC:"a2:75:11:a2:6f:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:43.008060 containerd[1553]: 2025-09-10 00:32:42.991 [INFO][4966] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00" Namespace="calico-system" Pod="goldmane-7988f88666-cwk66" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:32:43.041103 containerd[1553]: time="2025-09-10T00:32:43.040771622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:32:43.041103 containerd[1553]: time="2025-09-10T00:32:43.040850419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:32:43.041103 containerd[1553]: time="2025-09-10T00:32:43.040869105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:43.041103 containerd[1553]: time="2025-09-10T00:32:43.040966898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:43.082347 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:32:43.082929 containerd[1553]: time="2025-09-10T00:32:43.082829334Z" level=info msg="StartContainer for \"9978c7a2bc21a530e0478514480cb7164cf7422d3a55b9c809498c46b9a1478f\" returns successfully" Sep 10 00:32:43.090747 systemd-networkd[1240]: calic753e922da7: Link UP Sep 10 00:32:43.091105 systemd-networkd[1240]: calic753e922da7: Gained carrier Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:42.786 [INFO][4955] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--954df557d--d5768-eth0 calico-apiserver-954df557d- calico-apiserver ac80b016-97cf-437e-8b81-8fe6c2942f59 1040 0 2025-09-10 00:32:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:954df557d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-954df557d-d5768 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic753e922da7 [] [] }} ContainerID="7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-d5768" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--d5768-" Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:42.786 [INFO][4955] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-d5768" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:42.826 [INFO][4996] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" HandleID="k8s-pod-network.7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" Workload="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:42.826 [INFO][4996] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" HandleID="k8s-pod-network.7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" Workload="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00013ab50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-954df557d-d5768", "timestamp":"2025-09-10 00:32:42.826305896 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:42.826 [INFO][4996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:42.955 [INFO][4996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
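Annotation: the timestamps make the locking visible. The apiserver pod's request [4996] logged "About to acquire host-wide IPAM lock." at 00:32:42.826 but only acquired it at 00:32:42.955, the same instant the goldmane request [4998] released it, so concurrent CNI ADDs on one node serialize their block updates. A toy illustration of that serialization follows (this is not Calico's implementation, which uses a datastore-backed host-wide lock rather than an in-process mutex):

package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu   sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	next int        // next free offset inside the affine block 192.168.88.128/26
}

func (h *hostIPAM) autoAssign(pod string) string {
	h.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.88.%d/26", 128+h.next)
	h.next++
	return ip
}

func main() {
	h := &hostIPAM{next: 6} // .134 is next, matching the goldmane assignment
	var wg sync.WaitGroup
	for _, pod := range []string{"goldmane-7988f88666-cwk66", "calico-apiserver-954df557d-d5768"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			// Whichever goroutine wins the lock gets .134, the other .135.
			fmt.Println(p, "->", h.autoAssign(p))
		}(pod)
	}
	wg.Wait()
}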
Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:42.955 [INFO][4996] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:43.027 [INFO][4996] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" host="localhost" Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:43.038 [INFO][4996] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:43.047 [INFO][4996] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:43.049 [INFO][4996] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:43.051 [INFO][4996] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:43.052 [INFO][4996] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" host="localhost" Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:43.053 [INFO][4996] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:43.059 [INFO][4996] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" host="localhost" Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:43.065 [INFO][4996] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" host="localhost" Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:43.065 [INFO][4996] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" host="localhost" Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:43.065 [INFO][4996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
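Annotation: when auditing allocations like the two above, it helps to pull the "Successfully claimed IPs" entries out of the journal. The sketch below parses the format exactly as it appears in this log; treat the pattern as a starting point and adjust it if your collector rewrites lines. One way to feed it, assuming containerd's output lands in the systemd journal: journalctl -u containerd | go run claims.go

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches ipam.go 1256 entries as they appear above, e.g.
//   Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network..." host="localhost"
var claimed = regexp.MustCompile(`Successfully claimed IPs: \[([^\]]+)\] block=(\S+) handle="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these journal lines are long
	for sc.Scan() {
		if m := claimed.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("ip=%-20s block=%-18s handle=%s\n", m[1], m[2], m[3])
		}
	}
}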
Sep 10 00:32:43.101829 containerd[1553]: 2025-09-10 00:32:43.065 [INFO][4996] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" HandleID="k8s-pod-network.7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" Workload="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:43.103476 containerd[1553]: 2025-09-10 00:32:43.074 [INFO][4955] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-d5768" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--954df557d--d5768-eth0", GenerateName:"calico-apiserver-954df557d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac80b016-97cf-437e-8b81-8fe6c2942f59", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"954df557d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-954df557d-d5768", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic753e922da7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:43.103476 containerd[1553]: 2025-09-10 00:32:43.075 [INFO][4955] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-d5768" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:43.103476 containerd[1553]: 2025-09-10 00:32:43.075 [INFO][4955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic753e922da7 ContainerID="7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-d5768" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:43.103476 containerd[1553]: 2025-09-10 00:32:43.084 [INFO][4955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-d5768" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:43.103476 containerd[1553]: 2025-09-10 00:32:43.085 [INFO][4955] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-d5768" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--954df557d--d5768-eth0", GenerateName:"calico-apiserver-954df557d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac80b016-97cf-437e-8b81-8fe6c2942f59", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"954df557d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c", Pod:"calico-apiserver-954df557d-d5768", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic753e922da7", MAC:"c6:33:7e:26:0a:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:43.103476 containerd[1553]: 2025-09-10 00:32:43.096 [INFO][4955] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-d5768" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:43.125068 containerd[1553]: time="2025-09-10T00:32:43.124985732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-cwk66,Uid:444dd844-3e2d-443a-aa9b-1761b39df54b,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00\"" Sep 10 00:32:43.144978 containerd[1553]: time="2025-09-10T00:32:43.144380639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:32:43.145187 containerd[1553]: time="2025-09-10T00:32:43.144963473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:32:43.145187 containerd[1553]: time="2025-09-10T00:32:43.144985835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:43.145187 containerd[1553]: time="2025-09-10T00:32:43.145164010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:43.171504 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:32:43.198891 containerd[1553]: time="2025-09-10T00:32:43.198844819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-954df557d-d5768,Uid:ac80b016-97cf-437e-8b81-8fe6c2942f59,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c\"" Sep 10 00:32:43.337349 containerd[1553]: time="2025-09-10T00:32:43.337115815Z" level=info msg="StopPodSandbox for \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\"" Sep 10 00:32:43.536433 systemd-networkd[1240]: calib3fc0530cfa: Gained IPv6LL Sep 10 00:32:43.559309 kubelet[2646]: E0910 00:32:43.559102 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:43.633887 containerd[1553]: 2025-09-10 00:32:43.386 [INFO][5228] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Sep 10 00:32:43.633887 containerd[1553]: 2025-09-10 00:32:43.386 [INFO][5228] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" iface="eth0" netns="/var/run/netns/cni-a132c032-a734-6a0a-4c09-393c0f83443d" Sep 10 00:32:43.633887 containerd[1553]: 2025-09-10 00:32:43.387 [INFO][5228] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" iface="eth0" netns="/var/run/netns/cni-a132c032-a734-6a0a-4c09-393c0f83443d" Sep 10 00:32:43.633887 containerd[1553]: 2025-09-10 00:32:43.387 [INFO][5228] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" iface="eth0" netns="/var/run/netns/cni-a132c032-a734-6a0a-4c09-393c0f83443d" Sep 10 00:32:43.633887 containerd[1553]: 2025-09-10 00:32:43.387 [INFO][5228] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Sep 10 00:32:43.633887 containerd[1553]: 2025-09-10 00:32:43.387 [INFO][5228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Sep 10 00:32:43.633887 containerd[1553]: 2025-09-10 00:32:43.511 [INFO][5238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" HandleID="k8s-pod-network.3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Workload="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:43.633887 containerd[1553]: 2025-09-10 00:32:43.511 [INFO][5238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:43.633887 containerd[1553]: 2025-09-10 00:32:43.511 [INFO][5238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:43.633887 containerd[1553]: 2025-09-10 00:32:43.611 [WARNING][5238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" HandleID="k8s-pod-network.3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Workload="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:43.633887 containerd[1553]: 2025-09-10 00:32:43.612 [INFO][5238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" HandleID="k8s-pod-network.3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Workload="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:43.633887 containerd[1553]: 2025-09-10 00:32:43.625 [INFO][5238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:43.633887 containerd[1553]: 2025-09-10 00:32:43.630 [INFO][5228] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Sep 10 00:32:43.634845 containerd[1553]: time="2025-09-10T00:32:43.634051892Z" level=info msg="TearDown network for sandbox \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\" successfully" Sep 10 00:32:43.634845 containerd[1553]: time="2025-09-10T00:32:43.634108858Z" level=info msg="StopPodSandbox for \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\" returns successfully" Sep 10 00:32:43.635032 containerd[1553]: time="2025-09-10T00:32:43.634991416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-954df557d-7jgtm,Uid:0b9b71be-4940-4c7a-8e7a-6616e525febf,Namespace:calico-apiserver,Attempt:1,}" Sep 10 00:32:43.638092 systemd[1]: run-netns-cni\x2da132c032\x2da734\x2d6a0a\x2d4c09\x2d393c0f83443d.mount: Deactivated successfully. Sep 10 00:32:43.676443 kubelet[2646]: I0910 00:32:43.676338 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5kznd" podStartSLOduration=39.676317244 podStartE2EDuration="39.676317244s" podCreationTimestamp="2025-09-10 00:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:32:43.675655581 +0000 UTC m=+45.414753600" watchObservedRunningTime="2025-09-10 00:32:43.676317244 +0000 UTC m=+45.415415243" Sep 10 00:32:43.959155 systemd-networkd[1240]: cali394065eb720: Link UP Sep 10 00:32:43.959448 systemd-networkd[1240]: cali394065eb720: Gained carrier Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.877 [INFO][5249] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0 calico-apiserver-954df557d- calico-apiserver 0b9b71be-4940-4c7a-8e7a-6616e525febf 1072 0 2025-09-10 00:32:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:954df557d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-954df557d-7jgtm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali394065eb720 [] [] }} ContainerID="3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-7jgtm" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--7jgtm-" Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.877 [INFO][5249] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-7jgtm" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.917 [INFO][5263] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" HandleID="k8s-pod-network.3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" Workload="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.918 [INFO][5263] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" HandleID="k8s-pod-network.3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" Workload="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-954df557d-7jgtm", "timestamp":"2025-09-10 00:32:43.916923144 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.918 [INFO][5263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.918 [INFO][5263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.918 [INFO][5263] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.923 [INFO][5263] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" host="localhost" Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.930 [INFO][5263] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.934 [INFO][5263] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.936 [INFO][5263] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.938 [INFO][5263] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.938 [INFO][5263] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" host="localhost" Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.939 [INFO][5263] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.943 [INFO][5263] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" host="localhost" Sep 
10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.949 [INFO][5263] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" host="localhost" Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.949 [INFO][5263] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" host="localhost" Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.950 [INFO][5263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:43.978780 containerd[1553]: 2025-09-10 00:32:43.950 [INFO][5263] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" HandleID="k8s-pod-network.3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" Workload="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:43.979586 containerd[1553]: 2025-09-10 00:32:43.953 [INFO][5249] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-7jgtm" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0", GenerateName:"calico-apiserver-954df557d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b9b71be-4940-4c7a-8e7a-6616e525febf", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"954df557d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-954df557d-7jgtm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali394065eb720", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:43.979586 containerd[1553]: 2025-09-10 00:32:43.954 [INFO][5249] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-7jgtm" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:43.979586 containerd[1553]: 2025-09-10 00:32:43.954 [INFO][5249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali394065eb720 ContainerID="3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" Namespace="calico-apiserver" 
Pod="calico-apiserver-954df557d-7jgtm" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:43.979586 containerd[1553]: 2025-09-10 00:32:43.957 [INFO][5249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-7jgtm" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:43.979586 containerd[1553]: 2025-09-10 00:32:43.963 [INFO][5249] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-7jgtm" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0", GenerateName:"calico-apiserver-954df557d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b9b71be-4940-4c7a-8e7a-6616e525febf", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"954df557d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c", Pod:"calico-apiserver-954df557d-7jgtm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali394065eb720", MAC:"be:d8:0b:7e:90:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:43.979586 containerd[1553]: 2025-09-10 00:32:43.973 [INFO][5249] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c" Namespace="calico-apiserver" Pod="calico-apiserver-954df557d-7jgtm" WorkloadEndpoint="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:44.006713 containerd[1553]: time="2025-09-10T00:32:44.006583165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:32:44.006713 containerd[1553]: time="2025-09-10T00:32:44.006664968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:32:44.006713 containerd[1553]: time="2025-09-10T00:32:44.006704412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:44.007035 containerd[1553]: time="2025-09-10T00:32:44.006852401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:32:44.040375 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:32:44.078789 containerd[1553]: time="2025-09-10T00:32:44.078745811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-954df557d-7jgtm,Uid:0b9b71be-4940-4c7a-8e7a-6616e525febf,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c\"" Sep 10 00:32:44.112457 systemd-networkd[1240]: cali568b9aa7b6f: Gained IPv6LL Sep 10 00:32:44.497333 systemd-networkd[1240]: cali6916278af54: Gained IPv6LL Sep 10 00:32:44.565694 kubelet[2646]: E0910 00:32:44.565650 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:44.944595 systemd-networkd[1240]: calic753e922da7: Gained IPv6LL Sep 10 00:32:45.326574 containerd[1553]: time="2025-09-10T00:32:45.326499935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:45.327329 containerd[1553]: time="2025-09-10T00:32:45.327259632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 10 00:32:45.329100 containerd[1553]: time="2025-09-10T00:32:45.329007175Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:45.331343 containerd[1553]: time="2025-09-10T00:32:45.331299170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:45.332142 containerd[1553]: time="2025-09-10T00:32:45.331930437Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.212428384s" Sep 10 00:32:45.332142 containerd[1553]: time="2025-09-10T00:32:45.331990409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 10 00:32:45.333335 containerd[1553]: time="2025-09-10T00:32:45.333290050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 10 00:32:45.340443 containerd[1553]: time="2025-09-10T00:32:45.340408240Z" level=info msg="CreateContainer within sandbox \"7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 00:32:45.517544 systemd[1]: Started sshd@9-10.0.0.21:22-10.0.0.1:46696.service - OpenSSH per-connection server daemon (10.0.0.1:46696). 
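Annotation: the recurring kubelet dns.go:153 error above is a warning, not a failure. The glibc resolver honors at most three nameserver lines (MAXNS), so when the node's resolv.conf carries more, kubelet applies the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and reports the rest as omitted. A toy version of that truncation follows; the fourth server below is hypothetical, added only to trigger the warning path, and this is not kubelet's actual code:

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit (MAXNS)

// applyLimit keeps the first maxNameservers entries and reports truncation.
func applyLimit(ns []string) (applied []string, truncated bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"} // 8.8.4.4 is hypothetical
	applied, truncated := applyLimit(configured)
	if truncated {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}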
Sep 10 00:32:45.568012 kubelet[2646]: E0910 00:32:45.567978 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:32:45.600320 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 46696 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:32:45.602273 sshd[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:32:45.606833 systemd-logind[1531]: New session 10 of user core. Sep 10 00:32:45.620636 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 10 00:32:45.746326 sshd[5331]: pam_unix(sshd:session): session closed for user core Sep 10 00:32:45.751137 systemd[1]: sshd@9-10.0.0.21:22-10.0.0.1:46696.service: Deactivated successfully. Sep 10 00:32:45.753861 systemd-logind[1531]: Session 10 logged out. Waiting for processes to exit. Sep 10 00:32:45.753994 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 00:32:45.755178 systemd-logind[1531]: Removed session 10. Sep 10 00:32:45.840396 systemd-networkd[1240]: cali394065eb720: Gained IPv6LL Sep 10 00:32:46.533950 containerd[1553]: time="2025-09-10T00:32:46.533886541Z" level=info msg="CreateContainer within sandbox \"7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"dedb6c8d4974462c40af73be1a53180a72891f7c5101dba8e4032a7345a971b1\"" Sep 10 00:32:46.534688 containerd[1553]: time="2025-09-10T00:32:46.534650616Z" level=info msg="StartContainer for \"dedb6c8d4974462c40af73be1a53180a72891f7c5101dba8e4032a7345a971b1\"" Sep 10 00:32:46.910603 containerd[1553]: time="2025-09-10T00:32:46.910531157Z" level=info msg="StartContainer for \"dedb6c8d4974462c40af73be1a53180a72891f7c5101dba8e4032a7345a971b1\" returns successfully" Sep 10 00:32:47.654106 kubelet[2646]: I0910 00:32:47.654020 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b94b4bd57-2vk8t" podStartSLOduration=27.977045417 podStartE2EDuration="32.653990357s" podCreationTimestamp="2025-09-10 00:32:15 +0000 UTC" firstStartedPulling="2025-09-10 00:32:40.656082297 +0000 UTC m=+42.395180296" lastFinishedPulling="2025-09-10 00:32:45.333027237 +0000 UTC m=+47.072125236" observedRunningTime="2025-09-10 00:32:47.606373192 +0000 UTC m=+49.345471181" watchObservedRunningTime="2025-09-10 00:32:47.653990357 +0000 UTC m=+49.393088356" Sep 10 00:32:48.446994 containerd[1553]: time="2025-09-10T00:32:48.446929102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:48.447865 containerd[1553]: time="2025-09-10T00:32:48.447814745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 10 00:32:48.449320 containerd[1553]: time="2025-09-10T00:32:48.449281639Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:48.451791 containerd[1553]: time="2025-09-10T00:32:48.451745245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:48.452433 containerd[1553]: 
time="2025-09-10T00:32:48.452407009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 3.119083195s" Sep 10 00:32:48.452470 containerd[1553]: time="2025-09-10T00:32:48.452438959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 10 00:32:48.454302 containerd[1553]: time="2025-09-10T00:32:48.454091732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 00:32:48.455597 containerd[1553]: time="2025-09-10T00:32:48.455562854Z" level=info msg="CreateContainer within sandbox \"833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 10 00:32:48.476591 containerd[1553]: time="2025-09-10T00:32:48.476545427Z" level=info msg="CreateContainer within sandbox \"833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6fac360170bfe82f61e827db2705d3f597fd9e7d63a87fed1ff4cfe579de3290\"" Sep 10 00:32:48.477292 containerd[1553]: time="2025-09-10T00:32:48.477257183Z" level=info msg="StartContainer for \"6fac360170bfe82f61e827db2705d3f597fd9e7d63a87fed1ff4cfe579de3290\"" Sep 10 00:32:48.560553 containerd[1553]: time="2025-09-10T00:32:48.560490855Z" level=info msg="StartContainer for \"6fac360170bfe82f61e827db2705d3f597fd9e7d63a87fed1ff4cfe579de3290\" returns successfully" Sep 10 00:32:50.284916 systemd[1]: run-containerd-runc-k8s.io-94a87a630f6df55b0433b8c50d5464f40e82f9be1f741bb496ecafc5a498d8fc-runc.YnJQ9d.mount: Deactivated successfully. Sep 10 00:32:50.753468 systemd[1]: Started sshd@10-10.0.0.21:22-10.0.0.1:35016.service - OpenSSH per-connection server daemon (10.0.0.1:35016). Sep 10 00:32:51.020529 sshd[5482]: Accepted publickey for core from 10.0.0.1 port 35016 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:32:51.022608 sshd[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:32:51.032319 systemd-logind[1531]: New session 11 of user core. Sep 10 00:32:51.036724 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 00:32:51.210595 sshd[5482]: pam_unix(sshd:session): session closed for user core Sep 10 00:32:51.218601 systemd[1]: Started sshd@11-10.0.0.21:22-10.0.0.1:35022.service - OpenSSH per-connection server daemon (10.0.0.1:35022). Sep 10 00:32:51.219117 systemd[1]: sshd@10-10.0.0.21:22-10.0.0.1:35016.service: Deactivated successfully. Sep 10 00:32:51.224167 systemd-logind[1531]: Session 11 logged out. Waiting for processes to exit. Sep 10 00:32:51.225922 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 00:32:51.227684 systemd-logind[1531]: Removed session 11. Sep 10 00:32:51.256669 sshd[5500]: Accepted publickey for core from 10.0.0.1 port 35022 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:32:51.258766 sshd[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:32:51.264521 systemd-logind[1531]: New session 12 of user core. Sep 10 00:32:51.272710 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 10 00:32:51.364402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2992511766.mount: Deactivated successfully. Sep 10 00:32:51.715504 sshd[5500]: pam_unix(sshd:session): session closed for user core Sep 10 00:32:51.721516 systemd[1]: Started sshd@12-10.0.0.21:22-10.0.0.1:35038.service - OpenSSH per-connection server daemon (10.0.0.1:35038). Sep 10 00:32:51.722051 systemd[1]: sshd@11-10.0.0.21:22-10.0.0.1:35022.service: Deactivated successfully. Sep 10 00:32:51.725368 systemd-logind[1531]: Session 12 logged out. Waiting for processes to exit. Sep 10 00:32:51.726346 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 00:32:51.727266 systemd-logind[1531]: Removed session 12. Sep 10 00:32:51.762557 sshd[5513]: Accepted publickey for core from 10.0.0.1 port 35038 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:32:51.764408 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:32:51.768717 systemd-logind[1531]: New session 13 of user core. Sep 10 00:32:51.778554 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 00:32:51.910767 sshd[5513]: pam_unix(sshd:session): session closed for user core Sep 10 00:32:51.916144 systemd[1]: sshd@12-10.0.0.21:22-10.0.0.1:35038.service: Deactivated successfully. Sep 10 00:32:51.919815 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 00:32:51.921615 systemd-logind[1531]: Session 13 logged out. Waiting for processes to exit. Sep 10 00:32:51.923385 systemd-logind[1531]: Removed session 13. Sep 10 00:32:52.352082 containerd[1553]: time="2025-09-10T00:32:52.352015524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:52.352939 containerd[1553]: time="2025-09-10T00:32:52.352884204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 10 00:32:52.354004 containerd[1553]: time="2025-09-10T00:32:52.353968229Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:52.365110 containerd[1553]: time="2025-09-10T00:32:52.365057197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:52.365770 containerd[1553]: time="2025-09-10T00:32:52.365721234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.911524645s" Sep 10 00:32:52.365770 containerd[1553]: time="2025-09-10T00:32:52.365766550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 10 00:32:52.367045 containerd[1553]: time="2025-09-10T00:32:52.366884638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 00:32:52.368396 containerd[1553]: time="2025-09-10T00:32:52.368367733Z" level=info msg="CreateContainer within sandbox 
\"eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 10 00:32:52.382849 containerd[1553]: time="2025-09-10T00:32:52.382793816Z" level=info msg="CreateContainer within sandbox \"eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"94f356cff0be2a5d92a94c9c6ef0fdd8ea9a2f262597344a96633ae69eeb2406\"" Sep 10 00:32:52.383914 containerd[1553]: time="2025-09-10T00:32:52.383850088Z" level=info msg="StartContainer for \"94f356cff0be2a5d92a94c9c6ef0fdd8ea9a2f262597344a96633ae69eeb2406\"" Sep 10 00:32:52.459757 containerd[1553]: time="2025-09-10T00:32:52.459704054Z" level=info msg="StartContainer for \"94f356cff0be2a5d92a94c9c6ef0fdd8ea9a2f262597344a96633ae69eeb2406\" returns successfully" Sep 10 00:32:52.690293 kubelet[2646]: I0910 00:32:52.689979 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-cwk66" podStartSLOduration=29.449990958 podStartE2EDuration="38.68994429s" podCreationTimestamp="2025-09-10 00:32:14 +0000 UTC" firstStartedPulling="2025-09-10 00:32:43.126705853 +0000 UTC m=+44.865803852" lastFinishedPulling="2025-09-10 00:32:52.366659185 +0000 UTC m=+54.105757184" observedRunningTime="2025-09-10 00:32:52.687698684 +0000 UTC m=+54.426796693" watchObservedRunningTime="2025-09-10 00:32:52.68994429 +0000 UTC m=+54.429042289" Sep 10 00:32:55.830993 containerd[1553]: time="2025-09-10T00:32:55.830919025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:55.832266 containerd[1553]: time="2025-09-10T00:32:55.832197045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 10 00:32:55.847359 containerd[1553]: time="2025-09-10T00:32:55.847292890Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:55.849796 containerd[1553]: time="2025-09-10T00:32:55.849750784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:55.850608 containerd[1553]: time="2025-09-10T00:32:55.850559613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.483633627s" Sep 10 00:32:55.850608 containerd[1553]: time="2025-09-10T00:32:55.850594037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 10 00:32:55.851869 containerd[1553]: time="2025-09-10T00:32:55.851831750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 00:32:55.853146 containerd[1553]: time="2025-09-10T00:32:55.853116802Z" level=info msg="CreateContainer within sandbox \"7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 
10 00:32:55.879357 containerd[1553]: time="2025-09-10T00:32:55.879288839Z" level=info msg="CreateContainer within sandbox \"7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ffc886ed266e8af22681c7bc0ba47f3fa850df83c1a9939194c974938e807308\"" Sep 10 00:32:55.880918 containerd[1553]: time="2025-09-10T00:32:55.880885456Z" level=info msg="StartContainer for \"ffc886ed266e8af22681c7bc0ba47f3fa850df83c1a9939194c974938e807308\"" Sep 10 00:32:56.275513 containerd[1553]: time="2025-09-10T00:32:56.275306432Z" level=info msg="StartContainer for \"ffc886ed266e8af22681c7bc0ba47f3fa850df83c1a9939194c974938e807308\" returns successfully" Sep 10 00:32:56.308073 containerd[1553]: time="2025-09-10T00:32:56.308005948Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:56.310519 containerd[1553]: time="2025-09-10T00:32:56.310422675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 10 00:32:56.313360 containerd[1553]: time="2025-09-10T00:32:56.313224263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 461.341218ms" Sep 10 00:32:56.313360 containerd[1553]: time="2025-09-10T00:32:56.313311286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 10 00:32:56.316770 containerd[1553]: time="2025-09-10T00:32:56.316639885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 10 00:32:56.317677 containerd[1553]: time="2025-09-10T00:32:56.317630825Z" level=info msg="CreateContainer within sandbox \"3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 00:32:56.339889 containerd[1553]: time="2025-09-10T00:32:56.339547642Z" level=info msg="CreateContainer within sandbox \"3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bff9f1338ef851729a223e1cd970fc619ecef10ad702266742589ed1b8338a5c\"" Sep 10 00:32:56.340622 containerd[1553]: time="2025-09-10T00:32:56.340582534Z" level=info msg="StartContainer for \"bff9f1338ef851729a223e1cd970fc619ecef10ad702266742589ed1b8338a5c\"" Sep 10 00:32:56.509102 containerd[1553]: time="2025-09-10T00:32:56.509041912Z" level=info msg="StartContainer for \"bff9f1338ef851729a223e1cd970fc619ecef10ad702266742589ed1b8338a5c\" returns successfully" Sep 10 00:32:56.633563 kubelet[2646]: I0910 00:32:56.633488 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-954df557d-d5768" podStartSLOduration=30.982373562 podStartE2EDuration="43.633470653s" podCreationTimestamp="2025-09-10 00:32:13 +0000 UTC" firstStartedPulling="2025-09-10 00:32:43.200553619 +0000 UTC m=+44.939651618" lastFinishedPulling="2025-09-10 00:32:55.85165071 +0000 UTC m=+57.590748709" observedRunningTime="2025-09-10 00:32:56.619493288 +0000 UTC m=+58.358591288" 
watchObservedRunningTime="2025-09-10 00:32:56.633470653 +0000 UTC m=+58.372568652" Sep 10 00:32:56.925510 systemd[1]: Started sshd@13-10.0.0.21:22-10.0.0.1:35046.service - OpenSSH per-connection server daemon (10.0.0.1:35046). Sep 10 00:32:56.974035 sshd[5744]: Accepted publickey for core from 10.0.0.1 port 35046 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:32:56.976896 sshd[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:32:56.985252 systemd-logind[1531]: New session 14 of user core. Sep 10 00:32:56.991901 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 10 00:32:57.396680 sshd[5744]: pam_unix(sshd:session): session closed for user core Sep 10 00:32:57.403606 systemd[1]: sshd@13-10.0.0.21:22-10.0.0.1:35046.service: Deactivated successfully. Sep 10 00:32:57.409310 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 00:32:57.410460 systemd-logind[1531]: Session 14 logged out. Waiting for processes to exit. Sep 10 00:32:57.411919 systemd-logind[1531]: Removed session 14. Sep 10 00:32:57.417741 kubelet[2646]: I0910 00:32:57.417667 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-954df557d-7jgtm" podStartSLOduration=32.183468597 podStartE2EDuration="44.417635362s" podCreationTimestamp="2025-09-10 00:32:13 +0000 UTC" firstStartedPulling="2025-09-10 00:32:44.080281797 +0000 UTC m=+45.819379796" lastFinishedPulling="2025-09-10 00:32:56.314448561 +0000 UTC m=+58.053546561" observedRunningTime="2025-09-10 00:32:56.633848773 +0000 UTC m=+58.372946802" watchObservedRunningTime="2025-09-10 00:32:57.417635362 +0000 UTC m=+59.156733361" Sep 10 00:32:58.334377 containerd[1553]: time="2025-09-10T00:32:58.334330523Z" level=info msg="StopPodSandbox for \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\"" Sep 10 00:32:58.869637 containerd[1553]: 2025-09-10 00:32:58.820 [WARNING][5781] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--954df557d--d5768-eth0", GenerateName:"calico-apiserver-954df557d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac80b016-97cf-437e-8b81-8fe6c2942f59", ResourceVersion:"1205", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"954df557d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c", Pod:"calico-apiserver-954df557d-d5768", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic753e922da7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:58.869637 containerd[1553]: 2025-09-10 00:32:58.820 [INFO][5781] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Sep 10 00:32:58.869637 containerd[1553]: 2025-09-10 00:32:58.820 [INFO][5781] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" iface="eth0" netns="" Sep 10 00:32:58.869637 containerd[1553]: 2025-09-10 00:32:58.820 [INFO][5781] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Sep 10 00:32:58.869637 containerd[1553]: 2025-09-10 00:32:58.820 [INFO][5781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Sep 10 00:32:58.869637 containerd[1553]: 2025-09-10 00:32:58.852 [INFO][5797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" HandleID="k8s-pod-network.6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Workload="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:58.869637 containerd[1553]: 2025-09-10 00:32:58.853 [INFO][5797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:58.869637 containerd[1553]: 2025-09-10 00:32:58.853 [INFO][5797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:58.869637 containerd[1553]: 2025-09-10 00:32:58.858 [WARNING][5797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" HandleID="k8s-pod-network.6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Workload="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:58.869637 containerd[1553]: 2025-09-10 00:32:58.858 [INFO][5797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" HandleID="k8s-pod-network.6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Workload="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:58.869637 containerd[1553]: 2025-09-10 00:32:58.859 [INFO][5797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:58.869637 containerd[1553]: 2025-09-10 00:32:58.866 [INFO][5781] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Sep 10 00:32:58.878471 containerd[1553]: time="2025-09-10T00:32:58.878403931Z" level=info msg="TearDown network for sandbox \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\" successfully" Sep 10 00:32:58.878471 containerd[1553]: time="2025-09-10T00:32:58.878457512Z" level=info msg="StopPodSandbox for \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\" returns successfully" Sep 10 00:32:58.925344 containerd[1553]: time="2025-09-10T00:32:58.925113889Z" level=info msg="RemovePodSandbox for \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\"" Sep 10 00:32:58.927617 containerd[1553]: time="2025-09-10T00:32:58.927575358Z" level=info msg="Forcibly stopping sandbox \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\"" Sep 10 00:32:58.959777 containerd[1553]: time="2025-09-10T00:32:58.959699711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:58.960379 containerd[1553]: time="2025-09-10T00:32:58.960339070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 10 00:32:58.980433 containerd[1553]: time="2025-09-10T00:32:58.980358824Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:58.982655 containerd[1553]: time="2025-09-10T00:32:58.982577748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:32:58.984289 containerd[1553]: time="2025-09-10T00:32:58.983587645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.666885453s" Sep 10 00:32:58.984289 containerd[1553]: time="2025-09-10T00:32:58.983623622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 10 00:32:59.009667 containerd[1553]: 
time="2025-09-10T00:32:59.009335929Z" level=info msg="CreateContainer within sandbox \"833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 10 00:32:59.020826 containerd[1553]: 2025-09-10 00:32:58.974 [WARNING][5814] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--954df557d--d5768-eth0", GenerateName:"calico-apiserver-954df557d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac80b016-97cf-437e-8b81-8fe6c2942f59", ResourceVersion:"1205", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"954df557d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7435b885dd9b3671a81953898554bcf1a56f42c3a60a1090bdee64de91bb064c", Pod:"calico-apiserver-954df557d-d5768", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic753e922da7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:59.020826 containerd[1553]: 2025-09-10 00:32:58.974 [INFO][5814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Sep 10 00:32:59.020826 containerd[1553]: 2025-09-10 00:32:58.974 [INFO][5814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" iface="eth0" netns="" Sep 10 00:32:59.020826 containerd[1553]: 2025-09-10 00:32:58.974 [INFO][5814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Sep 10 00:32:59.020826 containerd[1553]: 2025-09-10 00:32:58.974 [INFO][5814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Sep 10 00:32:59.020826 containerd[1553]: 2025-09-10 00:32:59.005 [INFO][5822] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" HandleID="k8s-pod-network.6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Workload="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:59.020826 containerd[1553]: 2025-09-10 00:32:59.005 [INFO][5822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 10 00:32:59.020826 containerd[1553]: 2025-09-10 00:32:59.005 [INFO][5822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:59.020826 containerd[1553]: 2025-09-10 00:32:59.012 [WARNING][5822] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" HandleID="k8s-pod-network.6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Workload="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:59.020826 containerd[1553]: 2025-09-10 00:32:59.012 [INFO][5822] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" HandleID="k8s-pod-network.6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Workload="localhost-k8s-calico--apiserver--954df557d--d5768-eth0" Sep 10 00:32:59.020826 containerd[1553]: 2025-09-10 00:32:59.014 [INFO][5822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:59.020826 containerd[1553]: 2025-09-10 00:32:59.017 [INFO][5814] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4" Sep 10 00:32:59.021387 containerd[1553]: time="2025-09-10T00:32:59.020878081Z" level=info msg="TearDown network for sandbox \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\" successfully" Sep 10 00:32:59.032699 containerd[1553]: time="2025-09-10T00:32:59.032652097Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:32:59.037290 containerd[1553]: time="2025-09-10T00:32:59.037030104Z" level=info msg="CreateContainer within sandbox \"833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9b57fcf25d18e52c8c94cd198d496632c75bca04cd8de1ce5938b838523b83bd\"" Sep 10 00:32:59.038931 containerd[1553]: time="2025-09-10T00:32:59.037755987Z" level=info msg="StartContainer for \"9b57fcf25d18e52c8c94cd198d496632c75bca04cd8de1ce5938b838523b83bd\"" Sep 10 00:32:59.039570 containerd[1553]: time="2025-09-10T00:32:59.039527341Z" level=info msg="RemovePodSandbox \"6253316416806f31e0cff0f9fdb13bb29d354f69a1928835f9cefa26179ac9d4\" returns successfully" Sep 10 00:32:59.048125 containerd[1553]: time="2025-09-10T00:32:59.047993971Z" level=info msg="StopPodSandbox for \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\"" Sep 10 00:32:59.105926 containerd[1553]: time="2025-09-10T00:32:59.105700794Z" level=info msg="StartContainer for \"9b57fcf25d18e52c8c94cd198d496632c75bca04cd8de1ce5938b838523b83bd\" returns successfully" Sep 10 00:32:59.145871 containerd[1553]: 2025-09-10 00:32:59.107 [WARNING][5851] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0", GenerateName:"calico-kube-controllers-b94b4bd57-", Namespace:"calico-system", SelfLink:"", UID:"e74294c4-7f70-425e-bbde-a3f523072902", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b94b4bd57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480", Pod:"calico-kube-controllers-b94b4bd57-2vk8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia324af93cbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:59.145871 containerd[1553]: 2025-09-10 00:32:59.108 [INFO][5851] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Sep 10 00:32:59.145871 containerd[1553]: 2025-09-10 00:32:59.108 [INFO][5851] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" iface="eth0" netns="" Sep 10 00:32:59.145871 containerd[1553]: 2025-09-10 00:32:59.108 [INFO][5851] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Sep 10 00:32:59.145871 containerd[1553]: 2025-09-10 00:32:59.108 [INFO][5851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Sep 10 00:32:59.145871 containerd[1553]: 2025-09-10 00:32:59.132 [INFO][5879] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" HandleID="k8s-pod-network.05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Workload="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:59.145871 containerd[1553]: 2025-09-10 00:32:59.132 [INFO][5879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:59.145871 containerd[1553]: 2025-09-10 00:32:59.132 [INFO][5879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:59.145871 containerd[1553]: 2025-09-10 00:32:59.138 [WARNING][5879] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" HandleID="k8s-pod-network.05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Workload="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:59.145871 containerd[1553]: 2025-09-10 00:32:59.138 [INFO][5879] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" HandleID="k8s-pod-network.05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Workload="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:59.145871 containerd[1553]: 2025-09-10 00:32:59.139 [INFO][5879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:59.145871 containerd[1553]: 2025-09-10 00:32:59.142 [INFO][5851] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Sep 10 00:32:59.145871 containerd[1553]: time="2025-09-10T00:32:59.145832491Z" level=info msg="TearDown network for sandbox \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\" successfully" Sep 10 00:32:59.145871 containerd[1553]: time="2025-09-10T00:32:59.145860985Z" level=info msg="StopPodSandbox for \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\" returns successfully" Sep 10 00:32:59.146397 containerd[1553]: time="2025-09-10T00:32:59.146343640Z" level=info msg="RemovePodSandbox for \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\"" Sep 10 00:32:59.146397 containerd[1553]: time="2025-09-10T00:32:59.146369620Z" level=info msg="Forcibly stopping sandbox \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\"" Sep 10 00:32:59.221150 containerd[1553]: 2025-09-10 00:32:59.181 [WARNING][5900] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0", GenerateName:"calico-kube-controllers-b94b4bd57-", Namespace:"calico-system", SelfLink:"", UID:"e74294c4-7f70-425e-bbde-a3f523072902", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b94b4bd57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7de755c72acee3feaca14d68af691b75bee3614473f844bceba402b000461480", Pod:"calico-kube-controllers-b94b4bd57-2vk8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia324af93cbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:59.221150 containerd[1553]: 2025-09-10 00:32:59.182 [INFO][5900] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Sep 10 00:32:59.221150 containerd[1553]: 2025-09-10 00:32:59.182 [INFO][5900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" iface="eth0" netns="" Sep 10 00:32:59.221150 containerd[1553]: 2025-09-10 00:32:59.182 [INFO][5900] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Sep 10 00:32:59.221150 containerd[1553]: 2025-09-10 00:32:59.182 [INFO][5900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Sep 10 00:32:59.221150 containerd[1553]: 2025-09-10 00:32:59.206 [INFO][5909] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" HandleID="k8s-pod-network.05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Workload="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:59.221150 containerd[1553]: 2025-09-10 00:32:59.206 [INFO][5909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:59.221150 containerd[1553]: 2025-09-10 00:32:59.206 [INFO][5909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:59.221150 containerd[1553]: 2025-09-10 00:32:59.212 [WARNING][5909] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" HandleID="k8s-pod-network.05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Workload="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:59.221150 containerd[1553]: 2025-09-10 00:32:59.212 [INFO][5909] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" HandleID="k8s-pod-network.05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Workload="localhost-k8s-calico--kube--controllers--b94b4bd57--2vk8t-eth0" Sep 10 00:32:59.221150 containerd[1553]: 2025-09-10 00:32:59.214 [INFO][5909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:59.221150 containerd[1553]: 2025-09-10 00:32:59.217 [INFO][5900] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b" Sep 10 00:32:59.221641 containerd[1553]: time="2025-09-10T00:32:59.221206149Z" level=info msg="TearDown network for sandbox \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\" successfully" Sep 10 00:32:59.225262 containerd[1553]: time="2025-09-10T00:32:59.225222177Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:32:59.225333 containerd[1553]: time="2025-09-10T00:32:59.225301075Z" level=info msg="RemovePodSandbox \"05a17e5134dcc42efc4fe65cd99b39f81a5630c3b9480037c07571cb8472929b\" returns successfully" Sep 10 00:32:59.225794 containerd[1553]: time="2025-09-10T00:32:59.225760918Z" level=info msg="StopPodSandbox for \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\"" Sep 10 00:32:59.307142 containerd[1553]: 2025-09-10 00:32:59.269 [WARNING][5926] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" WorkloadEndpoint="localhost-k8s-whisker--7dcf94495f--gb68x-eth0" Sep 10 00:32:59.307142 containerd[1553]: 2025-09-10 00:32:59.269 [INFO][5926] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Sep 10 00:32:59.307142 containerd[1553]: 2025-09-10 00:32:59.270 [INFO][5926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" iface="eth0" netns="" Sep 10 00:32:59.307142 containerd[1553]: 2025-09-10 00:32:59.270 [INFO][5926] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Sep 10 00:32:59.307142 containerd[1553]: 2025-09-10 00:32:59.270 [INFO][5926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Sep 10 00:32:59.307142 containerd[1553]: 2025-09-10 00:32:59.292 [INFO][5934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" HandleID="k8s-pod-network.421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Workload="localhost-k8s-whisker--7dcf94495f--gb68x-eth0" Sep 10 00:32:59.307142 containerd[1553]: 2025-09-10 00:32:59.292 [INFO][5934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:59.307142 containerd[1553]: 2025-09-10 00:32:59.292 [INFO][5934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:59.307142 containerd[1553]: 2025-09-10 00:32:59.299 [WARNING][5934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" HandleID="k8s-pod-network.421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Workload="localhost-k8s-whisker--7dcf94495f--gb68x-eth0" Sep 10 00:32:59.307142 containerd[1553]: 2025-09-10 00:32:59.299 [INFO][5934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" HandleID="k8s-pod-network.421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Workload="localhost-k8s-whisker--7dcf94495f--gb68x-eth0" Sep 10 00:32:59.307142 containerd[1553]: 2025-09-10 00:32:59.300 [INFO][5934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:59.307142 containerd[1553]: 2025-09-10 00:32:59.303 [INFO][5926] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Sep 10 00:32:59.307142 containerd[1553]: time="2025-09-10T00:32:59.307115031Z" level=info msg="TearDown network for sandbox \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\" successfully" Sep 10 00:32:59.307142 containerd[1553]: time="2025-09-10T00:32:59.307147211Z" level=info msg="StopPodSandbox for \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\" returns successfully" Sep 10 00:32:59.308196 containerd[1553]: time="2025-09-10T00:32:59.307770221Z" level=info msg="RemovePodSandbox for \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\"" Sep 10 00:32:59.308196 containerd[1553]: time="2025-09-10T00:32:59.307815686Z" level=info msg="Forcibly stopping sandbox \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\"" Sep 10 00:32:59.379072 containerd[1553]: 2025-09-10 00:32:59.343 [WARNING][5952] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" WorkloadEndpoint="localhost-k8s-whisker--7dcf94495f--gb68x-eth0" Sep 10 00:32:59.379072 containerd[1553]: 2025-09-10 00:32:59.343 [INFO][5952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Sep 10 00:32:59.379072 containerd[1553]: 2025-09-10 00:32:59.343 [INFO][5952] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" iface="eth0" netns="" Sep 10 00:32:59.379072 containerd[1553]: 2025-09-10 00:32:59.343 [INFO][5952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Sep 10 00:32:59.379072 containerd[1553]: 2025-09-10 00:32:59.343 [INFO][5952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Sep 10 00:32:59.379072 containerd[1553]: 2025-09-10 00:32:59.366 [INFO][5960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" HandleID="k8s-pod-network.421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Workload="localhost-k8s-whisker--7dcf94495f--gb68x-eth0" Sep 10 00:32:59.379072 containerd[1553]: 2025-09-10 00:32:59.366 [INFO][5960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:59.379072 containerd[1553]: 2025-09-10 00:32:59.366 [INFO][5960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:59.379072 containerd[1553]: 2025-09-10 00:32:59.371 [WARNING][5960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" HandleID="k8s-pod-network.421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Workload="localhost-k8s-whisker--7dcf94495f--gb68x-eth0" Sep 10 00:32:59.379072 containerd[1553]: 2025-09-10 00:32:59.371 [INFO][5960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" HandleID="k8s-pod-network.421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Workload="localhost-k8s-whisker--7dcf94495f--gb68x-eth0" Sep 10 00:32:59.379072 containerd[1553]: 2025-09-10 00:32:59.372 [INFO][5960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:59.379072 containerd[1553]: 2025-09-10 00:32:59.375 [INFO][5952] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c" Sep 10 00:32:59.379787 containerd[1553]: time="2025-09-10T00:32:59.379109828Z" level=info msg="TearDown network for sandbox \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\" successfully" Sep 10 00:32:59.383227 containerd[1553]: time="2025-09-10T00:32:59.383203923Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:32:59.383292 containerd[1553]: time="2025-09-10T00:32:59.383276599Z" level=info msg="RemovePodSandbox \"421400929666ce1c4ff6543d872b10022f4081e0581df9a546f70829b76a2b7c\" returns successfully" Sep 10 00:32:59.383806 containerd[1553]: time="2025-09-10T00:32:59.383782087Z" level=info msg="StopPodSandbox for \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\"" Sep 10 00:32:59.452428 containerd[1553]: 2025-09-10 00:32:59.416 [WARNING][5977] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"08e6a60c-fbd0-4ac8-87e2-26029f752560", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620", Pod:"coredns-7c65d6cfc9-5kznd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6916278af54", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:59.452428 containerd[1553]: 2025-09-10 00:32:59.416 [INFO][5977] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Sep 10 00:32:59.452428 containerd[1553]: 2025-09-10 00:32:59.416 [INFO][5977] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" iface="eth0" netns="" Sep 10 00:32:59.452428 containerd[1553]: 2025-09-10 00:32:59.416 [INFO][5977] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Sep 10 00:32:59.452428 containerd[1553]: 2025-09-10 00:32:59.416 [INFO][5977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Sep 10 00:32:59.452428 containerd[1553]: 2025-09-10 00:32:59.439 [INFO][5985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" HandleID="k8s-pod-network.cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Workload="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:59.452428 containerd[1553]: 2025-09-10 00:32:59.439 [INFO][5985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:59.452428 containerd[1553]: 2025-09-10 00:32:59.439 [INFO][5985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:32:59.452428 containerd[1553]: 2025-09-10 00:32:59.444 [WARNING][5985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" HandleID="k8s-pod-network.cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Workload="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:59.452428 containerd[1553]: 2025-09-10 00:32:59.444 [INFO][5985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" HandleID="k8s-pod-network.cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Workload="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:59.452428 containerd[1553]: 2025-09-10 00:32:59.446 [INFO][5985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:59.452428 containerd[1553]: 2025-09-10 00:32:59.449 [INFO][5977] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Sep 10 00:32:59.452428 containerd[1553]: time="2025-09-10T00:32:59.452395820Z" level=info msg="TearDown network for sandbox \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\" successfully" Sep 10 00:32:59.452428 containerd[1553]: time="2025-09-10T00:32:59.452423421Z" level=info msg="StopPodSandbox for \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\" returns successfully" Sep 10 00:32:59.453149 containerd[1553]: time="2025-09-10T00:32:59.452820556Z" level=info msg="RemovePodSandbox for \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\"" Sep 10 00:32:59.453149 containerd[1553]: time="2025-09-10T00:32:59.452848379Z" level=info msg="Forcibly stopping sandbox \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\"" Sep 10 00:32:59.462149 kubelet[2646]: I0910 00:32:59.462100 2646 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 10 00:32:59.462849 kubelet[2646]: I0910 00:32:59.462828 2646 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 10 00:32:59.533062 containerd[1553]: 2025-09-10 00:32:59.492 [WARNING][6002] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"08e6a60c-fbd0-4ac8-87e2-26029f752560", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e482f39f8645f1cf1d828a0141c056e73f77b5d069f9a5dc2af8438a79b3f620", Pod:"coredns-7c65d6cfc9-5kznd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6916278af54", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:59.533062 containerd[1553]: 2025-09-10 00:32:59.492 [INFO][6002] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Sep 10 00:32:59.533062 containerd[1553]: 2025-09-10 00:32:59.492 [INFO][6002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" iface="eth0" netns="" Sep 10 00:32:59.533062 containerd[1553]: 2025-09-10 00:32:59.493 [INFO][6002] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Sep 10 00:32:59.533062 containerd[1553]: 2025-09-10 00:32:59.493 [INFO][6002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Sep 10 00:32:59.533062 containerd[1553]: 2025-09-10 00:32:59.517 [INFO][6011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" HandleID="k8s-pod-network.cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Workload="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:59.533062 containerd[1553]: 2025-09-10 00:32:59.517 [INFO][6011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:59.533062 containerd[1553]: 2025-09-10 00:32:59.517 [INFO][6011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:32:59.533062 containerd[1553]: 2025-09-10 00:32:59.522 [WARNING][6011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" HandleID="k8s-pod-network.cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Workload="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:59.533062 containerd[1553]: 2025-09-10 00:32:59.522 [INFO][6011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" HandleID="k8s-pod-network.cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Workload="localhost-k8s-coredns--7c65d6cfc9--5kznd-eth0" Sep 10 00:32:59.533062 containerd[1553]: 2025-09-10 00:32:59.523 [INFO][6011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:59.533062 containerd[1553]: 2025-09-10 00:32:59.526 [INFO][6002] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974" Sep 10 00:32:59.533545 containerd[1553]: time="2025-09-10T00:32:59.533112155Z" level=info msg="TearDown network for sandbox \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\" successfully" Sep 10 00:32:59.537260 containerd[1553]: time="2025-09-10T00:32:59.537220806Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:32:59.537326 containerd[1553]: time="2025-09-10T00:32:59.537292911Z" level=info msg="RemovePodSandbox \"cd4eafc0d09a8b0819b32c31a987803da5e56a25f8dcfdc74c2bd1e719871974\" returns successfully" Sep 10 00:32:59.537872 containerd[1553]: time="2025-09-10T00:32:59.537850658Z" level=info msg="StopPodSandbox for \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\"" Sep 10 00:32:59.614943 containerd[1553]: 2025-09-10 00:32:59.571 [WARNING][6029] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a3dd0117-a752-4081-b55b-d575ef8b3051", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485", Pod:"coredns-7c65d6cfc9-9gf45", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3e8b23db92", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:59.614943 containerd[1553]: 2025-09-10 00:32:59.571 [INFO][6029] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Sep 10 00:32:59.614943 containerd[1553]: 2025-09-10 00:32:59.571 [INFO][6029] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" iface="eth0" netns="" Sep 10 00:32:59.614943 containerd[1553]: 2025-09-10 00:32:59.571 [INFO][6029] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Sep 10 00:32:59.614943 containerd[1553]: 2025-09-10 00:32:59.571 [INFO][6029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Sep 10 00:32:59.614943 containerd[1553]: 2025-09-10 00:32:59.600 [INFO][6038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" HandleID="k8s-pod-network.bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Workload="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:59.614943 containerd[1553]: 2025-09-10 00:32:59.600 [INFO][6038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:59.614943 containerd[1553]: 2025-09-10 00:32:59.600 [INFO][6038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:32:59.614943 containerd[1553]: 2025-09-10 00:32:59.607 [WARNING][6038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" HandleID="k8s-pod-network.bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Workload="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:59.614943 containerd[1553]: 2025-09-10 00:32:59.607 [INFO][6038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" HandleID="k8s-pod-network.bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Workload="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:59.614943 containerd[1553]: 2025-09-10 00:32:59.608 [INFO][6038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:59.614943 containerd[1553]: 2025-09-10 00:32:59.611 [INFO][6029] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Sep 10 00:32:59.615428 containerd[1553]: time="2025-09-10T00:32:59.614988628Z" level=info msg="TearDown network for sandbox \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\" successfully" Sep 10 00:32:59.615428 containerd[1553]: time="2025-09-10T00:32:59.615016941Z" level=info msg="StopPodSandbox for \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\" returns successfully" Sep 10 00:32:59.615709 containerd[1553]: time="2025-09-10T00:32:59.615656241Z" level=info msg="RemovePodSandbox for \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\"" Sep 10 00:32:59.615741 containerd[1553]: time="2025-09-10T00:32:59.615718799Z" level=info msg="Forcibly stopping sandbox \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\"" Sep 10 00:32:59.643784 kubelet[2646]: I0910 00:32:59.643696 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lkztn" podStartSLOduration=27.324834501 podStartE2EDuration="44.643674745s" podCreationTimestamp="2025-09-10 00:32:15 +0000 UTC" firstStartedPulling="2025-09-10 00:32:41.688025048 +0000 UTC m=+43.427123047" lastFinishedPulling="2025-09-10 00:32:59.006865292 +0000 UTC m=+60.745963291" observedRunningTime="2025-09-10 00:32:59.642394903 +0000 UTC m=+61.381492922" watchObservedRunningTime="2025-09-10 00:32:59.643674745 +0000 UTC m=+61.382772744" Sep 10 00:32:59.696896 containerd[1553]: 2025-09-10 00:32:59.658 [WARNING][6055] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a3dd0117-a752-4081-b55b-d575ef8b3051", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4039b149484396cacfdbb6e097e98b8e4f4135ee681913ddad6b917a415f4485", Pod:"coredns-7c65d6cfc9-9gf45", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3e8b23db92", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:59.696896 containerd[1553]: 2025-09-10 00:32:59.659 [INFO][6055] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Sep 10 00:32:59.696896 containerd[1553]: 2025-09-10 00:32:59.659 [INFO][6055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" iface="eth0" netns="" Sep 10 00:32:59.696896 containerd[1553]: 2025-09-10 00:32:59.659 [INFO][6055] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Sep 10 00:32:59.696896 containerd[1553]: 2025-09-10 00:32:59.659 [INFO][6055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Sep 10 00:32:59.696896 containerd[1553]: 2025-09-10 00:32:59.684 [INFO][6065] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" HandleID="k8s-pod-network.bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Workload="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:59.696896 containerd[1553]: 2025-09-10 00:32:59.684 [INFO][6065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:59.696896 containerd[1553]: 2025-09-10 00:32:59.684 [INFO][6065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:32:59.696896 containerd[1553]: 2025-09-10 00:32:59.689 [WARNING][6065] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" HandleID="k8s-pod-network.bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Workload="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:59.696896 containerd[1553]: 2025-09-10 00:32:59.689 [INFO][6065] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" HandleID="k8s-pod-network.bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Workload="localhost-k8s-coredns--7c65d6cfc9--9gf45-eth0" Sep 10 00:32:59.696896 containerd[1553]: 2025-09-10 00:32:59.691 [INFO][6065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:59.696896 containerd[1553]: 2025-09-10 00:32:59.693 [INFO][6055] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc" Sep 10 00:32:59.697394 containerd[1553]: time="2025-09-10T00:32:59.696995136Z" level=info msg="TearDown network for sandbox \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\" successfully" Sep 10 00:32:59.702159 containerd[1553]: time="2025-09-10T00:32:59.702102370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:32:59.702240 containerd[1553]: time="2025-09-10T00:32:59.702206816Z" level=info msg="RemovePodSandbox \"bb40c8618e42c4054be687294465a07009174521c3d183bbcc4f215a280f36dc\" returns successfully" Sep 10 00:32:59.702894 containerd[1553]: time="2025-09-10T00:32:59.702795612Z" level=info msg="StopPodSandbox for \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\"" Sep 10 00:32:59.772887 containerd[1553]: 2025-09-10 00:32:59.736 [WARNING][6083] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0", GenerateName:"calico-apiserver-954df557d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b9b71be-4940-4c7a-8e7a-6616e525febf", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"954df557d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c", Pod:"calico-apiserver-954df557d-7jgtm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali394065eb720", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:59.772887 containerd[1553]: 2025-09-10 00:32:59.737 [INFO][6083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Sep 10 00:32:59.772887 containerd[1553]: 2025-09-10 00:32:59.737 [INFO][6083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" iface="eth0" netns="" Sep 10 00:32:59.772887 containerd[1553]: 2025-09-10 00:32:59.737 [INFO][6083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Sep 10 00:32:59.772887 containerd[1553]: 2025-09-10 00:32:59.737 [INFO][6083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Sep 10 00:32:59.772887 containerd[1553]: 2025-09-10 00:32:59.760 [INFO][6092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" HandleID="k8s-pod-network.3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Workload="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:59.772887 containerd[1553]: 2025-09-10 00:32:59.760 [INFO][6092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:59.772887 containerd[1553]: 2025-09-10 00:32:59.760 [INFO][6092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:59.772887 containerd[1553]: 2025-09-10 00:32:59.765 [WARNING][6092] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" HandleID="k8s-pod-network.3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Workload="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:59.772887 containerd[1553]: 2025-09-10 00:32:59.765 [INFO][6092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" HandleID="k8s-pod-network.3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Workload="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:59.772887 containerd[1553]: 2025-09-10 00:32:59.766 [INFO][6092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:59.772887 containerd[1553]: 2025-09-10 00:32:59.769 [INFO][6083] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Sep 10 00:32:59.773345 containerd[1553]: time="2025-09-10T00:32:59.772946088Z" level=info msg="TearDown network for sandbox \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\" successfully" Sep 10 00:32:59.773345 containerd[1553]: time="2025-09-10T00:32:59.772975172Z" level=info msg="StopPodSandbox for \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\" returns successfully" Sep 10 00:32:59.773590 containerd[1553]: time="2025-09-10T00:32:59.773560020Z" level=info msg="RemovePodSandbox for \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\"" Sep 10 00:32:59.773622 containerd[1553]: time="2025-09-10T00:32:59.773600987Z" level=info msg="Forcibly stopping sandbox \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\"" Sep 10 00:32:59.841413 containerd[1553]: 2025-09-10 00:32:59.806 [WARNING][6110] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0", GenerateName:"calico-apiserver-954df557d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b9b71be-4940-4c7a-8e7a-6616e525febf", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"954df557d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3342d3b08d6acbbfc28410fe11d99e4058eba84288d5c220dc49abfb9c3b8f0c", Pod:"calico-apiserver-954df557d-7jgtm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali394065eb720", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:59.841413 containerd[1553]: 2025-09-10 00:32:59.806 [INFO][6110] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Sep 10 00:32:59.841413 containerd[1553]: 2025-09-10 00:32:59.806 [INFO][6110] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" iface="eth0" netns="" Sep 10 00:32:59.841413 containerd[1553]: 2025-09-10 00:32:59.806 [INFO][6110] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Sep 10 00:32:59.841413 containerd[1553]: 2025-09-10 00:32:59.806 [INFO][6110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Sep 10 00:32:59.841413 containerd[1553]: 2025-09-10 00:32:59.828 [INFO][6120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" HandleID="k8s-pod-network.3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Workload="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:59.841413 containerd[1553]: 2025-09-10 00:32:59.828 [INFO][6120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:59.841413 containerd[1553]: 2025-09-10 00:32:59.828 [INFO][6120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:59.841413 containerd[1553]: 2025-09-10 00:32:59.834 [WARNING][6120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" HandleID="k8s-pod-network.3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Workload="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:59.841413 containerd[1553]: 2025-09-10 00:32:59.834 [INFO][6120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" HandleID="k8s-pod-network.3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Workload="localhost-k8s-calico--apiserver--954df557d--7jgtm-eth0" Sep 10 00:32:59.841413 containerd[1553]: 2025-09-10 00:32:59.835 [INFO][6120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:59.841413 containerd[1553]: 2025-09-10 00:32:59.838 [INFO][6110] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea" Sep 10 00:32:59.841918 containerd[1553]: time="2025-09-10T00:32:59.841458780Z" level=info msg="TearDown network for sandbox \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\" successfully" Sep 10 00:32:59.845498 containerd[1553]: time="2025-09-10T00:32:59.845471601Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:32:59.845549 containerd[1553]: time="2025-09-10T00:32:59.845537605Z" level=info msg="RemovePodSandbox \"3fb548a73460b1bbe159cb22d805a74efe0f86924cf31a4c9b6ed6e42507b8ea\" returns successfully" Sep 10 00:32:59.846046 containerd[1553]: time="2025-09-10T00:32:59.846025320Z" level=info msg="StopPodSandbox for \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\"" Sep 10 00:32:59.912528 containerd[1553]: 2025-09-10 00:32:59.879 [WARNING][6138] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lkztn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"da4aa882-2a3b-4ce6-a838-c6e29f20e7da", ResourceVersion:"1220", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b", Pod:"csi-node-driver-lkztn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3fc0530cfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:59.912528 containerd[1553]: 2025-09-10 00:32:59.879 [INFO][6138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Sep 10 00:32:59.912528 containerd[1553]: 2025-09-10 00:32:59.879 [INFO][6138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" iface="eth0" netns="" Sep 10 00:32:59.912528 containerd[1553]: 2025-09-10 00:32:59.879 [INFO][6138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Sep 10 00:32:59.912528 containerd[1553]: 2025-09-10 00:32:59.879 [INFO][6138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Sep 10 00:32:59.912528 containerd[1553]: 2025-09-10 00:32:59.898 [INFO][6147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" HandleID="k8s-pod-network.ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Workload="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:59.912528 containerd[1553]: 2025-09-10 00:32:59.898 [INFO][6147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:59.912528 containerd[1553]: 2025-09-10 00:32:59.898 [INFO][6147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:59.912528 containerd[1553]: 2025-09-10 00:32:59.905 [WARNING][6147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" HandleID="k8s-pod-network.ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Workload="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:59.912528 containerd[1553]: 2025-09-10 00:32:59.905 [INFO][6147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" HandleID="k8s-pod-network.ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Workload="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:59.912528 containerd[1553]: 2025-09-10 00:32:59.907 [INFO][6147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:59.912528 containerd[1553]: 2025-09-10 00:32:59.909 [INFO][6138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Sep 10 00:32:59.913036 containerd[1553]: time="2025-09-10T00:32:59.912580509Z" level=info msg="TearDown network for sandbox \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\" successfully" Sep 10 00:32:59.913036 containerd[1553]: time="2025-09-10T00:32:59.912610064Z" level=info msg="StopPodSandbox for \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\" returns successfully" Sep 10 00:32:59.913191 containerd[1553]: time="2025-09-10T00:32:59.913149016Z" level=info msg="RemovePodSandbox for \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\"" Sep 10 00:32:59.913191 containerd[1553]: time="2025-09-10T00:32:59.913181327Z" level=info msg="Forcibly stopping sandbox \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\"" Sep 10 00:32:59.978891 containerd[1553]: 2025-09-10 00:32:59.945 [WARNING][6165] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lkztn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"da4aa882-2a3b-4ce6-a838-c6e29f20e7da", ResourceVersion:"1220", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"833be9c8bd57ac84b1e32a66bf93aa933edc8e10cec745cd6850bd2d324b447b", Pod:"csi-node-driver-lkztn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3fc0530cfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:32:59.978891 containerd[1553]: 2025-09-10 00:32:59.946 [INFO][6165] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Sep 10 00:32:59.978891 containerd[1553]: 2025-09-10 00:32:59.946 [INFO][6165] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" iface="eth0" netns="" Sep 10 00:32:59.978891 containerd[1553]: 2025-09-10 00:32:59.946 [INFO][6165] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Sep 10 00:32:59.978891 containerd[1553]: 2025-09-10 00:32:59.946 [INFO][6165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Sep 10 00:32:59.978891 containerd[1553]: 2025-09-10 00:32:59.966 [INFO][6174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" HandleID="k8s-pod-network.ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Workload="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:59.978891 containerd[1553]: 2025-09-10 00:32:59.967 [INFO][6174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:32:59.978891 containerd[1553]: 2025-09-10 00:32:59.967 [INFO][6174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:32:59.978891 containerd[1553]: 2025-09-10 00:32:59.972 [WARNING][6174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" HandleID="k8s-pod-network.ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Workload="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:59.978891 containerd[1553]: 2025-09-10 00:32:59.972 [INFO][6174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" HandleID="k8s-pod-network.ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Workload="localhost-k8s-csi--node--driver--lkztn-eth0" Sep 10 00:32:59.978891 containerd[1553]: 2025-09-10 00:32:59.973 [INFO][6174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:32:59.978891 containerd[1553]: 2025-09-10 00:32:59.975 [INFO][6165] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde" Sep 10 00:32:59.978891 containerd[1553]: time="2025-09-10T00:32:59.978838149Z" level=info msg="TearDown network for sandbox \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\" successfully" Sep 10 00:32:59.982887 containerd[1553]: time="2025-09-10T00:32:59.982861329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:32:59.982944 containerd[1553]: time="2025-09-10T00:32:59.982915372Z" level=info msg="RemovePodSandbox \"ae1f6a1a44a70dce3f5d1f37ea7bdce34608943d0e296bddab6c91e91cce0bde\" returns successfully" Sep 10 00:32:59.983474 containerd[1553]: time="2025-09-10T00:32:59.983437150Z" level=info msg="StopPodSandbox for \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\"" Sep 10 00:33:00.048417 containerd[1553]: 2025-09-10 00:33:00.016 [WARNING][6192] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--cwk66-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"444dd844-3e2d-443a-aa9b-1761b39df54b", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00", Pod:"goldmane-7988f88666-cwk66", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali568b9aa7b6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:33:00.048417 containerd[1553]: 2025-09-10 00:33:00.016 [INFO][6192] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Sep 10 00:33:00.048417 containerd[1553]: 2025-09-10 00:33:00.016 [INFO][6192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" iface="eth0" netns="" Sep 10 00:33:00.048417 containerd[1553]: 2025-09-10 00:33:00.016 [INFO][6192] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Sep 10 00:33:00.048417 containerd[1553]: 2025-09-10 00:33:00.016 [INFO][6192] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Sep 10 00:33:00.048417 containerd[1553]: 2025-09-10 00:33:00.036 [INFO][6201] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" HandleID="k8s-pod-network.fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Workload="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:33:00.048417 containerd[1553]: 2025-09-10 00:33:00.036 [INFO][6201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:33:00.048417 containerd[1553]: 2025-09-10 00:33:00.036 [INFO][6201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:33:00.048417 containerd[1553]: 2025-09-10 00:33:00.041 [WARNING][6201] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" HandleID="k8s-pod-network.fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Workload="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:33:00.048417 containerd[1553]: 2025-09-10 00:33:00.041 [INFO][6201] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" HandleID="k8s-pod-network.fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Workload="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:33:00.048417 containerd[1553]: 2025-09-10 00:33:00.043 [INFO][6201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:33:00.048417 containerd[1553]: 2025-09-10 00:33:00.045 [INFO][6192] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Sep 10 00:33:00.048842 containerd[1553]: time="2025-09-10T00:33:00.048458588Z" level=info msg="TearDown network for sandbox \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\" successfully" Sep 10 00:33:00.048842 containerd[1553]: time="2025-09-10T00:33:00.048489386Z" level=info msg="StopPodSandbox for \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\" returns successfully" Sep 10 00:33:00.049021 containerd[1553]: time="2025-09-10T00:33:00.048987159Z" level=info msg="RemovePodSandbox for \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\"" Sep 10 00:33:00.049021 containerd[1553]: time="2025-09-10T00:33:00.049017206Z" level=info msg="Forcibly stopping sandbox \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\"" Sep 10 00:33:00.120738 containerd[1553]: 2025-09-10 00:33:00.084 [WARNING][6220] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--cwk66-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"444dd844-3e2d-443a-aa9b-1761b39df54b", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 32, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb16f27aa405160f8600c36832ac94b7ca435c5da6bacf74757316874b37bc00", Pod:"goldmane-7988f88666-cwk66", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali568b9aa7b6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:33:00.120738 containerd[1553]: 2025-09-10 00:33:00.084 [INFO][6220] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Sep 10 00:33:00.120738 containerd[1553]: 2025-09-10 00:33:00.084 [INFO][6220] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" iface="eth0" netns="" Sep 10 00:33:00.120738 containerd[1553]: 2025-09-10 00:33:00.084 [INFO][6220] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Sep 10 00:33:00.120738 containerd[1553]: 2025-09-10 00:33:00.084 [INFO][6220] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Sep 10 00:33:00.120738 containerd[1553]: 2025-09-10 00:33:00.106 [INFO][6229] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" HandleID="k8s-pod-network.fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Workload="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:33:00.120738 containerd[1553]: 2025-09-10 00:33:00.106 [INFO][6229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:33:00.120738 containerd[1553]: 2025-09-10 00:33:00.106 [INFO][6229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:33:00.120738 containerd[1553]: 2025-09-10 00:33:00.113 [WARNING][6229] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" HandleID="k8s-pod-network.fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Workload="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:33:00.120738 containerd[1553]: 2025-09-10 00:33:00.113 [INFO][6229] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" HandleID="k8s-pod-network.fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Workload="localhost-k8s-goldmane--7988f88666--cwk66-eth0" Sep 10 00:33:00.120738 containerd[1553]: 2025-09-10 00:33:00.114 [INFO][6229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:33:00.120738 containerd[1553]: 2025-09-10 00:33:00.117 [INFO][6220] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6" Sep 10 00:33:00.121178 containerd[1553]: time="2025-09-10T00:33:00.120789611Z" level=info msg="TearDown network for sandbox \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\" successfully" Sep 10 00:33:00.125031 containerd[1553]: time="2025-09-10T00:33:00.125003339Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:33:00.125084 containerd[1553]: time="2025-09-10T00:33:00.125067639Z" level=info msg="RemovePodSandbox \"fc6344c43d5e9499ecdd3192df3096d3a1ab03f23050a0b9d31f1ec0eb451ef6\" returns successfully" Sep 10 00:33:00.305315 systemd-journald[1156]: Under memory pressure, flushing caches. Sep 10 00:33:00.304544 systemd-resolved[1456]: Under memory pressure, flushing caches. Sep 10 00:33:00.304587 systemd-resolved[1456]: Flushed all caches. Sep 10 00:33:02.352652 systemd-resolved[1456]: Under memory pressure, flushing caches. Sep 10 00:33:02.354676 systemd-journald[1156]: Under memory pressure, flushing caches. Sep 10 00:33:02.352660 systemd-resolved[1456]: Flushed all caches. Sep 10 00:33:02.403519 systemd[1]: Started sshd@14-10.0.0.21:22-10.0.0.1:49144.service - OpenSSH per-connection server daemon (10.0.0.1:49144). Sep 10 00:33:02.467795 sshd[6237]: Accepted publickey for core from 10.0.0.1 port 49144 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:33:02.469759 sshd[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:33:02.475533 systemd-logind[1531]: New session 15 of user core. Sep 10 00:33:02.478776 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 10 00:33:02.690465 sshd[6237]: pam_unix(sshd:session): session closed for user core Sep 10 00:33:02.694998 systemd[1]: sshd@14-10.0.0.21:22-10.0.0.1:49144.service: Deactivated successfully. Sep 10 00:33:02.697612 systemd-logind[1531]: Session 15 logged out. Waiting for processes to exit. Sep 10 00:33:02.697744 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 00:33:02.699107 systemd-logind[1531]: Removed session 15. Sep 10 00:33:07.705523 systemd[1]: Started sshd@15-10.0.0.21:22-10.0.0.1:49150.service - OpenSSH per-connection server daemon (10.0.0.1:49150). 
Sep 10 00:33:07.739742 sshd[6259]: Accepted publickey for core from 10.0.0.1 port 49150 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:33:07.741381 sshd[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:33:07.745178 systemd-logind[1531]: New session 16 of user core. Sep 10 00:33:07.755518 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 10 00:33:07.879083 sshd[6259]: pam_unix(sshd:session): session closed for user core Sep 10 00:33:07.883005 systemd[1]: sshd@15-10.0.0.21:22-10.0.0.1:49150.service: Deactivated successfully. Sep 10 00:33:07.885551 systemd-logind[1531]: Session 16 logged out. Waiting for processes to exit. Sep 10 00:33:07.885661 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 00:33:07.886736 systemd-logind[1531]: Removed session 16. Sep 10 00:33:11.348034 kubelet[2646]: E0910 00:33:11.347178 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:33:12.139107 systemd[1]: run-containerd-runc-k8s.io-dedb6c8d4974462c40af73be1a53180a72891f7c5101dba8e4032a7345a971b1-runc.GMGHOV.mount: Deactivated successfully. Sep 10 00:33:12.890749 systemd[1]: Started sshd@16-10.0.0.21:22-10.0.0.1:58618.service - OpenSSH per-connection server daemon (10.0.0.1:58618). Sep 10 00:33:12.932145 sshd[6296]: Accepted publickey for core from 10.0.0.1 port 58618 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:33:12.934142 sshd[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:33:12.939442 systemd-logind[1531]: New session 17 of user core. Sep 10 00:33:12.950521 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 10 00:33:13.097853 sshd[6296]: pam_unix(sshd:session): session closed for user core Sep 10 00:33:13.105668 systemd[1]: Started sshd@17-10.0.0.21:22-10.0.0.1:58628.service - OpenSSH per-connection server daemon (10.0.0.1:58628). Sep 10 00:33:13.106328 systemd[1]: sshd@16-10.0.0.21:22-10.0.0.1:58618.service: Deactivated successfully. Sep 10 00:33:13.110663 systemd-logind[1531]: Session 17 logged out. Waiting for processes to exit. Sep 10 00:33:13.112408 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 00:33:13.113516 systemd-logind[1531]: Removed session 17. Sep 10 00:33:13.146189 sshd[6308]: Accepted publickey for core from 10.0.0.1 port 58628 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:33:13.147792 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:33:13.152634 systemd-logind[1531]: New session 18 of user core. Sep 10 00:33:13.165513 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 10 00:33:13.336777 kubelet[2646]: E0910 00:33:13.336713 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:33:13.524760 sshd[6308]: pam_unix(sshd:session): session closed for user core Sep 10 00:33:13.534529 systemd[1]: Started sshd@18-10.0.0.21:22-10.0.0.1:58636.service - OpenSSH per-connection server daemon (10.0.0.1:58636). Sep 10 00:33:13.535161 systemd[1]: sshd@17-10.0.0.21:22-10.0.0.1:58628.service: Deactivated successfully. Sep 10 00:33:13.540246 systemd-logind[1531]: Session 18 logged out. Waiting for processes to exit. 
Sep 10 00:33:13.543855 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 00:33:13.546359 systemd-logind[1531]: Removed session 18. Sep 10 00:33:13.579801 sshd[6322]: Accepted publickey for core from 10.0.0.1 port 58636 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:33:13.581556 sshd[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:33:13.585886 systemd-logind[1531]: New session 19 of user core. Sep 10 00:33:13.595529 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 10 00:33:15.189762 sshd[6322]: pam_unix(sshd:session): session closed for user core Sep 10 00:33:15.202284 systemd[1]: Started sshd@19-10.0.0.21:22-10.0.0.1:58648.service - OpenSSH per-connection server daemon (10.0.0.1:58648). Sep 10 00:33:15.203030 systemd[1]: sshd@18-10.0.0.21:22-10.0.0.1:58636.service: Deactivated successfully. Sep 10 00:33:15.208368 systemd-logind[1531]: Session 19 logged out. Waiting for processes to exit. Sep 10 00:33:15.209135 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 00:33:15.214417 systemd-logind[1531]: Removed session 19. Sep 10 00:33:15.264939 sshd[6345]: Accepted publickey for core from 10.0.0.1 port 58648 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:33:15.267332 sshd[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:33:15.272327 systemd-logind[1531]: New session 20 of user core. Sep 10 00:33:15.282579 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 10 00:33:15.768608 sshd[6345]: pam_unix(sshd:session): session closed for user core Sep 10 00:33:15.779188 systemd[1]: Started sshd@20-10.0.0.21:22-10.0.0.1:58660.service - OpenSSH per-connection server daemon (10.0.0.1:58660). Sep 10 00:33:15.779989 systemd[1]: sshd@19-10.0.0.21:22-10.0.0.1:58648.service: Deactivated successfully. Sep 10 00:33:15.783560 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 00:33:15.786925 systemd-logind[1531]: Session 20 logged out. Waiting for processes to exit. Sep 10 00:33:15.788405 systemd-logind[1531]: Removed session 20. Sep 10 00:33:15.818053 sshd[6360]: Accepted publickey for core from 10.0.0.1 port 58660 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:33:15.818705 sshd[6360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:33:15.824733 systemd-logind[1531]: New session 21 of user core. Sep 10 00:33:15.838612 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 10 00:33:15.961722 sshd[6360]: pam_unix(sshd:session): session closed for user core Sep 10 00:33:15.966962 systemd[1]: sshd@20-10.0.0.21:22-10.0.0.1:58660.service: Deactivated successfully. Sep 10 00:33:15.969791 systemd-logind[1531]: Session 21 logged out. Waiting for processes to exit. Sep 10 00:33:15.970374 systemd[1]: session-21.scope: Deactivated successfully. Sep 10 00:33:15.972007 systemd-logind[1531]: Removed session 21. Sep 10 00:33:20.974481 systemd[1]: Started sshd@21-10.0.0.21:22-10.0.0.1:60492.service - OpenSSH per-connection server daemon (10.0.0.1:60492). Sep 10 00:33:21.016687 sshd[6405]: Accepted publickey for core from 10.0.0.1 port 60492 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:33:21.018673 sshd[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:33:21.023545 systemd-logind[1531]: New session 22 of user core. 
Sep 10 00:33:21.034524 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 10 00:33:21.236741 sshd[6405]: pam_unix(sshd:session): session closed for user core Sep 10 00:33:21.240961 systemd[1]: sshd@21-10.0.0.21:22-10.0.0.1:60492.service: Deactivated successfully. Sep 10 00:33:21.245132 systemd[1]: session-22.scope: Deactivated successfully. Sep 10 00:33:21.246650 systemd-logind[1531]: Session 22 logged out. Waiting for processes to exit. Sep 10 00:33:21.247574 systemd-logind[1531]: Removed session 22. Sep 10 00:33:24.337009 kubelet[2646]: E0910 00:33:24.336938 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:33:25.336536 kubelet[2646]: E0910 00:33:25.336488 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:33:26.246525 systemd[1]: Started sshd@22-10.0.0.21:22-10.0.0.1:60502.service - OpenSSH per-connection server daemon (10.0.0.1:60502). Sep 10 00:33:26.305228 sshd[6448]: Accepted publickey for core from 10.0.0.1 port 60502 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:33:26.307608 sshd[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:33:26.313134 systemd-logind[1531]: New session 23 of user core. Sep 10 00:33:26.322507 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 10 00:33:26.465794 sshd[6448]: pam_unix(sshd:session): session closed for user core Sep 10 00:33:26.470192 systemd[1]: sshd@22-10.0.0.21:22-10.0.0.1:60502.service: Deactivated successfully. Sep 10 00:33:26.473383 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 00:33:26.474029 systemd-logind[1531]: Session 23 logged out. Waiting for processes to exit. Sep 10 00:33:26.474979 systemd-logind[1531]: Removed session 23. Sep 10 00:33:31.481641 systemd[1]: Started sshd@23-10.0.0.21:22-10.0.0.1:41976.service - OpenSSH per-connection server daemon (10.0.0.1:41976). Sep 10 00:33:31.517706 sshd[6464]: Accepted publickey for core from 10.0.0.1 port 41976 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:33:31.519654 sshd[6464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:33:31.524247 systemd-logind[1531]: New session 24 of user core. Sep 10 00:33:31.535592 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 10 00:33:31.650660 sshd[6464]: pam_unix(sshd:session): session closed for user core Sep 10 00:33:31.655110 systemd[1]: sshd@23-10.0.0.21:22-10.0.0.1:41976.service: Deactivated successfully. Sep 10 00:33:31.661209 systemd[1]: session-24.scope: Deactivated successfully. Sep 10 00:33:31.661227 systemd-logind[1531]: Session 24 logged out. Waiting for processes to exit. Sep 10 00:33:31.664329 systemd-logind[1531]: Removed session 24. Sep 10 00:33:33.338265 kubelet[2646]: E0910 00:33:33.336314 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:33:36.662754 systemd[1]: Started sshd@24-10.0.0.21:22-10.0.0.1:41990.service - OpenSSH per-connection server daemon (10.0.0.1:41990). 
Sep 10 00:33:36.704293 sshd[6502]: Accepted publickey for core from 10.0.0.1 port 41990 ssh2: RSA SHA256:yotFPVH/8pVol0IcCMTpL4axYdSEk1J0cKg1+3rpd1s Sep 10 00:33:36.706143 sshd[6502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:33:36.713841 systemd-logind[1531]: New session 25 of user core. Sep 10 00:33:36.719664 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 10 00:33:36.932793 sshd[6502]: pam_unix(sshd:session): session closed for user core Sep 10 00:33:36.937141 systemd[1]: sshd@24-10.0.0.21:22-10.0.0.1:41990.service: Deactivated successfully. Sep 10 00:33:36.940190 systemd-logind[1531]: Session 25 logged out. Waiting for processes to exit. Sep 10 00:33:36.940360 systemd[1]: session-25.scope: Deactivated successfully. Sep 10 00:33:36.941518 systemd-logind[1531]: Removed session 25.