Sep 12 17:34:29.969828 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025 Sep 12 17:34:29.969854 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:34:29.969870 kernel: BIOS-provided physical RAM map: Sep 12 17:34:29.969877 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 12 17:34:29.969883 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 12 17:34:29.969890 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 12 17:34:29.969899 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Sep 12 17:34:29.969905 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Sep 12 17:34:29.969917 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 12 17:34:29.969927 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 12 17:34:29.969935 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 12 17:34:29.969942 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 12 17:34:29.969951 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 12 17:34:29.969958 kernel: NX (Execute Disable) protection: active Sep 12 17:34:29.969965 kernel: APIC: Static calls initialized Sep 12 17:34:29.969979 kernel: SMBIOS 2.8 present. 
Sep 12 17:34:29.969986 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Sep 12 17:34:29.969993 kernel: Hypervisor detected: KVM Sep 12 17:34:29.969999 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 12 17:34:29.970006 kernel: kvm-clock: using sched offset of 4206621645 cycles Sep 12 17:34:29.970013 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 12 17:34:29.970020 kernel: tsc: Detected 2794.750 MHz processor Sep 12 17:34:29.970027 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 17:34:29.970034 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 17:34:29.970041 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Sep 12 17:34:29.970051 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 12 17:34:29.970058 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 17:34:29.970065 kernel: Using GB pages for direct mapping Sep 12 17:34:29.970072 kernel: ACPI: Early table checksum verification disabled Sep 12 17:34:29.970078 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Sep 12 17:34:29.970085 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:34:29.970092 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:34:29.970099 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:34:29.970108 kernel: ACPI: FACS 0x000000009CFE0000 000040 Sep 12 17:34:29.970115 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:34:29.970122 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:34:29.970129 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:34:29.970135 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:34:29.970142 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Sep 12 17:34:29.970149 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Sep 12 17:34:29.970160 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Sep 12 17:34:29.970173 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Sep 12 17:34:29.970181 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Sep 12 17:34:29.970188 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Sep 12 17:34:29.970197 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Sep 12 17:34:29.970207 kernel: No NUMA configuration found Sep 12 17:34:29.970214 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Sep 12 17:34:29.970221 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Sep 12 17:34:29.970231 kernel: Zone ranges: Sep 12 17:34:29.970238 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 17:34:29.970250 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Sep 12 17:34:29.970257 kernel: Normal empty Sep 12 17:34:29.970264 kernel: Movable zone start for each node Sep 12 17:34:29.970271 kernel: Early memory node ranges Sep 12 17:34:29.970278 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 12 17:34:29.970287 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Sep 12 17:34:29.970294 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Sep 12 17:34:29.970304 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 17:34:29.970314 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 12 17:34:29.970321 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 12 17:34:29.970328 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 12 17:34:29.970335 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 12 17:34:29.970342 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 12 17:34:29.970349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 12 17:34:29.970356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 12 17:34:29.970363 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 12 17:34:29.970373 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 12 17:34:29.970380 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 12 17:34:29.970387 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 17:34:29.970394 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 12 17:34:29.970418 kernel: TSC deadline timer available Sep 12 17:34:29.970452 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 12 17:34:29.970459 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 12 17:34:29.970466 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 12 17:34:29.970476 kernel: kvm-guest: setup PV sched yield Sep 12 17:34:29.970486 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 12 17:34:29.970494 kernel: Booting paravirtualized kernel on KVM Sep 12 17:34:29.970501 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 17:34:29.970508 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 12 17:34:29.970519 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 12 17:34:29.970526 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 12 17:34:29.970533 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 12 17:34:29.970540 kernel: kvm-guest: PV spinlocks enabled Sep 12 17:34:29.970547 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 12 17:34:29.970558 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:34:29.970566 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:34:29.970573 kernel: random: crng init done Sep 12 17:34:29.970580 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:34:29.970587 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:34:29.970594 kernel: Fallback order for Node 0: 0 Sep 12 17:34:29.970601 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Sep 12 17:34:29.970608 kernel: Policy zone: DMA32 Sep 12 17:34:29.970618 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:34:29.970625 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 136900K reserved, 0K cma-reserved) Sep 12 17:34:29.970632 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 12 17:34:29.970640 kernel: ftrace: allocating 37974 entries in 149 pages Sep 12 17:34:29.970647 kernel: ftrace: allocated 149 pages with 4 groups Sep 12 17:34:29.970654 kernel: Dynamic Preempt: voluntary Sep 12 17:34:29.970661 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:34:29.970669 kernel: rcu: RCU event tracing is enabled. Sep 12 17:34:29.970676 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 12 17:34:29.970686 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:34:29.970693 kernel: Rude variant of Tasks RCU enabled. Sep 12 17:34:29.970700 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:34:29.970707 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 17:34:29.970726 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 12 17:34:29.970734 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 12 17:34:29.970741 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:34:29.970748 kernel: Console: colour VGA+ 80x25 Sep 12 17:34:29.970755 kernel: printk: console [ttyS0] enabled Sep 12 17:34:29.970762 kernel: ACPI: Core revision 20230628 Sep 12 17:34:29.970772 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 12 17:34:29.970780 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 17:34:29.970787 kernel: x2apic enabled Sep 12 17:34:29.970794 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 17:34:29.970801 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 12 17:34:29.970808 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 12 17:34:29.970816 kernel: kvm-guest: setup PV IPIs Sep 12 17:34:29.970843 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 12 17:34:29.970851 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 12 17:34:29.970858 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Sep 12 17:34:29.970872 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 12 17:34:29.970883 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 12 17:34:29.970890 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 12 17:34:29.970898 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 17:34:29.970906 kernel: Spectre V2 : Mitigation: Retpolines Sep 12 17:34:29.970913 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 12 17:34:29.970923 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 12 17:34:29.970931 kernel: active return thunk: retbleed_return_thunk Sep 12 17:34:29.970940 kernel: RETBleed: Mitigation: untrained return thunk Sep 12 17:34:29.970948 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 12 17:34:29.970963 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 12 17:34:29.970971 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 12 17:34:29.970981 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 12 17:34:29.970989 kernel: active return thunk: srso_return_thunk Sep 12 17:34:29.970999 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 12 17:34:29.971010 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 17:34:29.971031 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 17:34:29.971049 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 17:34:29.971072 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 17:34:29.971090 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 12 17:34:29.971109 kernel: Freeing SMP alternatives memory: 32K Sep 12 17:34:29.971121 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:34:29.971128 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:34:29.971138 kernel: landlock: Up and running. Sep 12 17:34:29.971146 kernel: SELinux: Initializing. Sep 12 17:34:29.971154 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:34:29.971161 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:34:29.971169 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 12 17:34:29.971176 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:34:29.971184 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:34:29.971192 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:34:29.971202 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 12 17:34:29.971212 kernel: ... version: 0 Sep 12 17:34:29.971219 kernel: ... bit width: 48 Sep 12 17:34:29.971227 kernel: ... generic registers: 6 Sep 12 17:34:29.971242 kernel: ... value mask: 0000ffffffffffff Sep 12 17:34:29.971253 kernel: ... max period: 00007fffffffffff Sep 12 17:34:29.971262 kernel: ... fixed-purpose events: 0 Sep 12 17:34:29.971272 kernel: ... 
event mask: 000000000000003f Sep 12 17:34:29.971279 kernel: signal: max sigframe size: 1776 Sep 12 17:34:29.971286 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:34:29.971320 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:34:29.971327 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:34:29.971335 kernel: smpboot: x86: Booting SMP configuration: Sep 12 17:34:29.971342 kernel: .... node #0, CPUs: #1 #2 #3 Sep 12 17:34:29.971350 kernel: smp: Brought up 1 node, 4 CPUs Sep 12 17:34:29.971357 kernel: smpboot: Max logical packages: 1 Sep 12 17:34:29.971364 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Sep 12 17:34:29.971372 kernel: devtmpfs: initialized Sep 12 17:34:29.971379 kernel: x86/mm: Memory block size: 128MB Sep 12 17:34:29.971390 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:34:29.971397 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 12 17:34:29.971405 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:34:29.971412 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:34:29.971420 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:34:29.971438 kernel: audit: type=2000 audit(1757698468.563:1): state=initialized audit_enabled=0 res=1 Sep 12 17:34:29.971446 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:34:29.971454 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 17:34:29.971461 kernel: cpuidle: using governor menu Sep 12 17:34:29.971471 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:34:29.971479 kernel: dca service started, version 1.12.1 Sep 12 17:34:29.971487 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 12 17:34:29.971495 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 12 17:34:29.971502 kernel: PCI: Using configuration type 1 for base access Sep 12 17:34:29.971510 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 12 17:34:29.971517 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:34:29.971525 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:34:29.971532 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:34:29.971542 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:34:29.971550 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:34:29.971557 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:34:29.971565 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:34:29.971572 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:34:29.971580 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 12 17:34:29.971587 kernel: ACPI: Interpreter enabled Sep 12 17:34:29.971595 kernel: ACPI: PM: (supports S0 S3 S5) Sep 12 17:34:29.971602 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 17:34:29.971613 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 17:34:29.971620 kernel: PCI: Using E820 reservations for host bridge windows Sep 12 17:34:29.971628 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 12 17:34:29.971635 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 17:34:29.971848 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 17:34:29.971985 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 12 17:34:29.972106 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 12 17:34:29.972116 kernel: PCI host bridge to bus 0000:00 Sep 12 17:34:29.972261 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 12 17:34:29.972374 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 12 17:34:29.972501 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 12 17:34:29.972614 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 12 17:34:29.972734 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 12 17:34:29.972850 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 12 17:34:29.972965 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 17:34:29.973117 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 12 17:34:29.973260 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 12 17:34:29.973382 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Sep 12 17:34:29.973518 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Sep 12 17:34:29.973643 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Sep 12 17:34:29.973782 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 12 17:34:29.973962 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 12 17:34:29.974104 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Sep 12 17:34:29.974268 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Sep 12 17:34:29.974405 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Sep 12 17:34:29.974591 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 12 17:34:29.974738 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Sep 12 17:34:29.974861 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Sep 12 17:34:29.974989 kernel: pci 0000:00:03.0: reg 0x20: [mem 
0xfe004000-0xfe007fff 64bit pref] Sep 12 17:34:29.975146 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 12 17:34:29.975350 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Sep 12 17:34:29.975497 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Sep 12 17:34:29.975641 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Sep 12 17:34:29.975779 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Sep 12 17:34:29.975916 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 12 17:34:29.976042 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 12 17:34:29.976191 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 12 17:34:29.976313 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Sep 12 17:34:29.976474 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Sep 12 17:34:29.976618 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 12 17:34:29.976749 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 12 17:34:29.976766 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 12 17:34:29.976775 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 12 17:34:29.976783 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 12 17:34:29.976790 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 12 17:34:29.976798 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 12 17:34:29.976805 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 12 17:34:29.976813 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 12 17:34:29.976820 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 12 17:34:29.976828 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 12 17:34:29.976840 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 12 17:34:29.976848 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 12 17:34:29.976855 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 12 17:34:29.976863 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 12 17:34:29.976870 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 12 17:34:29.976880 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 12 17:34:29.976888 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 12 17:34:29.976895 kernel: iommu: Default domain type: Translated Sep 12 17:34:29.976903 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 17:34:29.976913 kernel: PCI: Using ACPI for IRQ routing Sep 12 17:34:29.976920 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 12 17:34:29.976928 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 12 17:34:29.976935 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Sep 12 17:34:29.977058 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 12 17:34:29.977176 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 12 17:34:29.977313 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 12 17:34:29.977323 kernel: vgaarb: loaded Sep 12 17:34:29.977335 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 12 17:34:29.977342 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 12 17:34:29.977350 kernel: clocksource: Switched to clocksource kvm-clock Sep 12 17:34:29.977357 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:34:29.977365 kernel: VFS: Dquot-cache hash table 
entries: 512 (order 0, 4096 bytes) Sep 12 17:34:29.977372 kernel: pnp: PnP ACPI init Sep 12 17:34:29.977528 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 12 17:34:29.977540 kernel: pnp: PnP ACPI: found 6 devices Sep 12 17:34:29.977547 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 17:34:29.977559 kernel: NET: Registered PF_INET protocol family Sep 12 17:34:29.977566 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:34:29.977574 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 17:34:29.977582 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:34:29.977590 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:34:29.977597 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 17:34:29.977605 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 17:34:29.977612 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:34:29.977622 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:34:29.977630 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:34:29.977637 kernel: NET: Registered PF_XDP protocol family Sep 12 17:34:29.977760 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 12 17:34:29.977870 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 12 17:34:29.977981 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 12 17:34:29.978090 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 12 17:34:29.978201 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 12 17:34:29.978309 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 12 17:34:29.978328 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:34:29.978339 kernel: Initialise system trusted keyrings Sep 12 17:34:29.978347 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 17:34:29.978355 kernel: Key type asymmetric registered Sep 12 17:34:29.978362 kernel: Asymmetric key parser 'x509' registered Sep 12 17:34:29.978370 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 17:34:29.978377 kernel: io scheduler mq-deadline registered Sep 12 17:34:29.978385 kernel: io scheduler kyber registered Sep 12 17:34:29.978392 kernel: io scheduler bfq registered Sep 12 17:34:29.978403 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 17:34:29.978411 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 12 17:34:29.978419 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 12 17:34:29.978493 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 12 17:34:29.978504 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:34:29.978511 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 17:34:29.978519 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 12 17:34:29.978527 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 17:34:29.978534 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 17:34:29.978546 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 17:34:29.978686 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 12 17:34:29.978812 kernel: rtc_cmos 00:04: registered as rtc0 Sep 12 
17:34:29.978925 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T17:34:29 UTC (1757698469) Sep 12 17:34:29.979036 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 12 17:34:29.979046 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 12 17:34:29.979053 kernel: hpet: Lost 2 RTC interrupts Sep 12 17:34:29.979060 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:34:29.979074 kernel: Segment Routing with IPv6 Sep 12 17:34:29.979082 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:34:29.979089 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:34:29.979099 kernel: Key type dns_resolver registered Sep 12 17:34:29.979107 kernel: IPI shorthand broadcast: enabled Sep 12 17:34:29.979116 kernel: sched_clock: Marking stable (784002028, 107065087)->(903972849, -12905734) Sep 12 17:34:29.979124 kernel: registered taskstats version 1 Sep 12 17:34:29.979132 kernel: Loading compiled-in X.509 certificates Sep 12 17:34:29.979148 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9' Sep 12 17:34:29.979158 kernel: Key type .fscrypt registered Sep 12 17:34:29.979166 kernel: Key type fscrypt-provisioning registered Sep 12 17:34:29.979173 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 17:34:29.979181 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:34:29.979188 kernel: ima: No architecture policies found Sep 12 17:34:29.979196 kernel: clk: Disabling unused clocks Sep 12 17:34:29.979203 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 12 17:34:29.979211 kernel: Write protecting the kernel read-only data: 36864k Sep 12 17:34:29.979221 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 12 17:34:29.979228 kernel: Run /init as init process Sep 12 17:34:29.979236 kernel: with arguments: Sep 12 17:34:29.979243 kernel: /init Sep 12 17:34:29.979251 kernel: with environment: Sep 12 17:34:29.979258 kernel: HOME=/ Sep 12 17:34:29.979265 kernel: TERM=linux Sep 12 17:34:29.979273 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:34:29.979282 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:34:29.979295 systemd[1]: Detected virtualization kvm. Sep 12 17:34:29.979303 systemd[1]: Detected architecture x86-64. Sep 12 17:34:29.979311 systemd[1]: Running in initrd. Sep 12 17:34:29.979319 systemd[1]: No hostname configured, using default hostname. Sep 12 17:34:29.979327 systemd[1]: Hostname set to . Sep 12 17:34:29.979335 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:34:29.979343 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:34:29.979351 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:34:29.979362 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:34:29.979382 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:34:29.979395 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Sep 12 17:34:29.979403 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:34:29.979412 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:34:29.979424 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:34:29.979444 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:34:29.979453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:34:29.979461 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:34:29.979476 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:34:29.979484 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:34:29.979492 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:34:29.979500 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:34:29.979512 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:34:29.979520 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:34:29.979528 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:34:29.979536 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:34:29.979545 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:34:29.979553 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:34:29.979561 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:34:29.979570 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:34:29.979580 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:34:29.979589 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:34:29.979597 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:34:29.979605 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:34:29.979613 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:34:29.979622 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:34:29.979630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:34:29.979638 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:34:29.979647 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:34:29.979657 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:34:29.979684 systemd-journald[193]: Collecting audit messages is disabled. Sep 12 17:34:29.979706 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:34:29.979726 systemd-journald[193]: Journal started Sep 12 17:34:29.979746 systemd-journald[193]: Runtime Journal (/run/log/journal/6a0e54f76fe5470ea159f5dd2e783f22) is 6.0M, max 48.4M, 42.3M free. Sep 12 17:34:29.968229 systemd-modules-load[194]: Inserted module 'overlay' Sep 12 17:34:30.002255 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Sep 12 17:34:30.002272 kernel: Bridge firewalling registered Sep 12 17:34:29.998567 systemd-modules-load[194]: Inserted module 'br_netfilter' Sep 12 17:34:30.006484 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:34:30.007021 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:34:30.009406 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:34:30.011910 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:34:30.032666 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:34:30.036150 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:34:30.038852 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:34:30.042099 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:34:30.052724 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:34:30.055540 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:34:30.058406 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:34:30.061283 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:34:30.071624 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:34:30.074527 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:34:30.085353 dracut-cmdline[227]: dracut-dracut-053 Sep 12 17:34:30.088506 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:34:30.121379 systemd-resolved[229]: Positive Trust Anchors: Sep 12 17:34:30.121396 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:34:30.121457 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:34:30.124333 systemd-resolved[229]: Defaulting to hostname 'linux'. Sep 12 17:34:30.125488 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:34:30.131894 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:34:30.182513 kernel: SCSI subsystem initialized Sep 12 17:34:30.192476 kernel: Loading iSCSI transport class v2.0-870. 
Sep 12 17:34:30.203462 kernel: iscsi: registered transport (tcp) Sep 12 17:34:30.226723 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:34:30.226816 kernel: QLogic iSCSI HBA Driver Sep 12 17:34:30.284372 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:34:30.304662 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:34:30.333485 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:34:30.333543 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:34:30.333555 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:34:30.383481 kernel: raid6: avx2x4 gen() 29236 MB/s Sep 12 17:34:30.400458 kernel: raid6: avx2x2 gen() 29651 MB/s Sep 12 17:34:30.417512 kernel: raid6: avx2x1 gen() 24730 MB/s Sep 12 17:34:30.417535 kernel: raid6: using algorithm avx2x2 gen() 29651 MB/s Sep 12 17:34:30.435569 kernel: raid6: .... xor() 19368 MB/s, rmw enabled Sep 12 17:34:30.435606 kernel: raid6: using avx2x2 recovery algorithm Sep 12 17:34:30.456456 kernel: xor: automatically using best checksumming function avx Sep 12 17:34:30.612461 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:34:30.625422 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:34:30.634619 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:34:30.646141 systemd-udevd[414]: Using default interface naming scheme 'v255'. Sep 12 17:34:30.650948 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:34:30.656695 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:34:30.673329 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Sep 12 17:34:30.706358 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:34:30.722578 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:34:30.792854 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:34:30.805864 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:34:30.818228 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:34:30.820826 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:34:30.823619 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:34:30.826143 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:34:30.830449 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 17:34:30.837613 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 17:34:30.842163 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:34:30.842188 kernel: libata version 3.00 loaded. Sep 12 17:34:30.840855 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:34:30.847771 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:34:30.847802 kernel: GPT:9289727 != 19775487 Sep 12 17:34:30.847812 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:34:30.847822 kernel: GPT:9289727 != 19775487 Sep 12 17:34:30.847831 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 12 17:34:30.847842 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:34:30.852063 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:34:30.893448 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 17:34:30.893651 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 17:34:30.893665 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 17:34:30.894006 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:34:30.895738 kernel: AES CTR mode by8 optimization enabled Sep 12 17:34:30.894418 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:34:30.898507 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:34:30.902978 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 12 17:34:30.903163 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 17:34:30.900621 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:34:30.900848 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:34:30.904013 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:34:30.910470 kernel: scsi host0: ahci Sep 12 17:34:30.912494 kernel: scsi host1: ahci Sep 12 17:34:30.915018 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:34:30.923527 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (470) Sep 12 17:34:30.923545 kernel: scsi host2: ahci Sep 12 17:34:30.923726 kernel: scsi host3: ahci Sep 12 17:34:30.926455 kernel: scsi host4: ahci Sep 12 17:34:30.930105 kernel: scsi host5: ahci Sep 12 17:34:30.930310 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 12 17:34:30.930322 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 12 17:34:30.931704 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 12 17:34:30.931731 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 12 17:34:30.933446 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 12 17:34:30.933468 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 12 17:34:30.938388 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 17:34:30.948454 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467) Sep 12 17:34:30.948750 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 17:34:30.953769 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 17:34:30.954722 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 17:34:30.964461 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:34:31.001567 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:34:31.002023 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:34:31.005866 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:34:31.016773 disk-uuid[555]: Primary Header is updated. 
Sep 12 17:34:31.016773 disk-uuid[555]: Secondary Entries is updated. Sep 12 17:34:31.016773 disk-uuid[555]: Secondary Header is updated. Sep 12 17:34:31.021444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:34:31.026455 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:34:31.029322 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:34:31.033265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:34:31.037451 kernel: block device autoloading is deprecated and will be removed. Sep 12 17:34:31.241459 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 17:34:31.241514 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 17:34:31.242638 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 17:34:31.242654 kernel: ata3.00: applying bridge limits Sep 12 17:34:31.244126 kernel: ata3.00: configured for UDMA/100 Sep 12 17:34:31.244176 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 17:34:31.244194 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 17:34:31.245464 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 17:34:31.251461 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 17:34:31.251480 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 17:34:31.305010 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 17:34:31.305234 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 17:34:31.317457 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 17:34:32.032342 disk-uuid[559]: The operation has completed successfully. Sep 12 17:34:32.033868 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:34:32.087941 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:34:32.088073 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:34:32.092694 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:34:32.097150 sh[596]: Success Sep 12 17:34:32.111477 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 12 17:34:32.148632 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:34:32.172071 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:34:32.175888 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:34:32.190105 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19 Sep 12 17:34:32.190136 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:34:32.190147 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:34:32.191088 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:34:32.191811 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:34:32.196930 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:34:32.199179 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:34:32.207678 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:34:32.210297 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 12 17:34:32.218688 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:34:32.218716 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:34:32.218733 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:34:32.222765 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:34:32.233324 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 17:34:32.235109 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:34:32.245708 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:34:32.253624 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:34:32.433244 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:34:32.442556 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:34:32.448852 ignition[686]: Ignition 2.19.0 Sep 12 17:34:32.448865 ignition[686]: Stage: fetch-offline Sep 12 17:34:32.449290 ignition[686]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:34:32.449304 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:34:32.449461 ignition[686]: parsed url from cmdline: "" Sep 12 17:34:32.449465 ignition[686]: no config URL provided Sep 12 17:34:32.449471 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:34:32.449480 ignition[686]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:34:32.449508 ignition[686]: op(1): [started] loading QEMU firmware config module Sep 12 17:34:32.449513 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 17:34:32.458799 ignition[686]: op(1): [finished] loading QEMU firmware config module Sep 12 17:34:32.466011 systemd-networkd[783]: lo: Link UP Sep 12 17:34:32.466021 systemd-networkd[783]: lo: Gained carrier Sep 12 17:34:32.467624 systemd-networkd[783]: Enumeration completed Sep 12 17:34:32.468040 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:34:32.468044 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:34:32.468178 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:34:32.469083 systemd-networkd[783]: eth0: Link UP Sep 12 17:34:32.469087 systemd-networkd[783]: eth0: Gained carrier Sep 12 17:34:32.469093 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:34:32.472993 systemd[1]: Reached target network.target - Network. Sep 12 17:34:32.482473 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:34:32.512143 ignition[686]: parsing config with SHA512: d9045b368c68c9accb4d966abe2b31bc49b27a9214c3f46033d65c452cf355df0cae30ac58582716259397428a359df24a76f594938112719429b408a8598541 Sep 12 17:34:32.517796 unknown[686]: fetched base config from "system" Sep 12 17:34:32.518475 ignition[686]: fetch-offline: fetch-offline passed Sep 12 17:34:32.517812 unknown[686]: fetched user config from "qemu" Sep 12 17:34:32.518562 ignition[686]: Ignition finished successfully Sep 12 17:34:32.520746 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Sep 12 17:34:32.522332 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 17:34:32.527581 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:34:32.548296 ignition[789]: Ignition 2.19.0 Sep 12 17:34:32.548309 ignition[789]: Stage: kargs Sep 12 17:34:32.548501 ignition[789]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:34:32.548512 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:34:32.549352 ignition[789]: kargs: kargs passed Sep 12 17:34:32.549395 ignition[789]: Ignition finished successfully Sep 12 17:34:32.555713 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:34:32.569580 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:34:32.581035 ignition[797]: Ignition 2.19.0 Sep 12 17:34:32.581048 ignition[797]: Stage: disks Sep 12 17:34:32.581217 ignition[797]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:34:32.581230 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:34:32.582131 ignition[797]: disks: disks passed Sep 12 17:34:32.582174 ignition[797]: Ignition finished successfully Sep 12 17:34:32.587622 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:34:32.589780 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:34:32.590068 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:34:32.592272 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:34:32.592768 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:34:32.593089 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:34:32.608621 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:34:32.621956 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 17:34:32.628358 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:34:32.635621 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:34:32.723469 kernel: EXT4-fs (vda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none. Sep 12 17:34:32.724381 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:34:32.725232 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:34:32.737529 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:34:32.739184 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:34:32.740260 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:34:32.740297 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:34:32.740318 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:34:32.750598 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816) Sep 12 17:34:32.746857 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:34:32.749300 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 12 17:34:32.755230 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:34:32.755248 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:34:32.755267 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:34:32.756455 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:34:32.758674 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:34:32.923619 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:34:32.928344 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:34:32.932171 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:34:32.936558 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:34:33.020564 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:34:33.030629 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:34:33.032330 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:34:33.038466 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:34:33.059717 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:34:33.072696 ignition[929]: INFO : Ignition 2.19.0 Sep 12 17:34:33.072696 ignition[929]: INFO : Stage: mount Sep 12 17:34:33.074278 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:34:33.074278 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:34:33.074278 ignition[929]: INFO : mount: mount passed Sep 12 17:34:33.074278 ignition[929]: INFO : Ignition finished successfully Sep 12 17:34:33.076089 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:34:33.081569 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:34:33.190075 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:34:33.198746 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:34:33.205477 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942) Sep 12 17:34:33.207542 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:34:33.207566 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:34:33.207577 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:34:33.210455 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:34:33.212306 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:34:33.236112 ignition[959]: INFO : Ignition 2.19.0
Sep 12 17:34:33.236112 ignition[959]: INFO : Stage: files
Sep 12 17:34:33.238255 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:34:33.238255 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:34:33.238255 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:34:33.241897 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:34:33.241897 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:34:33.245227 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:34:33.246772 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:34:33.246772 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:34:33.246017 unknown[959]: wrote ssh authorized keys file for user: core
Sep 12 17:34:33.250708 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 17:34:33.250708 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 17:34:33.250708 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:34:33.250708 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 12 17:34:33.291259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 17:34:33.730710 systemd-networkd[783]: eth0: Gained IPv6LL
Sep 12 17:34:33.763720 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:34:33.765794 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:34:33.765794 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:34:33.765794 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:34:33.765794 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:34:33.765794 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:34:33.765794 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:34:33.765794 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:34:33.779221 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:34:33.781799 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:34:33.783967 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:34:33.785930 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:34:33.788509 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:34:33.788509 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:34:33.793111 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 12 17:34:34.124419 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 17:34:35.191466 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:34:35.191466 ignition[959]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 12 17:34:35.195181 ignition[959]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 17:34:35.195181 ignition[959]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 17:34:35.195181 ignition[959]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 12 17:34:35.195181 ignition[959]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 12 17:34:35.195181 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:34:35.203909 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:34:35.203909 ignition[959]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 12 17:34:35.203909 ignition[959]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Sep 12 17:34:35.203909 ignition[959]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:34:35.203909 ignition[959]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:34:35.203909 ignition[959]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Sep 12 17:34:35.203909 ignition[959]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:34:35.272054 ignition[959]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:34:35.277119 ignition[959]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:34:35.280649 ignition[959]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:34:35.280649 ignition[959]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:34:35.280649 ignition[959]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:34:35.280649 ignition[959]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:34:35.280649 ignition[959]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:34:35.280649 ignition[959]: INFO : files: files passed
Sep 12 17:34:35.280649 ignition[959]: INFO : Ignition finished successfully
Sep 12 17:34:35.281656 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:34:35.295574 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:34:35.299024 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:34:35.301754 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:34:35.301864 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:34:35.316571 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 17:34:35.320840 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:34:35.320840 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:34:35.324013 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:34:35.323899 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:34:35.325808 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:34:35.334567 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:34:35.366566 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:34:35.366707 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:34:35.367318 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:34:35.370347 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:34:35.370867 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:34:35.373947 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:34:35.397504 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:34:35.413600 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:34:35.425859 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:34:35.426238 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:34:35.428590 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:34:35.428867 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:34:35.428992 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:34:35.432802 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:34:35.433122 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:34:35.433458 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
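
The files stage above replays a user-provided config: it writes files (the helm tarball, YAML manifests, /etc/flatcar/update.conf), links the kubernetes sysext image into /etc/extensions, installs a containerd drop-in, and flips unit presets. A Butane source that would compile to such an Ignition config looks roughly like this (a hypothetical reconstruction from the log; the variant/version pair and the enabled-only unit entries are assumptions):

    # Hypothetical Butane sketch; compile with: butane config.yaml > config.json
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw
    systemd:
      units:
        - name: coreos-metadata.service
          enabled: false
        - name: prepare-helm.service
          enabled: true
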
Sep 12 17:34:35.433938 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:34:35.434258 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:34:35.434755 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:34:35.435068 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:34:35.435405 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:34:35.435903 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:34:35.436219 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:34:35.436685 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:34:35.436815 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:34:35.454004 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:34:35.454361 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:34:35.454818 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:34:35.454957 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:34:35.460503 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:34:35.460673 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:34:35.464116 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:34:35.464249 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:34:35.466928 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:34:35.468988 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:34:35.474510 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:34:35.477215 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:34:35.477781 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:34:35.478092 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:34:35.478197 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:34:35.481035 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:34:35.481135 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:34:35.482750 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:34:35.482878 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:34:35.484927 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:34:35.485048 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:34:35.499604 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:34:35.499904 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:34:35.500025 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:34:35.502748 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:34:35.506950 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:34:35.508350 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:34:35.511456 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:34:35.511829 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:34:35.517545 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:34:35.518709 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:34:35.521309 ignition[1013]: INFO : Ignition 2.19.0
Sep 12 17:34:35.521309 ignition[1013]: INFO : Stage: umount
Sep 12 17:34:35.521309 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:34:35.521309 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:34:35.527151 ignition[1013]: INFO : umount: umount passed
Sep 12 17:34:35.527151 ignition[1013]: INFO : Ignition finished successfully
Sep 12 17:34:35.523774 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:34:35.523920 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:34:35.525325 systemd[1]: Stopped target network.target - Network.
Sep 12 17:34:35.527158 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:34:35.527212 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:34:35.528991 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:34:35.529041 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:34:35.530843 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:34:35.530889 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:34:35.532899 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:34:35.532949 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:34:35.535327 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:34:35.537252 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:34:35.540525 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:34:35.540776 systemd-networkd[783]: eth0: DHCPv6 lease lost
Sep 12 17:34:35.541024 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:34:35.541141 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:34:35.543900 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:34:35.544030 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:34:35.546756 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:34:35.546831 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:34:35.554639 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:34:35.555753 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:34:35.555818 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:34:35.558130 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:34:35.558179 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:34:35.560387 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:34:35.560456 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:34:35.561610 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:34:35.561657 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:34:35.563829 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:34:35.576302 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:34:35.576450 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:34:35.583199 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:34:35.583383 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:34:35.585547 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:34:35.585611 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:34:35.587589 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:34:35.587632 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:34:35.589513 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:34:35.589574 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:34:35.591688 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:34:35.591740 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:34:35.593819 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:34:35.593865 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:34:35.610731 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:34:35.611888 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:34:35.613093 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:34:35.617037 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:34:35.617103 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:34:35.620827 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:34:35.621988 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:34:35.758921 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:34:35.760032 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:34:35.762652 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:34:35.764851 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:34:35.764923 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:34:35.778586 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:34:35.785181 systemd[1]: Switching root.
Sep 12 17:34:35.822224 systemd-journald[193]: Journal stopped
Sep 12 17:34:37.352126 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:34:37.352391 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:34:37.352416 kernel: SELinux: policy capability open_perms=1
Sep 12 17:34:37.352451 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:34:37.352469 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:34:37.353745 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:34:37.353767 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:34:37.353783 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:34:37.353798 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:34:37.353813 kernel: audit: type=1403 audit(1757698476.569:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:34:37.353837 systemd[1]: Successfully loaded SELinux policy in 52.446ms.
Sep 12 17:34:37.353867 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.977ms.
Sep 12 17:34:37.353885 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:34:37.353902 systemd[1]: Detected virtualization kvm.
Sep 12 17:34:37.353922 systemd[1]: Detected architecture x86-64.
Sep 12 17:34:37.353938 systemd[1]: Detected first boot.
Sep 12 17:34:37.353954 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:34:37.353970 zram_generator::config[1078]: No configuration found.
Sep 12 17:34:37.353987 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:34:37.354003 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:34:37.354019 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 17:34:37.354036 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:34:37.354056 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:34:37.354073 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:34:37.354088 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:34:37.354104 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:34:37.354120 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:34:37.354136 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:34:37.354152 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:34:37.354168 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:34:37.354185 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:34:37.354205 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:34:37.354221 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:34:37.354244 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:34:37.354261 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:34:37.354277 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:34:37.354293 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:34:37.354309 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:34:37.354324 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:34:37.354341 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:34:37.354361 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:34:37.354378 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:34:37.354393 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:34:37.354409 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:34:37.354425 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:34:37.354460 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:34:37.354476 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:34:37.354492 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:34:37.354525 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:34:37.354552 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:34:37.354568 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:34:37.354584 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:34:37.354599 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:34:37.354615 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:34:37.354632 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:34:37.354649 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:34:37.354672 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:34:37.355884 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:34:37.355905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:34:37.355920 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:34:37.355937 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:34:37.355952 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:34:37.355968 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:34:37.355984 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:34:37.356000 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:34:37.356020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:34:37.356036 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:34:37.356053 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 12 17:34:37.356069 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
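
The BPF warning refers to systemd's per-unit IP sandboxing: journald's unit carries an IPAddressDeny= stanza, which needs cgroup/BPF support that this kernel/systemd combination lacks, so the directive is silently ignored. The directive in question looks like this (illustrative excerpt, not copied from this system):

    # In a unit's [Service] section; takes effect only with BPF/cgroup firewalling.
    [Service]
    IPAddressDeny=any
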
Sep 12 17:34:37.356085 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:34:37.356100 kernel: fuse: init (API version 7.39)
Sep 12 17:34:37.356115 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:34:37.356149 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:34:37.356166 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:34:37.356185 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:34:37.356201 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:34:37.356216 kernel: loop: module loaded
Sep 12 17:34:37.356231 kernel: ACPI: bus type drm_connector registered
Sep 12 17:34:37.356247 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:34:37.356263 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:34:37.356278 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:34:37.356322 systemd-journald[1159]: Collecting audit messages is disabled.
Sep 12 17:34:37.356356 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:34:37.356373 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:34:37.356389 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:34:37.356404 systemd-journald[1159]: Journal started
Sep 12 17:34:37.356448 systemd-journald[1159]: Runtime Journal (/run/log/journal/6a0e54f76fe5470ea159f5dd2e783f22) is 6.0M, max 48.4M, 42.3M free.
Sep 12 17:34:37.359863 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:34:37.361276 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:34:37.362920 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:34:37.364600 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:34:37.364814 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:34:37.366410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:34:37.366652 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:34:37.368214 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:34:37.368425 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:34:37.370165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:34:37.370373 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:34:37.372011 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:34:37.372224 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:34:37.373752 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:34:37.373960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:34:37.375822 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:34:37.377481 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:34:37.379286 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
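
All of the modprobe@*.service starts above come from a single template unit that runs modprobe on the instance name. Its shape is roughly as follows (a sketch of the stock systemd template, abbreviated; the comment on the flags is an editorial gloss):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    # -a: resolve all given names, -b: honor blacklists, -q: quiet if absent;
    # the leading "-" tells systemd to ignore a non-zero exit status.
    ExecStart=-/sbin/modprobe -abq %I
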
Sep 12 17:34:37.393927 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:34:37.407516 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:34:37.409994 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:34:37.411298 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:34:37.415348 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:34:37.417670 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:34:37.421540 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:34:37.424386 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:34:37.428554 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:34:37.431390 systemd-journald[1159]: Time spent on flushing to /var/log/journal/6a0e54f76fe5470ea159f5dd2e783f22 is 30.251ms for 940 entries.
Sep 12 17:34:37.431390 systemd-journald[1159]: System Journal (/var/log/journal/6a0e54f76fe5470ea159f5dd2e783f22) is 8.0M, max 195.6M, 187.6M free.
Sep 12 17:34:37.471802 systemd-journald[1159]: Received client request to flush runtime journal.
Sep 12 17:34:37.433500 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:34:37.438294 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:34:37.442990 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:34:37.445842 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:34:37.450470 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:34:37.452351 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:34:37.457021 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:34:37.467650 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:34:37.475065 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:34:37.477164 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:34:37.487520 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 12 17:34:37.488449 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Sep 12 17:34:37.488466 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Sep 12 17:34:37.495583 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:34:37.504645 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:34:37.531475 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:34:37.540594 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:34:37.558648 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Sep 12 17:34:37.558667 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Sep 12 17:34:37.564680 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:34:38.089185 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:34:38.102573 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:34:38.128749 systemd-udevd[1239]: Using default interface naming scheme 'v255'.
Sep 12 17:34:38.144962 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:34:38.152772 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:34:38.166612 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:34:38.185793 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Sep 12 17:34:38.333459 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1256)
Sep 12 17:34:38.335750 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:34:38.437268 systemd-networkd[1243]: lo: Link UP
Sep 12 17:34:38.437644 systemd-networkd[1243]: lo: Gained carrier
Sep 12 17:34:38.439344 systemd-networkd[1243]: Enumeration completed
Sep 12 17:34:38.439838 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:34:38.439895 systemd-networkd[1243]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:34:38.440732 systemd-networkd[1243]: eth0: Link UP
Sep 12 17:34:38.440787 systemd-networkd[1243]: eth0: Gained carrier
Sep 12 17:34:38.440835 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:34:38.444857 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:34:38.449499 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 12 17:34:38.450573 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:34:38.459132 systemd-networkd[1243]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:34:38.463457 kernel: ACPI: button: Power Button [PWRF]
Sep 12 17:34:38.466634 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:34:38.488447 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 12 17:34:38.569408 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 17:34:38.569832 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 12 17:34:38.570029 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 17:34:38.575748 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:34:38.581707 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:34:38.598451 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:34:38.684896 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
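
eth0 matched /usr/lib/systemd/network/zz-default.network and obtained 10.0.0.90/16 over DHCP. Flatcar's catch-all default is essentially a match-everything DHCP policy along these lines (a sketch; the shipped file may carry additional options):

    [Match]
    Name=*

    [Network]
    DHCP=yes
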
Sep 12 17:34:38.694757 kernel: kvm_amd: TSC scaling supported
Sep 12 17:34:38.694841 kernel: kvm_amd: Nested Virtualization enabled
Sep 12 17:34:38.694856 kernel: kvm_amd: Nested Paging enabled
Sep 12 17:34:38.695717 kernel: kvm_amd: LBR virtualization supported
Sep 12 17:34:38.695739 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 12 17:34:38.696688 kernel: kvm_amd: Virtual GIF supported
Sep 12 17:34:38.716453 kernel: EDAC MC: Ver: 3.0.0
Sep 12 17:34:38.748940 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:34:38.762583 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:34:38.771705 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:34:38.801491 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:34:38.802982 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:34:38.814554 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:34:38.819212 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:34:38.856723 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:34:38.858168 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:34:38.859462 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:34:38.859487 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:34:38.860517 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:34:38.862580 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:34:38.876707 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:34:38.879616 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:34:38.880805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:34:38.881937 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:34:38.884288 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:34:38.887512 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:34:38.889775 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:34:38.900333 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:34:38.903303 kernel: loop0: detected capacity change from 0 to 221472
Sep 12 17:34:38.981486 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:34:39.014472 kernel: loop1: detected capacity change from 0 to 140768
Sep 12 17:34:39.096457 kernel: loop2: detected capacity change from 0 to 142488
Sep 12 17:34:39.195462 kernel: loop3: detected capacity change from 0 to 221472
Sep 12 17:34:39.306458 kernel: loop4: detected capacity change from 0 to 140768
Sep 12 17:34:39.318468 kernel: loop5: detected capacity change from 0 to 142488
Sep 12 17:34:39.328179 (sd-merge)[1308]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 17:34:39.328817 (sd-merge)[1308]: Merged extensions into '/usr'.
Sep 12 17:34:39.333506 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:34:39.333531 systemd[1]: Reloading...
Sep 12 17:34:39.393469 zram_generator::config[1337]: No configuration found.
Sep 12 17:34:39.525354 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:34:39.530546 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:34:39.591834 systemd[1]: Reloading finished in 257 ms.
Sep 12 17:34:39.614200 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:34:39.615831 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:34:39.618692 systemd-networkd[1243]: eth0: Gained IPv6LL
Sep 12 17:34:39.630560 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:34:39.632608 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:34:39.634388 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 17:34:39.640191 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:34:39.640205 systemd[1]: Reloading...
Sep 12 17:34:39.725464 zram_generator::config[1410]: No configuration found.
Sep 12 17:34:39.891939 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:34:39.892461 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:34:39.893610 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:34:39.894008 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Sep 12 17:34:39.894090 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Sep 12 17:34:39.897780 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:34:39.897792 systemd-tmpfiles[1383]: Skipping /boot
Sep 12 17:34:39.910201 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:34:39.910215 systemd-tmpfiles[1383]: Skipping /boot
Sep 12 17:34:39.948590 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:34:40.030891 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
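
The loop-device and (sd-merge) lines above are systemd-sysext at work: each extension image (containerd-flatcar, docker-flatcar, and the kubernetes image that Ignition linked into /etc/extensions earlier) is attached as a loop device and overlaid onto /usr, after which PID 1 reloads its unit set. The merge can be inspected or redone by hand with the standard systemd-sysext verbs:

    systemd-sysext status    # list known extensions and whether /usr is merged
    systemd-sysext refresh   # re-merge after adding or removing an image
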
Sep 12 17:34:40.031441 systemd[1]: Reloading finished in 390 ms.
Sep 12 17:34:40.054769 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:34:40.066303 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:34:40.086715 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:34:40.089703 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:34:40.092490 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:34:40.096665 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:34:40.100750 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:34:40.104855 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:34:40.105847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:34:40.115673 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:34:40.121364 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:34:40.124204 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:34:40.125611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:34:40.125720 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:34:40.126606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:34:40.126836 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:34:40.131072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:34:40.131292 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:34:40.135874 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:34:40.136295 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:34:40.144864 augenrules[1490]: No rules
Sep 12 17:34:40.144590 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:34:40.148849 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:34:40.151674 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:34:40.158150 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:34:40.158525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:34:40.165661 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:34:40.170686 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:34:40.172768 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:34:40.178690 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:34:40.181533 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
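
The "Duplicate line for path" notices during the preceding reload come from tmpfiles.d: two config fragments declare the same path, and the later one is ignored. tmpfiles.d lines follow the format "type path mode user group age argument", so a duplicated /root entry would look like this (illustrative, not the actual shipped content):

    d /root 0700 root root -
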
Sep 12 17:34:40.184673 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:34:40.185787 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:34:40.189416 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:34:40.191630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:34:40.191983 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:34:40.193718 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:34:40.193936 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:34:40.195704 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:34:40.195923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:34:40.197667 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:34:40.197961 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:34:40.203051 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:34:40.204363 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:34:40.212661 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:34:40.212745 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:34:40.222585 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 17:34:40.223732 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:34:40.224269 systemd-resolved[1465]: Positive Trust Anchors:
Sep 12 17:34:40.224279 systemd-resolved[1465]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:34:40.224310 systemd-resolved[1465]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:34:40.228902 systemd-resolved[1465]: Defaulting to hostname 'linux'.
Sep 12 17:34:40.230947 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:34:40.232084 systemd[1]: Reached target network.target - Network.
Sep 12 17:34:40.233017 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 17:34:40.234246 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:34:40.310605 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 17:34:40.311705 systemd-timesyncd[1522]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 17:34:40.311768 systemd-timesyncd[1522]: Initial clock synchronization to Fri 2025-09-12 17:34:40.086635 UTC.
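
systemd-timesyncd took its NTP server from the DHCP lease (the gateway, 10.0.0.1) and stepped the clock once. A static equivalent in timesyncd.conf would be (hypothetical values, shown only to make the mechanism concrete):

    [Time]
    NTP=10.0.0.1
    FallbackNTP=pool.ntp.org
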
Sep 12 17:34:40.312337 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:34:40.313709 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:34:40.314947 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:34:40.316221 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:34:40.317512 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:34:40.317551 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:34:40.318573 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:34:40.319982 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:34:40.321175 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:34:40.322455 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:34:40.324424 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:34:40.327989 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:34:40.330299 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:34:40.336824 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:34:40.337969 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:34:40.338957 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:34:40.340112 systemd[1]: System is tainted: cgroupsv1
Sep 12 17:34:40.340161 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:34:40.340193 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:34:40.341844 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:34:40.344067 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 12 17:34:40.346348 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:34:40.350522 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:34:40.353709 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:34:40.354897 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:34:40.356307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:34:40.361053 jq[1529]: false
Sep 12 17:34:40.361274 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:34:40.363414 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 17:34:40.376627 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:34:40.377387 extend-filesystems[1532]: Found loop3
Sep 12 17:34:40.382096 extend-filesystems[1532]: Found loop4
Sep 12 17:34:40.382096 extend-filesystems[1532]: Found loop5
Sep 12 17:34:40.382096 extend-filesystems[1532]: Found sr0
Sep 12 17:34:40.382096 extend-filesystems[1532]: Found vda
Sep 12 17:34:40.382096 extend-filesystems[1532]: Found vda1
Sep 12 17:34:40.382096 extend-filesystems[1532]: Found vda2
Sep 12 17:34:40.382096 extend-filesystems[1532]: Found vda3
Sep 12 17:34:40.382096 extend-filesystems[1532]: Found usr
Sep 12 17:34:40.382096 extend-filesystems[1532]: Found vda4
Sep 12 17:34:40.382096 extend-filesystems[1532]: Found vda6
Sep 12 17:34:40.382096 extend-filesystems[1532]: Found vda7
Sep 12 17:34:40.382096 extend-filesystems[1532]: Found vda9
Sep 12 17:34:40.382096 extend-filesystems[1532]: Checking size of /dev/vda9
Sep 12 17:34:40.382294 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:34:40.388152 dbus-daemon[1528]: [system] SELinux support is enabled
Sep 12 17:34:40.393686 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:34:40.405485 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:34:40.407981 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:34:40.412308 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:34:40.415536 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:34:40.417809 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:34:40.426922 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:34:40.427250 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:34:40.428943 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:34:40.429260 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:34:40.433499 jq[1563]: true
Sep 12 17:34:40.432885 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 17:34:40.441037 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:34:40.441365 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:34:40.446774 extend-filesystems[1532]: Resized partition /dev/vda9
Sep 12 17:34:40.454491 extend-filesystems[1573]: resize2fs 1.47.1 (20-May-2024)
Sep 12 17:34:40.459581 update_engine[1558]: I20250912 17:34:40.457316 1558 main.cc:92] Flatcar Update Engine starting
Sep 12 17:34:40.469900 update_engine[1558]: I20250912 17:34:40.469841 1558 update_check_scheduler.cc:74] Next update check in 3m20s
Sep 12 17:34:40.477998 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 17:34:40.478059 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1256)
Sep 12 17:34:40.478588 systemd-logind[1556]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 12 17:34:40.478791 systemd-logind[1556]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 17:34:40.480794 systemd-logind[1556]: New seat seat0.
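
extend-filesystems enumerated the block devices and grew the root filesystem: the vda9 partition had already been enlarged, so resize2fs performs an on-line resize of the mounted ext4 filesystem from 553472 to 1864699 4k blocks. The manual equivalent, using the device from this log, is simply:

    resize2fs /dev/vda9    # on-line grow; the filesystem stays mounted at /
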
Sep 12 17:34:40.486202 (ntainerd)[1575]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:34:40.488556 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:34:40.492579 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:34:40.492980 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:34:40.514712 jq[1574]: true Sep 12 17:34:40.506654 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 17:34:40.533467 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:34:40.536997 tar[1570]: linux-amd64/helm Sep 12 17:34:40.557529 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:34:40.559890 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:34:40.564865 extend-filesystems[1573]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:34:40.564865 extend-filesystems[1573]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:34:40.564865 extend-filesystems[1573]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:34:40.560090 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:34:40.578666 sshd_keygen[1562]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:34:40.578770 extend-filesystems[1532]: Resized filesystem in /dev/vda9 Sep 12 17:34:40.560206 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:34:40.561800 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:34:40.561932 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:34:40.564012 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:34:40.573365 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:34:40.580302 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:34:40.580904 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:34:40.590350 bash[1608]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:34:40.592642 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:34:40.602126 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 17:34:40.613647 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:34:40.626716 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:34:40.641268 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:34:40.641669 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:34:40.656501 locksmithd[1610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:34:40.658372 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:34:40.699512 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
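
The extend-filesystems run above grows the root ext4 filesystem online from 553472 to 1864699 blocks at 4 KiB per block; the arithmetic behind those figures, using only the numbers reported in the log:

    old_blocks, new_blocks, block_size = 553_472, 1_864_699, 4096

    def to_gib(blocks: int) -> float:
        return blocks * block_size / 2**30

    # roughly 2.11 GiB -> 7.11 GiB after the online resize2fs pass
    print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")
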
Sep 12 17:34:40.712114 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:34:40.718119 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:34:40.719518 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:34:41.073611 containerd[1575]: time="2025-09-12T17:34:41.073245341Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:34:41.114927 containerd[1575]: time="2025-09-12T17:34:41.114599108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:41.117788 containerd[1575]: time="2025-09-12T17:34:41.117752737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:41.119576 containerd[1575]: time="2025-09-12T17:34:41.117838385Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:34:41.119576 containerd[1575]: time="2025-09-12T17:34:41.117865931Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:34:41.119576 containerd[1575]: time="2025-09-12T17:34:41.118099201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:34:41.119576 containerd[1575]: time="2025-09-12T17:34:41.118116387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:41.119576 containerd[1575]: time="2025-09-12T17:34:41.118198967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:41.119576 containerd[1575]: time="2025-09-12T17:34:41.118211284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:41.119576 containerd[1575]: time="2025-09-12T17:34:41.118533445Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:41.119576 containerd[1575]: time="2025-09-12T17:34:41.118551157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:41.119576 containerd[1575]: time="2025-09-12T17:34:41.118565178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:41.119576 containerd[1575]: time="2025-09-12T17:34:41.118575022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:41.119576 containerd[1575]: time="2025-09-12T17:34:41.118717309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:41.119576 containerd[1575]: time="2025-09-12T17:34:41.118985254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Sep 12 17:34:41.119854 containerd[1575]: time="2025-09-12T17:34:41.119192156Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:41.119854 containerd[1575]: time="2025-09-12T17:34:41.119206325Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:34:41.119854 containerd[1575]: time="2025-09-12T17:34:41.119317619Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:34:41.119854 containerd[1575]: time="2025-09-12T17:34:41.119378437Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:34:41.127023 containerd[1575]: time="2025-09-12T17:34:41.126977686Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:34:41.127121 containerd[1575]: time="2025-09-12T17:34:41.127092768Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:34:41.127143 containerd[1575]: time="2025-09-12T17:34:41.127125562Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:34:41.127163 containerd[1575]: time="2025-09-12T17:34:41.127148493Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:34:41.127182 containerd[1575]: time="2025-09-12T17:34:41.127163488Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:34:41.127635 containerd[1575]: time="2025-09-12T17:34:41.127609387Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:34:41.130049 containerd[1575]: time="2025-09-12T17:34:41.128456316Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:34:41.130049 containerd[1575]: time="2025-09-12T17:34:41.128610532Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:34:41.130049 containerd[1575]: time="2025-09-12T17:34:41.128625731Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:34:41.130049 containerd[1575]: time="2025-09-12T17:34:41.128637864Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:34:41.130049 containerd[1575]: time="2025-09-12T17:34:41.128652206Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:34:41.130049 containerd[1575]: time="2025-09-12T17:34:41.128664124Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:34:41.130049 containerd[1575]: time="2025-09-12T17:34:41.128678117Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:34:41.130049 containerd[1575]: time="2025-09-12T17:34:41.128691759Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Sep 12 17:34:41.130049 containerd[1575]: time="2025-09-12T17:34:41.128706423Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:34:41.130049 containerd[1575]: time="2025-09-12T17:34:41.128721885Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:34:41.130049 containerd[1575]: time="2025-09-12T17:34:41.128748048Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:34:41.130049 containerd[1575]: time="2025-09-12T17:34:41.128760346Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:34:41.130049 containerd[1575]: time="2025-09-12T17:34:41.128787844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130049 containerd[1575]: time="2025-09-12T17:34:41.128801592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.128817250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.128830132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.128846168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.128869849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.128881913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.128894299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.128907726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.128922770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.128933939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.128947532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.128959703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.128978145Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.129002098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.129013490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130309 containerd[1575]: time="2025-09-12T17:34:41.129025798Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:34:41.130600 containerd[1575]: time="2025-09-12T17:34:41.129107326Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:34:41.130600 containerd[1575]: time="2025-09-12T17:34:41.129125633Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:34:41.130600 containerd[1575]: time="2025-09-12T17:34:41.129135467Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:34:41.130600 containerd[1575]: time="2025-09-12T17:34:41.129147862Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:34:41.130600 containerd[1575]: time="2025-09-12T17:34:41.129158018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:34:41.130600 containerd[1575]: time="2025-09-12T17:34:41.129173987Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:34:41.130600 containerd[1575]: time="2025-09-12T17:34:41.129186362Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:34:41.130600 containerd[1575]: time="2025-09-12T17:34:41.129197755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 17:34:41.130762 containerd[1575]: time="2025-09-12T17:34:41.129537364Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:34:41.130762 containerd[1575]: time="2025-09-12T17:34:41.129591443Z" level=info msg="Connect containerd service" Sep 12 17:34:41.130762 containerd[1575]: time="2025-09-12T17:34:41.129630917Z" level=info msg="using legacy CRI server" Sep 12 17:34:41.130762 containerd[1575]: time="2025-09-12T17:34:41.129640596Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:34:41.130762 containerd[1575]: time="2025-09-12T17:34:41.129743449Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:34:41.131464 containerd[1575]: time="2025-09-12T17:34:41.131442576Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 
17:34:41.131694 containerd[1575]: time="2025-09-12T17:34:41.131649575Z" level=info msg="Start subscribing containerd event" Sep 12 17:34:41.131740 containerd[1575]: time="2025-09-12T17:34:41.131726216Z" level=info msg="Start recovering state" Sep 12 17:34:41.131834 containerd[1575]: time="2025-09-12T17:34:41.131811698Z" level=info msg="Start event monitor" Sep 12 17:34:41.131834 containerd[1575]: time="2025-09-12T17:34:41.131833041Z" level=info msg="Start snapshots syncer" Sep 12 17:34:41.131887 containerd[1575]: time="2025-09-12T17:34:41.131848883Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:34:41.131887 containerd[1575]: time="2025-09-12T17:34:41.131862457Z" level=info msg="Start streaming server" Sep 12 17:34:41.132099 containerd[1575]: time="2025-09-12T17:34:41.132081122Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:34:41.132208 containerd[1575]: time="2025-09-12T17:34:41.132193439Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:34:41.136561 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:34:41.137350 containerd[1575]: time="2025-09-12T17:34:41.137329903Z" level=info msg="containerd successfully booted in 0.069399s" Sep 12 17:34:41.369771 tar[1570]: linux-amd64/LICENSE Sep 12 17:34:41.369903 tar[1570]: linux-amd64/README.md Sep 12 17:34:41.385614 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:34:42.381185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:42.383237 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:34:42.384806 systemd[1]: Startup finished in 7.732s (kernel) + 5.865s (userspace) = 13.598s. Sep 12 17:34:42.396865 (kubelet)[1662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:34:42.606358 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:34:42.614862 systemd[1]: Started sshd@0-10.0.0.90:22-10.0.0.1:59284.service - OpenSSH per-connection server daemon (10.0.0.1:59284). Sep 12 17:34:42.685283 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 59284 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:42.690127 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:42.704586 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:34:42.710786 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:34:42.713930 systemd-logind[1556]: New session 1 of user core. Sep 12 17:34:42.727891 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:34:42.743906 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:34:42.747622 (systemd)[1677]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:34:42.897369 systemd[1677]: Queued start job for default target default.target. Sep 12 17:34:42.897882 systemd[1677]: Created slice app.slice - User Application Slice. Sep 12 17:34:42.897907 systemd[1677]: Reached target paths.target - Paths. Sep 12 17:34:42.897919 systemd[1677]: Reached target timers.target - Timers. Sep 12 17:34:42.909638 systemd[1677]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:34:42.917738 systemd[1677]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
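
containerd reports serving on /run/containerd/containerd.sock (plus its ttrpc variant). A quick stdlib probe of that Unix socket, assuming sufficient permissions; the real API is gRPC, so this only confirms the daemon is accepting connections, nothing more:

    import socket

    try:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect("/run/containerd/containerd.sock")  # path from the log above
        print("containerd is accepting connections")
    except OSError as e:
        print(f"containerd socket not reachable: {e}")
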
Sep 12 17:34:42.917928 systemd[1677]: Reached target sockets.target - Sockets. Sep 12 17:34:42.918039 systemd[1677]: Reached target basic.target - Basic System. Sep 12 17:34:42.918094 systemd[1677]: Reached target default.target - Main User Target. Sep 12 17:34:42.918136 systemd[1677]: Startup finished in 161ms. Sep 12 17:34:42.918393 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:34:42.921898 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:34:42.936267 kubelet[1662]: E0912 17:34:42.936123 1662 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:34:42.941062 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:34:42.941386 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:34:42.997731 systemd[1]: Started sshd@1-10.0.0.90:22-10.0.0.1:59294.service - OpenSSH per-connection server daemon (10.0.0.1:59294). Sep 12 17:34:43.033772 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 59294 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:43.035583 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:43.041057 systemd-logind[1556]: New session 2 of user core. Sep 12 17:34:43.050719 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:34:43.105349 sshd[1693]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:43.113727 systemd[1]: Started sshd@2-10.0.0.90:22-10.0.0.1:59308.service - OpenSSH per-connection server daemon (10.0.0.1:59308). Sep 12 17:34:43.114206 systemd[1]: sshd@1-10.0.0.90:22-10.0.0.1:59294.service: Deactivated successfully. Sep 12 17:34:43.116932 systemd-logind[1556]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:34:43.117636 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:34:43.118895 systemd-logind[1556]: Removed session 2. Sep 12 17:34:43.148615 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 59308 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:43.150258 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:43.154253 systemd-logind[1556]: New session 3 of user core. Sep 12 17:34:43.163738 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:34:43.219416 sshd[1698]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:43.229657 systemd[1]: Started sshd@3-10.0.0.90:22-10.0.0.1:59324.service - OpenSSH per-connection server daemon (10.0.0.1:59324). Sep 12 17:34:43.230189 systemd[1]: sshd@2-10.0.0.90:22-10.0.0.1:59308.service: Deactivated successfully. Sep 12 17:34:43.232591 systemd-logind[1556]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:34:43.233457 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:34:43.235120 systemd-logind[1556]: Removed session 3. Sep 12 17:34:43.262592 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 59324 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:43.264134 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:43.267905 systemd-logind[1556]: New session 4 of user core. 
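
The kubelet exits with status 1 above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is normally written by kubeadm init/join, so the unit keeps restarting and failing until then. A trivial check mirroring what the error reports (the path is taken from the log; nothing here creates the file):

    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")
    print("kubelet config present" if cfg.is_file()
          else "kubelet config missing -- restarts will keep failing")
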
Sep 12 17:34:43.277675 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:34:43.330539 sshd[1706]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:43.347853 systemd[1]: Started sshd@4-10.0.0.90:22-10.0.0.1:59330.service - OpenSSH per-connection server daemon (10.0.0.1:59330). Sep 12 17:34:43.348394 systemd[1]: sshd@3-10.0.0.90:22-10.0.0.1:59324.service: Deactivated successfully. Sep 12 17:34:43.350581 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:34:43.351355 systemd-logind[1556]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:34:43.352466 systemd-logind[1556]: Removed session 4. Sep 12 17:34:43.380337 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 59330 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:43.382068 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:43.386163 systemd-logind[1556]: New session 5 of user core. Sep 12 17:34:43.395713 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:34:43.452383 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:34:43.452740 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:34:43.472967 sudo[1721]: pam_unix(sudo:session): session closed for user root Sep 12 17:34:43.475210 sshd[1714]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:43.484680 systemd[1]: Started sshd@5-10.0.0.90:22-10.0.0.1:59332.service - OpenSSH per-connection server daemon (10.0.0.1:59332). Sep 12 17:34:43.485155 systemd[1]: sshd@4-10.0.0.90:22-10.0.0.1:59330.service: Deactivated successfully. Sep 12 17:34:43.487793 systemd-logind[1556]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:34:43.488647 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:34:43.490154 systemd-logind[1556]: Removed session 5. Sep 12 17:34:43.517844 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 59332 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:43.519637 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:43.523863 systemd-logind[1556]: New session 6 of user core. Sep 12 17:34:43.534695 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:34:43.587841 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:34:43.588163 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:34:43.591793 sudo[1731]: pam_unix(sudo:session): session closed for user root Sep 12 17:34:43.598191 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:34:43.598562 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:34:43.616664 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:34:43.618594 auditctl[1734]: No rules Sep 12 17:34:43.619844 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:34:43.620183 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:34:43.622194 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:34:43.653550 augenrules[1753]: No rules Sep 12 17:34:43.655380 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Sep 12 17:34:43.656841 sudo[1730]: pam_unix(sudo:session): session closed for user root Sep 12 17:34:43.658748 sshd[1723]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:43.668689 systemd[1]: Started sshd@6-10.0.0.90:22-10.0.0.1:59348.service - OpenSSH per-connection server daemon (10.0.0.1:59348). Sep 12 17:34:43.669261 systemd[1]: sshd@5-10.0.0.90:22-10.0.0.1:59332.service: Deactivated successfully. Sep 12 17:34:43.671701 systemd-logind[1556]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:34:43.672643 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:34:43.674276 systemd-logind[1556]: Removed session 6. Sep 12 17:34:43.704348 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 59348 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:43.705960 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:43.709990 systemd-logind[1556]: New session 7 of user core. Sep 12 17:34:43.719718 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:34:43.771782 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:34:43.772124 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:34:44.546709 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:34:44.548100 (dockerd)[1784]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:34:45.295218 dockerd[1784]: time="2025-09-12T17:34:45.295133708Z" level=info msg="Starting up" Sep 12 17:34:46.046560 dockerd[1784]: time="2025-09-12T17:34:46.046490264Z" level=info msg="Loading containers: start." Sep 12 17:34:46.195462 kernel: Initializing XFRM netlink socket Sep 12 17:34:46.288604 systemd-networkd[1243]: docker0: Link UP Sep 12 17:34:46.311156 dockerd[1784]: time="2025-09-12T17:34:46.310982765Z" level=info msg="Loading containers: done." Sep 12 17:34:46.336692 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4160717037-merged.mount: Deactivated successfully. Sep 12 17:34:46.338811 dockerd[1784]: time="2025-09-12T17:34:46.338741337Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:34:46.338957 dockerd[1784]: time="2025-09-12T17:34:46.338891692Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:34:46.339025 dockerd[1784]: time="2025-09-12T17:34:46.339007705Z" level=info msg="Daemon has completed initialization" Sep 12 17:34:46.378608 dockerd[1784]: time="2025-09-12T17:34:46.378518522Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:34:46.378748 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:34:47.439973 containerd[1575]: time="2025-09-12T17:34:47.439907783Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:34:48.238465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount734821178.mount: Deactivated successfully. 
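
dockerd announces "API listen on /run/docker.sock". The Engine API speaks plain HTTP over that Unix socket and exposes a documented /_ping health endpoint; a stdlib-only sketch (the UnixHTTPConnection helper is an illustration of ours, not part of any Docker SDK):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a Unix socket; no TCP listener is configured here."""
        def __init__(self, sock_path: str):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/_ping")       # health endpoint of the Engine API
    print(conn.getresponse().read())    # b'OK' when the daemon is healthy
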
Sep 12 17:34:49.500875 containerd[1575]: time="2025-09-12T17:34:49.500810602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:49.501509 containerd[1575]: time="2025-09-12T17:34:49.501452287Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 17:34:49.502709 containerd[1575]: time="2025-09-12T17:34:49.502672304Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:49.505756 containerd[1575]: time="2025-09-12T17:34:49.505708133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:49.507043 containerd[1575]: time="2025-09-12T17:34:49.506983118Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.067007901s" Sep 12 17:34:49.507043 containerd[1575]: time="2025-09-12T17:34:49.507028897Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 17:34:49.507719 containerd[1575]: time="2025-09-12T17:34:49.507685683Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:34:51.397724 containerd[1575]: time="2025-09-12T17:34:51.397654135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:51.398495 containerd[1575]: time="2025-09-12T17:34:51.398422400Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 17:34:51.399639 containerd[1575]: time="2025-09-12T17:34:51.399604493Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:51.403321 containerd[1575]: time="2025-09-12T17:34:51.403279417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:51.404368 containerd[1575]: time="2025-09-12T17:34:51.404309397Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.896592881s" Sep 12 17:34:51.404368 containerd[1575]: time="2025-09-12T17:34:51.404359190Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 
17:34:51.405463 containerd[1575]: time="2025-09-12T17:34:51.405410271Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:34:53.191595 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:34:53.200348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:53.487202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:53.493778 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:34:53.693501 kubelet[2011]: E0912 17:34:53.693327 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:34:53.700379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:34:53.700692 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:34:53.799557 containerd[1575]: time="2025-09-12T17:34:53.799360220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:53.799557 containerd[1575]: time="2025-09-12T17:34:53.799492487Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 17:34:53.801632 containerd[1575]: time="2025-09-12T17:34:53.801576306Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:53.805305 containerd[1575]: time="2025-09-12T17:34:53.805251980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:53.806834 containerd[1575]: time="2025-09-12T17:34:53.806799563Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 2.401323189s" Sep 12 17:34:53.806891 containerd[1575]: time="2025-09-12T17:34:53.806836393Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 17:34:53.807449 containerd[1575]: time="2025-09-12T17:34:53.807380280Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:34:55.131765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234346191.mount: Deactivated successfully. 
Sep 12 17:34:56.017107 containerd[1575]: time="2025-09-12T17:34:56.017002046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:56.018076 containerd[1575]: time="2025-09-12T17:34:56.017966297Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 17:34:56.019206 containerd[1575]: time="2025-09-12T17:34:56.019164438Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:56.021678 containerd[1575]: time="2025-09-12T17:34:56.021651767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:56.024759 containerd[1575]: time="2025-09-12T17:34:56.023406376Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.215987901s" Sep 12 17:34:56.024759 containerd[1575]: time="2025-09-12T17:34:56.023479036Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 17:34:56.024988 containerd[1575]: time="2025-09-12T17:34:56.024928394Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:34:56.566865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2229241065.mount: Deactivated successfully. 
Sep 12 17:34:57.602240 containerd[1575]: time="2025-09-12T17:34:57.602181931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:57.603100 containerd[1575]: time="2025-09-12T17:34:57.602999655Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 17:34:57.604418 containerd[1575]: time="2025-09-12T17:34:57.604371705Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:57.608025 containerd[1575]: time="2025-09-12T17:34:57.607980015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:57.609522 containerd[1575]: time="2025-09-12T17:34:57.609472771Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.584471968s" Sep 12 17:34:57.609522 containerd[1575]: time="2025-09-12T17:34:57.609517146Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:34:57.610192 containerd[1575]: time="2025-09-12T17:34:57.610152375Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:34:58.020843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2544520841.mount: Deactivated successfully. 
Sep 12 17:34:58.028187 containerd[1575]: time="2025-09-12T17:34:58.028129794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:58.029089 containerd[1575]: time="2025-09-12T17:34:58.029039093Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:34:58.030245 containerd[1575]: time="2025-09-12T17:34:58.030214230Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:58.032756 containerd[1575]: time="2025-09-12T17:34:58.032729911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:58.033849 containerd[1575]: time="2025-09-12T17:34:58.033798168Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 423.61164ms" Sep 12 17:34:58.033909 containerd[1575]: time="2025-09-12T17:34:58.033848366Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:34:58.034397 containerd[1575]: time="2025-09-12T17:34:58.034377809Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:34:58.484824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362035133.mount: Deactivated successfully. Sep 12 17:35:00.684418 containerd[1575]: time="2025-09-12T17:35:00.684330525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:00.685273 containerd[1575]: time="2025-09-12T17:35:00.685212836Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 17:35:00.686865 containerd[1575]: time="2025-09-12T17:35:00.686813347Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:00.690845 containerd[1575]: time="2025-09-12T17:35:00.690800512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:00.692570 containerd[1575]: time="2025-09-12T17:35:00.692506714Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.658099048s" Sep 12 17:35:00.692621 containerd[1575]: time="2025-09-12T17:35:00.692572022Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 17:35:03.606375 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
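
Each "Pulled image" line above pairs a reported image size with a wall-clock duration, which gives the effective pull throughput; using the etcd figures from the log:

    size_bytes, seconds = 56_909_194, 2.658099048  # etcd 3.5.15-0 pull above
    print(f"{size_bytes / seconds / 2**20:.1f} MiB/s")  # ~20.4 MiB/s
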
Sep 12 17:35:03.622901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:35:03.652301 systemd[1]: Reloading requested from client PID 2169 ('systemctl') (unit session-7.scope)... Sep 12 17:35:03.652318 systemd[1]: Reloading... Sep 12 17:35:03.734795 zram_generator::config[2208]: No configuration found. Sep 12 17:35:04.042111 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:35:04.123121 systemd[1]: Reloading finished in 470 ms. Sep 12 17:35:04.172518 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:35:04.172630 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:35:04.172996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:35:04.183787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:35:04.346126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:35:04.350794 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:35:04.466073 kubelet[2268]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:35:04.466073 kubelet[2268]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:35:04.466073 kubelet[2268]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
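
From here on, every kubelet request to the control plane below fails with "dial tcp 10.0.0.90:6443: connect: connection refused": nothing is listening on the API-server port yet. The failure reduces to a plain TCP connect, reproducible with the stdlib (host and port taken from the log lines):

    import socket

    try:
        socket.create_connection(("10.0.0.90", 6443), timeout=2).close()
        print("kube-apiserver is listening")
    except OSError as e:
        print(f"kube-apiserver unreachable: {e}")
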
Sep 12 17:35:04.466573 kubelet[2268]: I0912 17:35:04.466157 2268 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:35:04.786776 kubelet[2268]: I0912 17:35:04.786616 2268 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:35:04.786776 kubelet[2268]: I0912 17:35:04.786651 2268 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:35:04.786925 kubelet[2268]: I0912 17:35:04.786905 2268 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:35:04.810831 kubelet[2268]: E0912 17:35:04.810767 2268 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:35:04.811929 kubelet[2268]: I0912 17:35:04.811904 2268 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:35:04.819989 kubelet[2268]: E0912 17:35:04.819934 2268 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:35:04.819989 kubelet[2268]: I0912 17:35:04.819984 2268 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:35:04.826979 kubelet[2268]: I0912 17:35:04.826945 2268 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:35:04.828170 kubelet[2268]: I0912 17:35:04.828143 2268 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:35:04.828365 kubelet[2268]: I0912 17:35:04.828323 2268 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:35:04.828578 kubelet[2268]: I0912 17:35:04.828358 2268 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 17:35:04.828761 kubelet[2268]: I0912 17:35:04.828597 2268 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:35:04.828761 kubelet[2268]: I0912 17:35:04.828608 2268 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:35:04.828802 kubelet[2268]: I0912 17:35:04.828770 2268 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:35:04.832386 kubelet[2268]: I0912 17:35:04.832355 2268 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:35:04.832386 kubelet[2268]: I0912 17:35:04.832384 2268 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:35:04.833552 kubelet[2268]: I0912 17:35:04.832476 2268 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:35:04.833552 kubelet[2268]: I0912 17:35:04.832537 2268 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:35:04.834463 kubelet[2268]: W0912 17:35:04.834131 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Sep 12 17:35:04.834463 kubelet[2268]: E0912 17:35:04.834232 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:35:04.835122 kubelet[2268]: W0912 17:35:04.835072 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Sep 12 17:35:04.835171 kubelet[2268]: E0912 17:35:04.835131 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:35:04.836603 kubelet[2268]: I0912 17:35:04.836572 2268 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:35:04.837013 kubelet[2268]: I0912 17:35:04.836986 2268 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:35:04.837074 kubelet[2268]: W0912 17:35:04.837057 2268 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:35:04.839773 kubelet[2268]: I0912 17:35:04.839734 2268 server.go:1274] "Started kubelet" Sep 12 17:35:04.840631 kubelet[2268]: I0912 17:35:04.840031 2268 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:35:04.840631 kubelet[2268]: I0912 17:35:04.840386 2268 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:35:04.840631 kubelet[2268]: I0912 17:35:04.840501 2268 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:35:04.841184 kubelet[2268]: I0912 17:35:04.841163 2268 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:35:04.843475 kubelet[2268]: I0912 17:35:04.841412 2268 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:35:04.843475 kubelet[2268]: I0912 17:35:04.841619 2268 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:35:04.850571 kubelet[2268]: I0912 17:35:04.850548 2268 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:35:04.851195 kubelet[2268]: E0912 17:35:04.850941 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:04.853967 kubelet[2268]: I0912 17:35:04.853225 2268 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:35:04.854547 kubelet[2268]: I0912 17:35:04.854143 2268 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:35:04.854547 kubelet[2268]: I0912 17:35:04.853352 2268 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:35:04.854547 kubelet[2268]: I0912 17:35:04.854307 2268 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:35:04.854547 kubelet[2268]: E0912 17:35:04.851424 2268 event.go:368] "Unable to write event (may retry after 
sleeping)" err="Post \"https://10.0.0.90:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.90:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864997a3b34b805 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:35:04.839706629 +0000 UTC m=+0.432830645,LastTimestamp:2025-09-12 17:35:04.839706629 +0000 UTC m=+0.432830645,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:35:04.854547 kubelet[2268]: W0912 17:35:04.854344 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Sep 12 17:35:04.854547 kubelet[2268]: E0912 17:35:04.854390 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:35:04.854893 kubelet[2268]: E0912 17:35:04.854844 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="200ms" Sep 12 17:35:04.857552 kubelet[2268]: E0912 17:35:04.857516 2268 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:35:04.876262 kubelet[2268]: I0912 17:35:04.876208 2268 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:35:04.889345 kubelet[2268]: I0912 17:35:04.889266 2268 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:35:04.891227 kubelet[2268]: I0912 17:35:04.891181 2268 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:35:04.891227 kubelet[2268]: I0912 17:35:04.891225 2268 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:35:04.891387 kubelet[2268]: I0912 17:35:04.891254 2268 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:35:04.891387 kubelet[2268]: E0912 17:35:04.891304 2268 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:35:04.893177 kubelet[2268]: W0912 17:35:04.893117 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Sep 12 17:35:04.893230 kubelet[2268]: E0912 17:35:04.893191 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:35:04.902568 kubelet[2268]: I0912 17:35:04.902547 2268 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:35:04.902568 kubelet[2268]: I0912 17:35:04.902565 2268 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:35:04.902653 kubelet[2268]: I0912 17:35:04.902586 2268 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:35:04.954136 kubelet[2268]: E0912 17:35:04.954104 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:04.992496 kubelet[2268]: E0912 17:35:04.992407 2268 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:35:05.054899 kubelet[2268]: E0912 17:35:05.054719 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:05.056359 kubelet[2268]: E0912 17:35:05.056292 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="400ms" Sep 12 17:35:05.155688 kubelet[2268]: E0912 17:35:05.155627 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:05.193040 kubelet[2268]: E0912 17:35:05.192935 2268 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:35:05.256422 kubelet[2268]: E0912 17:35:05.256342 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:05.357397 kubelet[2268]: E0912 17:35:05.357345 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:05.457506 kubelet[2268]: E0912 17:35:05.457460 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:05.457664 kubelet[2268]: E0912 17:35:05.457508 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: 
connection refused" interval="800ms" Sep 12 17:35:05.557996 kubelet[2268]: E0912 17:35:05.557948 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:05.593415 kubelet[2268]: E0912 17:35:05.593302 2268 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:35:05.658872 kubelet[2268]: E0912 17:35:05.658727 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:05.683095 kubelet[2268]: I0912 17:35:05.683044 2268 policy_none.go:49] "None policy: Start" Sep 12 17:35:05.684267 kubelet[2268]: I0912 17:35:05.684227 2268 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:35:05.684267 kubelet[2268]: I0912 17:35:05.684264 2268 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:35:05.706365 kubelet[2268]: W0912 17:35:05.706301 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Sep 12 17:35:05.706463 kubelet[2268]: E0912 17:35:05.706384 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:35:05.722447 kubelet[2268]: I0912 17:35:05.719822 2268 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:35:05.722447 kubelet[2268]: I0912 17:35:05.720058 2268 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:35:05.722447 kubelet[2268]: I0912 17:35:05.720071 2268 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:35:05.722447 kubelet[2268]: I0912 17:35:05.720900 2268 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:35:05.728097 kubelet[2268]: E0912 17:35:05.728070 2268 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:35:05.777673 kubelet[2268]: W0912 17:35:05.777628 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Sep 12 17:35:05.777758 kubelet[2268]: E0912 17:35:05.777728 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:35:05.821844 kubelet[2268]: I0912 17:35:05.821786 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:35:05.822332 kubelet[2268]: E0912 17:35:05.822282 2268 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Sep 12 17:35:06.024605 kubelet[2268]: I0912 
17:35:06.024477 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:35:06.024845 kubelet[2268]: E0912 17:35:06.024803 2268 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Sep 12 17:35:06.147927 kubelet[2268]: W0912 17:35:06.147875 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Sep 12 17:35:06.147981 kubelet[2268]: E0912 17:35:06.147948 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:35:06.258836 kubelet[2268]: E0912 17:35:06.258758 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="1.6s" Sep 12 17:35:06.363645 kubelet[2268]: W0912 17:35:06.363547 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Sep 12 17:35:06.363645 kubelet[2268]: E0912 17:35:06.363649 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:35:06.426921 kubelet[2268]: I0912 17:35:06.426877 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:35:06.427338 kubelet[2268]: E0912 17:35:06.427298 2268 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Sep 12 17:35:06.466988 kubelet[2268]: I0912 17:35:06.466922 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d841e15b318572fbdfc416cf77ea9e2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3d841e15b318572fbdfc416cf77ea9e2\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:35:06.466988 kubelet[2268]: I0912 17:35:06.466973 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:06.466988 kubelet[2268]: I0912 17:35:06.466993 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:06.466988 kubelet[2268]: I0912 17:35:06.467009 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:06.467246 kubelet[2268]: I0912 17:35:06.467119 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d841e15b318572fbdfc416cf77ea9e2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3d841e15b318572fbdfc416cf77ea9e2\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:35:06.467246 kubelet[2268]: I0912 17:35:06.467182 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d841e15b318572fbdfc416cf77ea9e2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3d841e15b318572fbdfc416cf77ea9e2\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:35:06.467246 kubelet[2268]: I0912 17:35:06.467202 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:06.467246 kubelet[2268]: I0912 17:35:06.467222 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:06.467246 kubelet[2268]: I0912 17:35:06.467241 2268 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:35:06.701396 kubelet[2268]: E0912 17:35:06.701230 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:06.702121 containerd[1575]: time="2025-09-12T17:35:06.702071384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3d841e15b318572fbdfc416cf77ea9e2,Namespace:kube-system,Attempt:0,}" Sep 12 17:35:06.704409 kubelet[2268]: E0912 17:35:06.704364 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:06.705034 containerd[1575]: time="2025-09-12T17:35:06.704979946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 17:35:06.708204 kubelet[2268]: E0912 
17:35:06.708183 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:06.708581 containerd[1575]: time="2025-09-12T17:35:06.708534786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 12 17:35:06.845544 kubelet[2268]: E0912 17:35:06.845490 2268 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:35:07.229049 kubelet[2268]: I0912 17:35:07.228987 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:35:07.229577 kubelet[2268]: E0912 17:35:07.229540 2268 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Sep 12 17:35:07.860032 kubelet[2268]: E0912 17:35:07.859977 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="3.2s" Sep 12 17:35:08.095638 kubelet[2268]: W0912 17:35:08.095526 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Sep 12 17:35:08.095638 kubelet[2268]: E0912 17:35:08.095623 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:35:08.275473 kubelet[2268]: W0912 17:35:08.275300 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Sep 12 17:35:08.275473 kubelet[2268]: E0912 17:35:08.275382 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:35:08.583914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3742072353.mount: Deactivated successfully. 
Sep 12 17:35:08.590987 containerd[1575]: time="2025-09-12T17:35:08.590935157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:35:08.591944 containerd[1575]: time="2025-09-12T17:35:08.591890769Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:35:08.592909 containerd[1575]: time="2025-09-12T17:35:08.592852459Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:35:08.593889 containerd[1575]: time="2025-09-12T17:35:08.593862401Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:35:08.594980 containerd[1575]: time="2025-09-12T17:35:08.594902097Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:35:08.595828 containerd[1575]: time="2025-09-12T17:35:08.595788722Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:35:08.596845 containerd[1575]: time="2025-09-12T17:35:08.596798145Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:35:08.600991 containerd[1575]: time="2025-09-12T17:35:08.600937636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:35:08.602037 containerd[1575]: time="2025-09-12T17:35:08.602001919Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.893396838s" Sep 12 17:35:08.603732 containerd[1575]: time="2025-09-12T17:35:08.603684920Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.901510332s" Sep 12 17:35:08.606124 containerd[1575]: time="2025-09-12T17:35:08.606072344Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.901002615s" Sep 12 17:35:08.763984 kubelet[2268]: W0912 17:35:08.763896 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Sep 12 17:35:08.764132 
kubelet[2268]: E0912 17:35:08.763993 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:35:08.792497 containerd[1575]: time="2025-09-12T17:35:08.792172583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:08.792497 containerd[1575]: time="2025-09-12T17:35:08.792231629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:08.792497 containerd[1575]: time="2025-09-12T17:35:08.792247296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:08.792497 containerd[1575]: time="2025-09-12T17:35:08.792349020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:08.794155 containerd[1575]: time="2025-09-12T17:35:08.793984648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:08.794155 containerd[1575]: time="2025-09-12T17:35:08.794051082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:08.794155 containerd[1575]: time="2025-09-12T17:35:08.794060782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:08.794524 containerd[1575]: time="2025-09-12T17:35:08.794317227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:08.799458 containerd[1575]: time="2025-09-12T17:35:08.798269861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:08.799669 containerd[1575]: time="2025-09-12T17:35:08.799248149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:08.799669 containerd[1575]: time="2025-09-12T17:35:08.799286161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:08.799669 containerd[1575]: time="2025-09-12T17:35:08.799547942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:08.832461 kubelet[2268]: I0912 17:35:08.832413 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:35:08.832981 kubelet[2268]: E0912 17:35:08.832946 2268 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Sep 12 17:35:08.886682 containerd[1575]: time="2025-09-12T17:35:08.886541344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3d841e15b318572fbdfc416cf77ea9e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d16b82754e69e021cbd101d69e0e64447a7a417035538ffa8766823623a1396\"" Sep 12 17:35:08.888034 kubelet[2268]: E0912 17:35:08.887862 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:08.889918 containerd[1575]: time="2025-09-12T17:35:08.889887895Z" level=info msg="CreateContainer within sandbox \"9d16b82754e69e021cbd101d69e0e64447a7a417035538ffa8766823623a1396\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:35:08.896594 containerd[1575]: time="2025-09-12T17:35:08.896514430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b36a4124bfbec1e21f263e676adb9ea6bf61a68153ba686a61a7b40718521739\"" Sep 12 17:35:08.897338 kubelet[2268]: E0912 17:35:08.897317 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:08.898689 containerd[1575]: time="2025-09-12T17:35:08.898636109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"dff97296a504900c07e791f9c4da3d1d67be3e618408fa352d28f5f009790fb7\"" Sep 12 17:35:08.899970 containerd[1575]: time="2025-09-12T17:35:08.899948077Z" level=info msg="CreateContainer within sandbox \"b36a4124bfbec1e21f263e676adb9ea6bf61a68153ba686a61a7b40718521739\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:35:08.900021 kubelet[2268]: E0912 17:35:08.899987 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:08.902016 containerd[1575]: time="2025-09-12T17:35:08.901983180Z" level=info msg="CreateContainer within sandbox \"dff97296a504900c07e791f9c4da3d1d67be3e618408fa352d28f5f009790fb7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:35:08.903354 kubelet[2268]: W0912 17:35:08.903301 2268 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Sep 12 17:35:08.903459 kubelet[2268]: E0912 17:35:08.903363 2268 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:35:09.515390 containerd[1575]: time="2025-09-12T17:35:09.515302611Z" level=info msg="CreateContainer within sandbox \"b36a4124bfbec1e21f263e676adb9ea6bf61a68153ba686a61a7b40718521739\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"513189b79c1aa7abd3e46b0de63edb84fd41715efc276a05d5c0d78091dcd826\"" Sep 12 17:35:09.516456 containerd[1575]: time="2025-09-12T17:35:09.516406023Z" level=info msg="StartContainer for \"513189b79c1aa7abd3e46b0de63edb84fd41715efc276a05d5c0d78091dcd826\"" Sep 12 17:35:09.519316 containerd[1575]: time="2025-09-12T17:35:09.519267858Z" level=info msg="CreateContainer within sandbox \"9d16b82754e69e021cbd101d69e0e64447a7a417035538ffa8766823623a1396\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"61b291fa608dfa5afed59242d4197b2b31ea8c266270e95ac0fbbefd0cc5a35c\"" Sep 12 17:35:09.519849 containerd[1575]: time="2025-09-12T17:35:09.519812174Z" level=info msg="StartContainer for \"61b291fa608dfa5afed59242d4197b2b31ea8c266270e95ac0fbbefd0cc5a35c\"" Sep 12 17:35:09.524259 containerd[1575]: time="2025-09-12T17:35:09.524176823Z" level=info msg="CreateContainer within sandbox \"dff97296a504900c07e791f9c4da3d1d67be3e618408fa352d28f5f009790fb7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f0052b7c609a01733baea788f1c6415f74531f27db3e50a38a44310b9f86b4aa\"" Sep 12 17:35:09.525274 containerd[1575]: time="2025-09-12T17:35:09.525240126Z" level=info msg="StartContainer for \"f0052b7c609a01733baea788f1c6415f74531f27db3e50a38a44310b9f86b4aa\"" Sep 12 17:35:09.678305 containerd[1575]: time="2025-09-12T17:35:09.676167861Z" level=info msg="StartContainer for \"61b291fa608dfa5afed59242d4197b2b31ea8c266270e95ac0fbbefd0cc5a35c\" returns successfully" Sep 12 17:35:09.678305 containerd[1575]: time="2025-09-12T17:35:09.676319234Z" level=info msg="StartContainer for \"513189b79c1aa7abd3e46b0de63edb84fd41715efc276a05d5c0d78091dcd826\" returns successfully" Sep 12 17:35:09.694675 containerd[1575]: time="2025-09-12T17:35:09.694614948Z" level=info msg="StartContainer for \"f0052b7c609a01733baea788f1c6415f74531f27db3e50a38a44310b9f86b4aa\" returns successfully" Sep 12 17:35:09.919180 kubelet[2268]: E0912 17:35:09.919132 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:09.921710 kubelet[2268]: E0912 17:35:09.921668 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:09.921998 kubelet[2268]: E0912 17:35:09.921971 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:10.928473 kubelet[2268]: E0912 17:35:10.923746 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:10.928473 kubelet[2268]: E0912 17:35:10.923747 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 12 17:35:10.928473 kubelet[2268]: E0912 17:35:10.924322 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:11.068795 kubelet[2268]: E0912 17:35:11.068703 2268 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 17:35:11.526958 kubelet[2268]: E0912 17:35:11.526911 2268 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 12 17:35:11.925167 kubelet[2268]: E0912 17:35:11.925129 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:12.034799 kubelet[2268]: I0912 17:35:12.034760 2268 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:35:12.183315 kubelet[2268]: I0912 17:35:12.183154 2268 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:35:12.183315 kubelet[2268]: E0912 17:35:12.183202 2268 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 17:35:12.207482 kubelet[2268]: E0912 17:35:12.207375 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:12.307982 kubelet[2268]: E0912 17:35:12.307932 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:12.408242 kubelet[2268]: E0912 17:35:12.408170 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:12.508811 kubelet[2268]: E0912 17:35:12.508672 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:12.609235 kubelet[2268]: E0912 17:35:12.609163 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:12.709825 kubelet[2268]: E0912 17:35:12.709767 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:12.810240 kubelet[2268]: E0912 17:35:12.810189 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:12.910729 kubelet[2268]: E0912 17:35:12.910689 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:13.011478 kubelet[2268]: E0912 17:35:13.011418 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:13.111958 kubelet[2268]: E0912 17:35:13.111820 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:13.212306 kubelet[2268]: E0912 17:35:13.212263 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:13.312854 kubelet[2268]: E0912 17:35:13.312800 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:13.413863 kubelet[2268]: E0912 17:35:13.413749 2268 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:13.514266 kubelet[2268]: E0912 17:35:13.514211 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:13.614802 kubelet[2268]: E0912 17:35:13.614734 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:13.715973 kubelet[2268]: E0912 17:35:13.715839 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:13.816834 kubelet[2268]: E0912 17:35:13.816780 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:13.880354 systemd[1]: Reloading requested from client PID 2549 ('systemctl') (unit session-7.scope)... Sep 12 17:35:13.880369 systemd[1]: Reloading... Sep 12 17:35:13.917080 kubelet[2268]: E0912 17:35:13.917033 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:13.966518 zram_generator::config[2591]: No configuration found. Sep 12 17:35:14.018004 kubelet[2268]: E0912 17:35:14.017938 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:14.100631 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:35:14.118607 kubelet[2268]: E0912 17:35:14.118558 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:14.213036 systemd[1]: Reloading finished in 332 ms. Sep 12 17:35:14.219536 kubelet[2268]: E0912 17:35:14.219494 2268 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:35:14.251072 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:35:14.272949 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:35:14.273518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:35:14.283675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:35:14.447219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:35:14.452368 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:35:14.502905 kubelet[2643]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:35:14.503357 kubelet[2643]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:35:14.503357 kubelet[2643]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:35:14.503548 kubelet[2643]: I0912 17:35:14.503402 2643 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:35:14.510593 kubelet[2643]: I0912 17:35:14.510547 2643 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:35:14.510593 kubelet[2643]: I0912 17:35:14.510578 2643 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:35:14.510876 kubelet[2643]: I0912 17:35:14.510847 2643 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:35:14.512098 kubelet[2643]: I0912 17:35:14.512068 2643 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:35:14.514169 kubelet[2643]: I0912 17:35:14.514132 2643 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:35:14.518585 kubelet[2643]: E0912 17:35:14.518538 2643 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:35:14.518585 kubelet[2643]: I0912 17:35:14.518573 2643 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:35:14.523193 kubelet[2643]: I0912 17:35:14.523103 2643 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:35:14.523887 kubelet[2643]: I0912 17:35:14.523869 2643 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:35:14.524038 kubelet[2643]: I0912 17:35:14.524007 2643 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:35:14.524207 kubelet[2643]: I0912 17:35:14.524037 2643 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 17:35:14.524299 kubelet[2643]: I0912 17:35:14.524221 2643 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:35:14.524299 kubelet[2643]: I0912 17:35:14.524231 2643 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:35:14.524299 kubelet[2643]: I0912 17:35:14.524259 2643 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:35:14.524389 kubelet[2643]: I0912 17:35:14.524374 2643 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:35:14.524411 kubelet[2643]: I0912 17:35:14.524392 2643 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:35:14.524462 kubelet[2643]: I0912 17:35:14.524425 2643 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:35:14.524462 kubelet[2643]: I0912 17:35:14.524450 2643 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:35:14.526582 kubelet[2643]: I0912 17:35:14.526544 2643 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:35:14.528473 kubelet[2643]: I0912 17:35:14.526923 2643 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:35:14.528473 kubelet[2643]: I0912 17:35:14.527349 2643 server.go:1274] "Started kubelet" Sep 12 17:35:14.528473 kubelet[2643]: I0912 17:35:14.527631 2643 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:35:14.528473 kubelet[2643]: I0912 17:35:14.527632 2643 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:35:14.528473 kubelet[2643]: I0912 17:35:14.527919 2643 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:35:14.531691 kubelet[2643]: I0912 17:35:14.531663 2643 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:35:14.538676 kubelet[2643]: I0912 17:35:14.538646 2643 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:35:14.539470 kubelet[2643]: I0912 17:35:14.539233 2643 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:35:14.540094 kubelet[2643]: I0912 17:35:14.540072 2643 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:35:14.540369 kubelet[2643]: I0912 17:35:14.540350 2643 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:35:14.540585 kubelet[2643]: I0912 17:35:14.540500 2643 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:35:14.540620 kubelet[2643]: E0912 17:35:14.540595 2643 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:35:14.543271 kubelet[2643]: I0912 17:35:14.542519 2643 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:35:14.543271 kubelet[2643]: I0912 17:35:14.542685 2643 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:35:14.544904 kubelet[2643]: I0912 17:35:14.544457 2643 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:35:14.553951 kubelet[2643]: I0912 17:35:14.553877 2643 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:35:14.555418 kubelet[2643]: I0912 17:35:14.555286 2643 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:35:14.555418 kubelet[2643]: I0912 17:35:14.555309 2643 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:35:14.555418 kubelet[2643]: I0912 17:35:14.555336 2643 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:35:14.555418 kubelet[2643]: E0912 17:35:14.555381 2643 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:35:14.617115 kubelet[2643]: I0912 17:35:14.617078 2643 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:35:14.617115 kubelet[2643]: I0912 17:35:14.617100 2643 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:35:14.617115 kubelet[2643]: I0912 17:35:14.617126 2643 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:35:14.617337 kubelet[2643]: I0912 17:35:14.617301 2643 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:35:14.617337 kubelet[2643]: I0912 17:35:14.617312 2643 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:35:14.617391 kubelet[2643]: I0912 17:35:14.617340 2643 policy_none.go:49] "None policy: Start" Sep 12 17:35:14.618132 kubelet[2643]: I0912 17:35:14.618096 2643 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:35:14.618132 kubelet[2643]: I0912 17:35:14.618127 2643 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:35:14.618316 kubelet[2643]: I0912 17:35:14.618300 2643 state_mem.go:75] "Updated machine memory state" Sep 12 17:35:14.621343 kubelet[2643]: I0912 17:35:14.620006 2643 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:35:14.621343 kubelet[2643]: I0912 17:35:14.620184 2643 eviction_manager.go:189] "Eviction manager: 
starting control loop" Sep 12 17:35:14.621343 kubelet[2643]: I0912 17:35:14.620194 2643 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:35:14.621343 kubelet[2643]: I0912 17:35:14.620702 2643 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:35:14.729132 kubelet[2643]: I0912 17:35:14.729088 2643 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:35:14.741528 kubelet[2643]: I0912 17:35:14.741488 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d841e15b318572fbdfc416cf77ea9e2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3d841e15b318572fbdfc416cf77ea9e2\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:35:14.741528 kubelet[2643]: I0912 17:35:14.741528 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d841e15b318572fbdfc416cf77ea9e2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3d841e15b318572fbdfc416cf77ea9e2\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:35:14.741528 kubelet[2643]: I0912 17:35:14.741546 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:14.741773 kubelet[2643]: I0912 17:35:14.741626 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d841e15b318572fbdfc416cf77ea9e2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3d841e15b318572fbdfc416cf77ea9e2\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:35:14.741773 kubelet[2643]: I0912 17:35:14.741644 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:14.741773 kubelet[2643]: I0912 17:35:14.741660 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:14.741773 kubelet[2643]: I0912 17:35:14.741674 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:14.741773 kubelet[2643]: I0912 17:35:14.741689 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:35:14.741888 kubelet[2643]: I0912 17:35:14.741703 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:35:14.964463 kubelet[2643]: E0912 17:35:14.964398 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:14.964463 kubelet[2643]: E0912 17:35:14.964423 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:14.964675 kubelet[2643]: E0912 17:35:14.964596 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:15.071221 kubelet[2643]: I0912 17:35:15.071158 2643 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 12 17:35:15.071221 kubelet[2643]: I0912 17:35:15.071242 2643 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:35:15.525067 kubelet[2643]: I0912 17:35:15.525030 2643 apiserver.go:52] "Watching apiserver" Sep 12 17:35:15.541460 kubelet[2643]: I0912 17:35:15.540744 2643 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:35:15.558462 kubelet[2643]: I0912 17:35:15.558299 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.558280468 podStartE2EDuration="1.558280468s" podCreationTimestamp="2025-09-12 17:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:15.549605181 +0000 UTC m=+1.092398289" watchObservedRunningTime="2025-09-12 17:35:15.558280468 +0000 UTC m=+1.101073576" Sep 12 17:35:15.565227 kubelet[2643]: I0912 17:35:15.565111 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5650890240000002 podStartE2EDuration="1.565089024s" podCreationTimestamp="2025-09-12 17:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:15.558629943 +0000 UTC m=+1.101423051" watchObservedRunningTime="2025-09-12 17:35:15.565089024 +0000 UTC m=+1.107882132" Sep 12 17:35:15.565349 kubelet[2643]: I0912 17:35:15.565258 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5652523390000002 podStartE2EDuration="1.565252339s" podCreationTimestamp="2025-09-12 17:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:15.565060438 +0000 UTC m=+1.107853546" watchObservedRunningTime="2025-09-12 17:35:15.565252339 +0000 UTC m=+1.108045457" Sep 12 17:35:15.577272 kubelet[2643]: E0912 17:35:15.577231 2643 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:15.577512 kubelet[2643]: E0912 17:35:15.577299 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:15.577895 kubelet[2643]: E0912 17:35:15.577877 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:16.578790 kubelet[2643]: E0912 17:35:16.578749 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:16.579340 kubelet[2643]: E0912 17:35:16.578761 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:19.170693 kubelet[2643]: I0912 17:35:19.170647 2643 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:35:19.171167 containerd[1575]: time="2025-09-12T17:35:19.171018032Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:35:19.171454 kubelet[2643]: I0912 17:35:19.171185 2643 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:35:20.173461 kubelet[2643]: I0912 17:35:20.173395 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd62039b-c6a0-4109-8ea0-ff4724a3b04c-kube-proxy\") pod \"kube-proxy-tkc7r\" (UID: \"bd62039b-c6a0-4109-8ea0-ff4724a3b04c\") " pod="kube-system/kube-proxy-tkc7r" Sep 12 17:35:20.173461 kubelet[2643]: I0912 17:35:20.173449 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd62039b-c6a0-4109-8ea0-ff4724a3b04c-xtables-lock\") pod \"kube-proxy-tkc7r\" (UID: \"bd62039b-c6a0-4109-8ea0-ff4724a3b04c\") " pod="kube-system/kube-proxy-tkc7r" Sep 12 17:35:20.173461 kubelet[2643]: I0912 17:35:20.173469 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd62039b-c6a0-4109-8ea0-ff4724a3b04c-lib-modules\") pod \"kube-proxy-tkc7r\" (UID: \"bd62039b-c6a0-4109-8ea0-ff4724a3b04c\") " pod="kube-system/kube-proxy-tkc7r" Sep 12 17:35:20.174014 kubelet[2643]: I0912 17:35:20.173498 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqzrm\" (UniqueName: \"kubernetes.io/projected/bd62039b-c6a0-4109-8ea0-ff4724a3b04c-kube-api-access-pqzrm\") pod \"kube-proxy-tkc7r\" (UID: \"bd62039b-c6a0-4109-8ea0-ff4724a3b04c\") " pod="kube-system/kube-proxy-tkc7r" Sep 12 17:35:20.448600 kubelet[2643]: E0912 17:35:20.448420 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:20.449235 containerd[1575]: time="2025-09-12T17:35:20.449177478Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-tkc7r,Uid:bd62039b-c6a0-4109-8ea0-ff4724a3b04c,Namespace:kube-system,Attempt:0,}" Sep 12 17:35:20.576186 kubelet[2643]: I0912 17:35:20.576122 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e7190b26-3c9c-41bb-ad05-7edc0460d80b-var-lib-calico\") pod \"tigera-operator-58fc44c59b-h4ktw\" (UID: \"e7190b26-3c9c-41bb-ad05-7edc0460d80b\") " pod="tigera-operator/tigera-operator-58fc44c59b-h4ktw" Sep 12 17:35:20.576186 kubelet[2643]: I0912 17:35:20.576164 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z59q\" (UniqueName: \"kubernetes.io/projected/e7190b26-3c9c-41bb-ad05-7edc0460d80b-kube-api-access-2z59q\") pod \"tigera-operator-58fc44c59b-h4ktw\" (UID: \"e7190b26-3c9c-41bb-ad05-7edc0460d80b\") " pod="tigera-operator/tigera-operator-58fc44c59b-h4ktw" Sep 12 17:35:20.794213 containerd[1575]: time="2025-09-12T17:35:20.793879501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:20.794213 containerd[1575]: time="2025-09-12T17:35:20.793943024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:20.794213 containerd[1575]: time="2025-09-12T17:35:20.793955458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:20.794213 containerd[1575]: time="2025-09-12T17:35:20.794062272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:20.827179 kubelet[2643]: E0912 17:35:20.827140 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:20.846336 containerd[1575]: time="2025-09-12T17:35:20.846289816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tkc7r,Uid:bd62039b-c6a0-4109-8ea0-ff4724a3b04c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f1d3fb24a58ab2ea10a0ccb38e7ef5fba2fbd677fc42d8755f1a454a88784fc\"" Sep 12 17:35:20.847346 kubelet[2643]: E0912 17:35:20.847109 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:20.849063 containerd[1575]: time="2025-09-12T17:35:20.849016951Z" level=info msg="CreateContainer within sandbox \"0f1d3fb24a58ab2ea10a0ccb38e7ef5fba2fbd677fc42d8755f1a454a88784fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:35:20.872841 containerd[1575]: time="2025-09-12T17:35:20.872798751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-h4ktw,Uid:e7190b26-3c9c-41bb-ad05-7edc0460d80b,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:35:21.341192 containerd[1575]: time="2025-09-12T17:35:21.341112537Z" level=info msg="CreateContainer within sandbox \"0f1d3fb24a58ab2ea10a0ccb38e7ef5fba2fbd677fc42d8755f1a454a88784fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f064f4f1767357b9c2996159b33e902b2699bbe1ac38f7e738b78c7e4eea8e81\"" Sep 12 17:35:21.341772 containerd[1575]: time="2025-09-12T17:35:21.341750560Z" level=info msg="StartContainer for 
\"f064f4f1767357b9c2996159b33e902b2699bbe1ac38f7e738b78c7e4eea8e81\"" Sep 12 17:35:21.392653 containerd[1575]: time="2025-09-12T17:35:21.387645562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:21.392653 containerd[1575]: time="2025-09-12T17:35:21.387714152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:21.392653 containerd[1575]: time="2025-09-12T17:35:21.387725585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:21.392653 containerd[1575]: time="2025-09-12T17:35:21.387870042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:21.429085 containerd[1575]: time="2025-09-12T17:35:21.429023973Z" level=info msg="StartContainer for \"f064f4f1767357b9c2996159b33e902b2699bbe1ac38f7e738b78c7e4eea8e81\" returns successfully" Sep 12 17:35:21.451453 containerd[1575]: time="2025-09-12T17:35:21.451396867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-h4ktw,Uid:e7190b26-3c9c-41bb-ad05-7edc0460d80b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7a29a26cc4d89a55baae90c363f7e2a3df1009adb0218a3a4f004b147a596fec\"" Sep 12 17:35:21.453421 containerd[1575]: time="2025-09-12T17:35:21.453369409Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:35:21.592113 kubelet[2643]: E0912 17:35:21.591972 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:21.592113 kubelet[2643]: E0912 17:35:21.592040 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:21.621064 kubelet[2643]: I0912 17:35:21.620984 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tkc7r" podStartSLOduration=1.6209605790000001 podStartE2EDuration="1.620960579s" podCreationTimestamp="2025-09-12 17:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:21.619207608 +0000 UTC m=+7.162000716" watchObservedRunningTime="2025-09-12 17:35:21.620960579 +0000 UTC m=+7.163753687" Sep 12 17:35:23.380117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131538981.mount: Deactivated successfully. 
Sep 12 17:35:23.941378 containerd[1575]: time="2025-09-12T17:35:23.941296412Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:23.942120 containerd[1575]: time="2025-09-12T17:35:23.942043161Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:35:23.943387 containerd[1575]: time="2025-09-12T17:35:23.943347746Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:23.945726 containerd[1575]: time="2025-09-12T17:35:23.945672885Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:23.946258 containerd[1575]: time="2025-09-12T17:35:23.946225211Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.492799065s" Sep 12 17:35:23.946294 containerd[1575]: time="2025-09-12T17:35:23.946264056Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:35:23.948683 containerd[1575]: time="2025-09-12T17:35:23.948649078Z" level=info msg="CreateContainer within sandbox \"7a29a26cc4d89a55baae90c363f7e2a3df1009adb0218a3a4f004b147a596fec\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:35:23.964681 containerd[1575]: time="2025-09-12T17:35:23.964627045Z" level=info msg="CreateContainer within sandbox \"7a29a26cc4d89a55baae90c363f7e2a3df1009adb0218a3a4f004b147a596fec\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2f86a9668b015f7ffafd68d3467c8036e6fcd7e26afb723b529f86fc9b521e76\"" Sep 12 17:35:23.965229 containerd[1575]: time="2025-09-12T17:35:23.965202314Z" level=info msg="StartContainer for \"2f86a9668b015f7ffafd68d3467c8036e6fcd7e26afb723b529f86fc9b521e76\"" Sep 12 17:35:24.032946 containerd[1575]: time="2025-09-12T17:35:24.032875330Z" level=info msg="StartContainer for \"2f86a9668b015f7ffafd68d3467c8036e6fcd7e26afb723b529f86fc9b521e76\" returns successfully" Sep 12 17:35:24.352412 kubelet[2643]: E0912 17:35:24.352308 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:24.597951 kubelet[2643]: E0912 17:35:24.597914 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:25.084697 kubelet[2643]: I0912 17:35:25.084627 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-h4ktw" podStartSLOduration=2.590265508 podStartE2EDuration="5.084605894s" podCreationTimestamp="2025-09-12 17:35:20 +0000 UTC" firstStartedPulling="2025-09-12 17:35:21.45282792 +0000 UTC m=+6.995621028" lastFinishedPulling="2025-09-12 17:35:23.947168306 +0000 UTC m=+9.489961414" 
observedRunningTime="2025-09-12 17:35:25.084453403 +0000 UTC m=+10.627246541" watchObservedRunningTime="2025-09-12 17:35:25.084605894 +0000 UTC m=+10.627399002" Sep 12 17:35:25.781663 kubelet[2643]: E0912 17:35:25.781441 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:25.826718 update_engine[1558]: I20250912 17:35:25.826519 1558 update_attempter.cc:509] Updating boot flags... Sep 12 17:35:26.195601 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2992) Sep 12 17:35:26.289483 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2995) Sep 12 17:35:29.716905 sudo[1766]: pam_unix(sudo:session): session closed for user root Sep 12 17:35:29.731849 sshd[1759]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:29.739009 systemd[1]: sshd@6-10.0.0.90:22-10.0.0.1:59348.service: Deactivated successfully. Sep 12 17:35:29.741728 systemd-logind[1556]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:35:29.745084 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:35:29.747480 systemd-logind[1556]: Removed session 7. Sep 12 17:35:33.372878 kubelet[2643]: I0912 17:35:33.372811 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/528d6ed9-2f66-4ec5-8ecc-7f658b866172-tigera-ca-bundle\") pod \"calico-typha-757c8ccc98-jcwlv\" (UID: \"528d6ed9-2f66-4ec5-8ecc-7f658b866172\") " pod="calico-system/calico-typha-757c8ccc98-jcwlv" Sep 12 17:35:33.373713 kubelet[2643]: I0912 17:35:33.372942 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv7rh\" (UniqueName: \"kubernetes.io/projected/528d6ed9-2f66-4ec5-8ecc-7f658b866172-kube-api-access-vv7rh\") pod \"calico-typha-757c8ccc98-jcwlv\" (UID: \"528d6ed9-2f66-4ec5-8ecc-7f658b866172\") " pod="calico-system/calico-typha-757c8ccc98-jcwlv" Sep 12 17:35:33.373713 kubelet[2643]: I0912 17:35:33.373088 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/528d6ed9-2f66-4ec5-8ecc-7f658b866172-typha-certs\") pod \"calico-typha-757c8ccc98-jcwlv\" (UID: \"528d6ed9-2f66-4ec5-8ecc-7f658b866172\") " pod="calico-system/calico-typha-757c8ccc98-jcwlv" Sep 12 17:35:33.474122 kubelet[2643]: I0912 17:35:33.474060 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bf549a74-29b7-49e1-ad9b-e91e1c5e90d6-cni-log-dir\") pod \"calico-node-x98gm\" (UID: \"bf549a74-29b7-49e1-ad9b-e91e1c5e90d6\") " pod="calico-system/calico-node-x98gm" Sep 12 17:35:33.474122 kubelet[2643]: I0912 17:35:33.474113 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bf549a74-29b7-49e1-ad9b-e91e1c5e90d6-flexvol-driver-host\") pod \"calico-node-x98gm\" (UID: \"bf549a74-29b7-49e1-ad9b-e91e1c5e90d6\") " pod="calico-system/calico-node-x98gm" Sep 12 17:35:33.474122 kubelet[2643]: I0912 17:35:33.474137 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/bf549a74-29b7-49e1-ad9b-e91e1c5e90d6-lib-modules\") pod \"calico-node-x98gm\" (UID: \"bf549a74-29b7-49e1-ad9b-e91e1c5e90d6\") " pod="calico-system/calico-node-x98gm" Sep 12 17:35:33.474345 kubelet[2643]: I0912 17:35:33.474153 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bf549a74-29b7-49e1-ad9b-e91e1c5e90d6-policysync\") pod \"calico-node-x98gm\" (UID: \"bf549a74-29b7-49e1-ad9b-e91e1c5e90d6\") " pod="calico-system/calico-node-x98gm" Sep 12 17:35:33.474345 kubelet[2643]: I0912 17:35:33.474168 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh9gp\" (UniqueName: \"kubernetes.io/projected/bf549a74-29b7-49e1-ad9b-e91e1c5e90d6-kube-api-access-rh9gp\") pod \"calico-node-x98gm\" (UID: \"bf549a74-29b7-49e1-ad9b-e91e1c5e90d6\") " pod="calico-system/calico-node-x98gm" Sep 12 17:35:33.474345 kubelet[2643]: I0912 17:35:33.474184 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bf549a74-29b7-49e1-ad9b-e91e1c5e90d6-cni-bin-dir\") pod \"calico-node-x98gm\" (UID: \"bf549a74-29b7-49e1-ad9b-e91e1c5e90d6\") " pod="calico-system/calico-node-x98gm" Sep 12 17:35:33.474345 kubelet[2643]: I0912 17:35:33.474197 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bf549a74-29b7-49e1-ad9b-e91e1c5e90d6-node-certs\") pod \"calico-node-x98gm\" (UID: \"bf549a74-29b7-49e1-ad9b-e91e1c5e90d6\") " pod="calico-system/calico-node-x98gm" Sep 12 17:35:33.474345 kubelet[2643]: I0912 17:35:33.474210 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bf549a74-29b7-49e1-ad9b-e91e1c5e90d6-var-run-calico\") pod \"calico-node-x98gm\" (UID: \"bf549a74-29b7-49e1-ad9b-e91e1c5e90d6\") " pod="calico-system/calico-node-x98gm" Sep 12 17:35:33.474479 kubelet[2643]: I0912 17:35:33.474225 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bf549a74-29b7-49e1-ad9b-e91e1c5e90d6-cni-net-dir\") pod \"calico-node-x98gm\" (UID: \"bf549a74-29b7-49e1-ad9b-e91e1c5e90d6\") " pod="calico-system/calico-node-x98gm" Sep 12 17:35:33.474479 kubelet[2643]: I0912 17:35:33.474241 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf549a74-29b7-49e1-ad9b-e91e1c5e90d6-tigera-ca-bundle\") pod \"calico-node-x98gm\" (UID: \"bf549a74-29b7-49e1-ad9b-e91e1c5e90d6\") " pod="calico-system/calico-node-x98gm" Sep 12 17:35:33.474479 kubelet[2643]: I0912 17:35:33.474268 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bf549a74-29b7-49e1-ad9b-e91e1c5e90d6-var-lib-calico\") pod \"calico-node-x98gm\" (UID: \"bf549a74-29b7-49e1-ad9b-e91e1c5e90d6\") " pod="calico-system/calico-node-x98gm" Sep 12 17:35:33.474479 kubelet[2643]: I0912 17:35:33.474285 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf549a74-29b7-49e1-ad9b-e91e1c5e90d6-xtables-lock\") pod 
\"calico-node-x98gm\" (UID: \"bf549a74-29b7-49e1-ad9b-e91e1c5e90d6\") " pod="calico-system/calico-node-x98gm" Sep 12 17:35:33.577850 kubelet[2643]: E0912 17:35:33.577809 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.577850 kubelet[2643]: W0912 17:35:33.577835 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.577850 kubelet[2643]: E0912 17:35:33.577858 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.582931 kubelet[2643]: E0912 17:35:33.582833 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.582931 kubelet[2643]: W0912 17:35:33.582861 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.582931 kubelet[2643]: E0912 17:35:33.582884 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.591694 kubelet[2643]: E0912 17:35:33.591658 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.591694 kubelet[2643]: W0912 17:35:33.591700 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.591883 kubelet[2643]: E0912 17:35:33.591729 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:33.612793 kubelet[2643]: E0912 17:35:33.612720 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a" Sep 12 17:35:33.642752 kubelet[2643]: E0912 17:35:33.642558 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:33.643134 containerd[1575]: time="2025-09-12T17:35:33.643098243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-757c8ccc98-jcwlv,Uid:528d6ed9-2f66-4ec5-8ecc-7f658b866172,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:33.660400 kubelet[2643]: E0912 17:35:33.660348 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.660400 kubelet[2643]: W0912 17:35:33.660383 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.660577 kubelet[2643]: E0912 17:35:33.660414 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.661104 kubelet[2643]: E0912 17:35:33.661088 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.661104 kubelet[2643]: W0912 17:35:33.661102 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.661161 kubelet[2643]: E0912 17:35:33.661113 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.661376 kubelet[2643]: E0912 17:35:33.661356 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.661376 kubelet[2643]: W0912 17:35:33.661367 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.661376 kubelet[2643]: E0912 17:35:33.661376 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:33.661617 kubelet[2643]: E0912 17:35:33.661594 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.661617 kubelet[2643]: W0912 17:35:33.661608 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.661617 kubelet[2643]: E0912 17:35:33.661617 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.661837 kubelet[2643]: E0912 17:35:33.661820 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.661837 kubelet[2643]: W0912 17:35:33.661833 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.661899 kubelet[2643]: E0912 17:35:33.661842 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.662036 kubelet[2643]: E0912 17:35:33.662022 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.662036 kubelet[2643]: W0912 17:35:33.662034 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.662088 kubelet[2643]: E0912 17:35:33.662044 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.662223 kubelet[2643]: E0912 17:35:33.662210 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.662223 kubelet[2643]: W0912 17:35:33.662220 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.662274 kubelet[2643]: E0912 17:35:33.662229 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.662400 kubelet[2643]: E0912 17:35:33.662387 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.662400 kubelet[2643]: W0912 17:35:33.662398 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.662471 kubelet[2643]: E0912 17:35:33.662407 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:33.662605 kubelet[2643]: E0912 17:35:33.662581 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.662605 kubelet[2643]: W0912 17:35:33.662602 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.662656 kubelet[2643]: E0912 17:35:33.662609 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.662828 kubelet[2643]: E0912 17:35:33.662811 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.662828 kubelet[2643]: W0912 17:35:33.662824 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.662883 kubelet[2643]: E0912 17:35:33.662838 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.663070 kubelet[2643]: E0912 17:35:33.663053 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.663097 kubelet[2643]: W0912 17:35:33.663068 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.663097 kubelet[2643]: E0912 17:35:33.663079 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.663326 kubelet[2643]: E0912 17:35:33.663310 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.663326 kubelet[2643]: W0912 17:35:33.663324 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.663376 kubelet[2643]: E0912 17:35:33.663335 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.663615 kubelet[2643]: E0912 17:35:33.663600 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.663615 kubelet[2643]: W0912 17:35:33.663613 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.663681 kubelet[2643]: E0912 17:35:33.663623 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:33.663824 kubelet[2643]: E0912 17:35:33.663810 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.663824 kubelet[2643]: W0912 17:35:33.663822 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.663873 kubelet[2643]: E0912 17:35:33.663831 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.664036 kubelet[2643]: E0912 17:35:33.664020 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.664036 kubelet[2643]: W0912 17:35:33.664032 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.664095 kubelet[2643]: E0912 17:35:33.664041 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.664218 kubelet[2643]: E0912 17:35:33.664204 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.664218 kubelet[2643]: W0912 17:35:33.664215 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.664261 kubelet[2643]: E0912 17:35:33.664223 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.664401 kubelet[2643]: E0912 17:35:33.664383 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.664401 kubelet[2643]: W0912 17:35:33.664393 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.664401 kubelet[2643]: E0912 17:35:33.664402 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.664618 kubelet[2643]: E0912 17:35:33.664603 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.664618 kubelet[2643]: W0912 17:35:33.664614 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.664669 kubelet[2643]: E0912 17:35:33.664622 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:33.664792 kubelet[2643]: E0912 17:35:33.664778 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.664792 kubelet[2643]: W0912 17:35:33.664788 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.664840 kubelet[2643]: E0912 17:35:33.664796 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.665004 kubelet[2643]: E0912 17:35:33.664989 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.665004 kubelet[2643]: W0912 17:35:33.664999 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.665061 kubelet[2643]: E0912 17:35:33.665008 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.676750 kubelet[2643]: E0912 17:35:33.676714 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.676750 kubelet[2643]: W0912 17:35:33.676740 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.676946 kubelet[2643]: E0912 17:35:33.676765 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.676946 kubelet[2643]: I0912 17:35:33.676795 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0afbe5bf-287d-4d57-b5ad-630766b8207a-socket-dir\") pod \"csi-node-driver-hwr5d\" (UID: \"0afbe5bf-287d-4d57-b5ad-630766b8207a\") " pod="calico-system/csi-node-driver-hwr5d" Sep 12 17:35:33.677065 kubelet[2643]: E0912 17:35:33.677051 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.677065 kubelet[2643]: W0912 17:35:33.677063 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.677138 kubelet[2643]: E0912 17:35:33.677077 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:33.677138 kubelet[2643]: I0912 17:35:33.677093 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcxp4\" (UniqueName: \"kubernetes.io/projected/0afbe5bf-287d-4d57-b5ad-630766b8207a-kube-api-access-zcxp4\") pod \"csi-node-driver-hwr5d\" (UID: \"0afbe5bf-287d-4d57-b5ad-630766b8207a\") " pod="calico-system/csi-node-driver-hwr5d" Sep 12 17:35:33.677333 kubelet[2643]: E0912 17:35:33.677318 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.677333 kubelet[2643]: W0912 17:35:33.677331 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.677450 kubelet[2643]: E0912 17:35:33.677347 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.677450 kubelet[2643]: I0912 17:35:33.677362 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0afbe5bf-287d-4d57-b5ad-630766b8207a-kubelet-dir\") pod \"csi-node-driver-hwr5d\" (UID: \"0afbe5bf-287d-4d57-b5ad-630766b8207a\") " pod="calico-system/csi-node-driver-hwr5d" Sep 12 17:35:33.677811 kubelet[2643]: E0912 17:35:33.677797 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.677811 kubelet[2643]: W0912 17:35:33.677808 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.677878 kubelet[2643]: E0912 17:35:33.677822 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.678135 kubelet[2643]: E0912 17:35:33.678048 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.678135 kubelet[2643]: W0912 17:35:33.678062 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.678135 kubelet[2643]: E0912 17:35:33.678082 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.678497 kubelet[2643]: E0912 17:35:33.678343 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.678497 kubelet[2643]: W0912 17:35:33.678352 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.678497 kubelet[2643]: E0912 17:35:33.678367 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:33.678624 kubelet[2643]: E0912 17:35:33.678595 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.678624 kubelet[2643]: W0912 17:35:33.678605 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.678691 kubelet[2643]: E0912 17:35:33.678665 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.678901 kubelet[2643]: E0912 17:35:33.678868 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.678962 kubelet[2643]: W0912 17:35:33.678898 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.678962 kubelet[2643]: E0912 17:35:33.678952 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.679048 kubelet[2643]: I0912 17:35:33.678972 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0afbe5bf-287d-4d57-b5ad-630766b8207a-registration-dir\") pod \"csi-node-driver-hwr5d\" (UID: \"0afbe5bf-287d-4d57-b5ad-630766b8207a\") " pod="calico-system/csi-node-driver-hwr5d" Sep 12 17:35:33.679219 kubelet[2643]: E0912 17:35:33.679200 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.679219 kubelet[2643]: W0912 17:35:33.679214 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.679291 kubelet[2643]: E0912 17:35:33.679237 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.679503 kubelet[2643]: E0912 17:35:33.679488 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.679503 kubelet[2643]: W0912 17:35:33.679500 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.679569 kubelet[2643]: E0912 17:35:33.679517 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:33.679765 kubelet[2643]: E0912 17:35:33.679732 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.679765 kubelet[2643]: W0912 17:35:33.679744 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.679765 kubelet[2643]: E0912 17:35:33.679754 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.679765 kubelet[2643]: I0912 17:35:33.679773 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0afbe5bf-287d-4d57-b5ad-630766b8207a-varrun\") pod \"csi-node-driver-hwr5d\" (UID: \"0afbe5bf-287d-4d57-b5ad-630766b8207a\") " pod="calico-system/csi-node-driver-hwr5d" Sep 12 17:35:33.680015 kubelet[2643]: E0912 17:35:33.679981 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.680015 kubelet[2643]: W0912 17:35:33.680008 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.680068 kubelet[2643]: E0912 17:35:33.680020 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.680231 kubelet[2643]: E0912 17:35:33.680208 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.680231 kubelet[2643]: W0912 17:35:33.680221 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.680356 kubelet[2643]: E0912 17:35:33.680266 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.680490 kubelet[2643]: E0912 17:35:33.680472 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.680490 kubelet[2643]: W0912 17:35:33.680485 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.680598 kubelet[2643]: E0912 17:35:33.680496 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:33.680709 kubelet[2643]: E0912 17:35:33.680693 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.680709 kubelet[2643]: W0912 17:35:33.680705 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.680780 kubelet[2643]: E0912 17:35:33.680716 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.724829 containerd[1575]: time="2025-09-12T17:35:33.724773296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x98gm,Uid:bf549a74-29b7-49e1-ad9b-e91e1c5e90d6,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:33.781468 kubelet[2643]: E0912 17:35:33.781406 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.781468 kubelet[2643]: W0912 17:35:33.781455 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.781468 kubelet[2643]: E0912 17:35:33.781484 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.781918 kubelet[2643]: E0912 17:35:33.781766 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.781918 kubelet[2643]: W0912 17:35:33.781779 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.781918 kubelet[2643]: E0912 17:35:33.781790 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.782423 kubelet[2643]: E0912 17:35:33.782364 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.782423 kubelet[2643]: W0912 17:35:33.782395 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.782423 kubelet[2643]: E0912 17:35:33.782455 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:33.782816 kubelet[2643]: E0912 17:35:33.782768 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.782816 kubelet[2643]: W0912 17:35:33.782801 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.782908 kubelet[2643]: E0912 17:35:33.782823 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.783467 kubelet[2643]: E0912 17:35:33.783418 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.783724 kubelet[2643]: W0912 17:35:33.783568 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.783724 kubelet[2643]: E0912 17:35:33.783711 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.784017 kubelet[2643]: E0912 17:35:33.783997 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.784017 kubelet[2643]: W0912 17:35:33.784013 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.784116 kubelet[2643]: E0912 17:35:33.784068 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.784310 kubelet[2643]: E0912 17:35:33.784288 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.784310 kubelet[2643]: W0912 17:35:33.784302 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.784442 kubelet[2643]: E0912 17:35:33.784352 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:33.784669 kubelet[2643]: E0912 17:35:33.784647 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:33.784669 kubelet[2643]: W0912 17:35:33.784664 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:33.784772 kubelet[2643]: E0912 17:35:33.784689 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:35:33.785030 kubelet[2643]: E0912 17:35:33.784999 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:35:33.785030 kubelet[2643]: W0912 17:35:33.785013 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:35:33.785030 kubelet[2643]: E0912 17:35:33.785030 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:35:33.829131 kubelet[2643]: E0912 17:35:33.829083 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:35:33.829131 kubelet[2643]: W0912 17:35:33.829114 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:35:33.829131 kubelet[2643]: E0912 17:35:33.829138 2643 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:35:33.853624 containerd[1575]: time="2025-09-12T17:35:33.853451340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:35:33.853624 containerd[1575]: time="2025-09-12T17:35:33.853567510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:35:33.853624 containerd[1575]: time="2025-09-12T17:35:33.853594511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:35:33.854479 containerd[1575]: time="2025-09-12T17:35:33.853718046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:35:33.868539 containerd[1575]: time="2025-09-12T17:35:33.868382174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:35:33.869179 containerd[1575]: time="2025-09-12T17:35:33.869123622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:35:33.869179 containerd[1575]: time="2025-09-12T17:35:33.869142207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:33.869338 containerd[1575]: time="2025-09-12T17:35:33.869240013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:33.919870 containerd[1575]: time="2025-09-12T17:35:33.919739601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x98gm,Uid:bf549a74-29b7-49e1-ad9b-e91e1c5e90d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"d428456fdddd3008923b8dd40e292b0e5e47253eeaaed9c696829758bb2509e8\"" Sep 12 17:35:33.934123 containerd[1575]: time="2025-09-12T17:35:33.933471099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:35:33.939711 containerd[1575]: time="2025-09-12T17:35:33.939677943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-757c8ccc98-jcwlv,Uid:528d6ed9-2f66-4ec5-8ecc-7f658b866172,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf854c44da1868c0a5207143a6061e912446b6c91d8a71cd53e65efa90510094\"" Sep 12 17:35:33.940460 kubelet[2643]: E0912 17:35:33.940441 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:35.556447 kubelet[2643]: E0912 17:35:35.556354 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a" Sep 12 17:35:35.976266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1354919238.mount: Deactivated successfully. 
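
The driver-call.go/plugins.go failures above are the kubelet probing the FlexVolume plugin directory nodeagent~uds: the uds binary is not on $PATH, so the init call produces no output, and decoding an empty string as JSON fails with exactly the logged error. A minimal sketch of that decode step (the driverStatus shape here is illustrative, not the kubelet's exact type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus approximates what a FlexVolume driver must print on
// stdout for the "init" call: a JSON object carrying at least a
// status field. The field set is illustrative, not exhaustive.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st driverStatus

	// The missing executable yields empty output; decoding "" fails
	// with the same error the kubelet logs above.
	fmt.Println(json.Unmarshal([]byte(""), &st)) // unexpected end of JSON input

	// A well-formed init response decodes cleanly.
	ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
	fmt.Println(json.Unmarshal(ok, &st), st.Status) // <nil> Success
}
```

The probe is retried periodically, which is why the same three-entry cycle recurs; with nothing on this node actually mounting a nodeagent~uds volume, the noise appears benign.
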
Sep 12 17:35:36.243345 containerd[1575]: time="2025-09-12T17:35:36.243191920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:36.244110 containerd[1575]: time="2025-09-12T17:35:36.244066117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 12 17:35:36.245234 containerd[1575]: time="2025-09-12T17:35:36.245200167Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:36.247678 containerd[1575]: time="2025-09-12T17:35:36.247651223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:36.248308 containerd[1575]: time="2025-09-12T17:35:36.248258234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.314548601s" Sep 12 17:35:36.248397 containerd[1575]: time="2025-09-12T17:35:36.248311114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:35:36.250202 containerd[1575]: time="2025-09-12T17:35:36.250173504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:35:36.252853 containerd[1575]: time="2025-09-12T17:35:36.252789372Z" level=info msg="CreateContainer within sandbox \"d428456fdddd3008923b8dd40e292b0e5e47253eeaaed9c696829758bb2509e8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:35:36.271339 containerd[1575]: time="2025-09-12T17:35:36.271284471Z" level=info msg="CreateContainer within sandbox \"d428456fdddd3008923b8dd40e292b0e5e47253eeaaed9c696829758bb2509e8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7a9500ab2c39ce43caab0a326d0ce5e94f5039eff7c8744c10526f0992b8193a\"" Sep 12 17:35:36.272229 containerd[1575]: time="2025-09-12T17:35:36.272193264Z" level=info msg="StartContainer for \"7a9500ab2c39ce43caab0a326d0ce5e94f5039eff7c8744c10526f0992b8193a\"" Sep 12 17:35:36.383903 containerd[1575]: time="2025-09-12T17:35:36.383845689Z" level=info msg="StartContainer for \"7a9500ab2c39ce43caab0a326d0ce5e94f5039eff7c8744c10526f0992b8193a\" returns successfully" Sep 12 17:35:36.421763 containerd[1575]: time="2025-09-12T17:35:36.421664778Z" level=info msg="shim disconnected" id=7a9500ab2c39ce43caab0a326d0ce5e94f5039eff7c8744c10526f0992b8193a namespace=k8s.io Sep 12 17:35:36.421763 containerd[1575]: time="2025-09-12T17:35:36.421757594Z" level=warning msg="cleaning up after shim disconnected" id=7a9500ab2c39ce43caab0a326d0ce5e94f5039eff7c8744c10526f0992b8193a namespace=k8s.io Sep 12 17:35:36.421763 containerd[1575]: time="2025-09-12T17:35:36.421769346Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:35:36.955513 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-7a9500ab2c39ce43caab0a326d0ce5e94f5039eff7c8744c10526f0992b8193a-rootfs.mount: Deactivated successfully.
Sep 12 17:35:37.557106 kubelet[2643]: E0912 17:35:37.557029 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a"
Sep 12 17:35:39.557183 kubelet[2643]: E0912 17:35:39.557129 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a"
Sep 12 17:35:40.823058 containerd[1575]: time="2025-09-12T17:35:40.822961626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:40.877807 containerd[1575]: time="2025-09-12T17:35:40.877590104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548"
Sep 12 17:35:40.936887 containerd[1575]: time="2025-09-12T17:35:40.936812967Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:41.444689 containerd[1575]: time="2025-09-12T17:35:41.444562489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:41.446649 containerd[1575]: time="2025-09-12T17:35:41.445834646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 5.195620504s"
Sep 12 17:35:41.446649 containerd[1575]: time="2025-09-12T17:35:41.445885672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 12 17:35:41.449216 containerd[1575]: time="2025-09-12T17:35:41.448714764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 17:35:41.482752 containerd[1575]: time="2025-09-12T17:35:41.482697106Z" level=info msg="CreateContainer within sandbox \"cf854c44da1868c0a5207143a6061e912446b6c91d8a71cd53e65efa90510094\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 12 17:35:41.556658 kubelet[2643]: E0912 17:35:41.556567 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a"
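
The pull records above pair bytes read with wall-clock time, so effective registry throughput falls out directly: typha moved ~33.7 MB in ~5.2 s, and pod2daemon-flexvol earlier moved ~5.9 MB in ~2.3 s. A back-of-envelope check reusing the figures from the containerd entries:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken from the containerd pull records in this log.
	typhaBytes := 33744548.0
	typhaDur, _ := time.ParseDuration("5.195620504s")
	flexvolBytes := 5939501.0
	flexvolDur, _ := time.ParseDuration("2.314548601s")

	// Effective pull rates: ~6.49 MB/s and ~2.57 MB/s respectively.
	fmt.Printf("typha:   %.2f MB/s\n", typhaBytes/typhaDur.Seconds()/1e6)
	fmt.Printf("flexvol: %.2f MB/s\n", flexvolBytes/flexvolDur.Seconds()/1e6)
}
```

The earlier "shim disconnected" / "cleaning up dead shim" sequence after the flexvol-driver StartContainer is consistent with that container simply running to completion rather than crashing.
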
Sep 12 17:35:42.627557 containerd[1575]: time="2025-09-12T17:35:42.627483933Z" level=info msg="CreateContainer within sandbox \"cf854c44da1868c0a5207143a6061e912446b6c91d8a71cd53e65efa90510094\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a9986990c3adc06cf1f380227390e5cc9c6d3404c8f2e8c86bb8a318bf73deda\""
Sep 12 17:35:42.628154 containerd[1575]: time="2025-09-12T17:35:42.628103715Z" level=info msg="StartContainer for \"a9986990c3adc06cf1f380227390e5cc9c6d3404c8f2e8c86bb8a318bf73deda\""
Sep 12 17:35:42.657807 systemd[1]: run-containerd-runc-k8s.io-a9986990c3adc06cf1f380227390e5cc9c6d3404c8f2e8c86bb8a318bf73deda-runc.DGxBnr.mount: Deactivated successfully.
Sep 12 17:35:42.787598 containerd[1575]: time="2025-09-12T17:35:42.787371051Z" level=info msg="StartContainer for \"a9986990c3adc06cf1f380227390e5cc9c6d3404c8f2e8c86bb8a318bf73deda\" returns successfully"
Sep 12 17:35:43.555808 kubelet[2643]: E0912 17:35:43.555744 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a"
Sep 12 17:35:43.655088 kubelet[2643]: E0912 17:35:43.654343 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:35:44.655670 kubelet[2643]: I0912 17:35:44.655620 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:35:44.656179 kubelet[2643]: E0912 17:35:44.656059 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:35:45.556071 kubelet[2643]: E0912 17:35:45.556014 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a"
Sep 12 17:35:47.556358 kubelet[2643]: E0912 17:35:47.556260 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a"
Sep 12 17:35:49.556734 kubelet[2643]: E0912 17:35:49.556649 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a"
Sep 12 17:35:51.555952 kubelet[2643]: E0912 17:35:51.555877 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a"
Sep 12 17:35:51.615820 containerd[1575]: time="2025-09-12T17:35:51.615757818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:51.749624 kubelet[2643]: I0912 17:35:51.749562 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
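
The recurring dns.go:153 warnings mean the node's resolv.conf lists more nameservers than the three the kubelet will pass through to pods, so the list is clamped and only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied. A sketch of that clamp (deliberately simplified parsing, not the kubelet's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic resolv.conf limit of three
// entries that the kubelet enforces when building pod DNS config.
const maxNameservers = 3

func clampNameservers(resolvConf string) []string {
	var ns []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > maxNameservers {
		ns = ns[:maxNameservers] // extras are dropped, as the log warns
	}
	return ns
}

func main() {
	// Hypothetical resolv.conf with one server too many:
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(clampNameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```
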
manual run" probe="Readiness" Sep 12 17:35:51.750099 kubelet[2643]: E0912 17:35:51.750013 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:51.785858 containerd[1575]: time="2025-09-12T17:35:51.785750865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:35:51.950009 containerd[1575]: time="2025-09-12T17:35:51.949918056Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:52.013027 containerd[1575]: time="2025-09-12T17:35:52.012967511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:52.014083 containerd[1575]: time="2025-09-12T17:35:52.014026199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 10.565249458s" Sep 12 17:35:52.014163 containerd[1575]: time="2025-09-12T17:35:52.014091543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:35:52.016089 containerd[1575]: time="2025-09-12T17:35:52.016053865Z" level=info msg="CreateContainer within sandbox \"d428456fdddd3008923b8dd40e292b0e5e47253eeaaed9c696829758bb2509e8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:35:52.046850 kubelet[2643]: I0912 17:35:52.045726 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-757c8ccc98-jcwlv" podStartSLOduration=11.538996392 podStartE2EDuration="19.045702554s" podCreationTimestamp="2025-09-12 17:35:33 +0000 UTC" firstStartedPulling="2025-09-12 17:35:33.941551859 +0000 UTC m=+19.484344977" lastFinishedPulling="2025-09-12 17:35:41.448258031 +0000 UTC m=+26.991051139" observedRunningTime="2025-09-12 17:35:44.423477273 +0000 UTC m=+29.966270401" watchObservedRunningTime="2025-09-12 17:35:52.045702554 +0000 UTC m=+37.588495662" Sep 12 17:35:52.670220 kubelet[2643]: E0912 17:35:52.670180 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:52.911809 containerd[1575]: time="2025-09-12T17:35:52.911661619Z" level=info msg="CreateContainer within sandbox \"d428456fdddd3008923b8dd40e292b0e5e47253eeaaed9c696829758bb2509e8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4e44939cc61641f6b792bd8c3c6a9c9021f17f718b201bfcaa00c062c406e959\"" Sep 12 17:35:52.912415 containerd[1575]: time="2025-09-12T17:35:52.912363133Z" level=info msg="StartContainer for \"4e44939cc61641f6b792bd8c3c6a9c9021f17f718b201bfcaa00c062c406e959\"" Sep 12 17:35:53.105954 containerd[1575]: time="2025-09-12T17:35:53.105878232Z" level=info msg="StartContainer for \"4e44939cc61641f6b792bd8c3c6a9c9021f17f718b201bfcaa00c062c406e959\" returns successfully" Sep 12 17:35:53.556229 
Sep 12 17:35:53.556229 kubelet[2643]: E0912 17:35:53.556047 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a"
Sep 12 17:35:55.556099 kubelet[2643]: E0912 17:35:55.556046 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a"
Sep 12 17:35:57.556847 kubelet[2643]: E0912 17:35:57.556716 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a"
Sep 12 17:35:58.120827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e44939cc61641f6b792bd8c3c6a9c9021f17f718b201bfcaa00c062c406e959-rootfs.mount: Deactivated successfully.
Sep 12 17:35:58.179637 kubelet[2643]: I0912 17:35:58.179584 2643 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 12 17:35:58.452080 containerd[1575]: time="2025-09-12T17:35:58.451865766Z" level=info msg="shim disconnected" id=4e44939cc61641f6b792bd8c3c6a9c9021f17f718b201bfcaa00c062c406e959 namespace=k8s.io
Sep 12 17:35:58.452080 containerd[1575]: time="2025-09-12T17:35:58.451980873Z" level=warning msg="cleaning up after shim disconnected" id=4e44939cc61641f6b792bd8c3c6a9c9021f17f718b201bfcaa00c062c406e959 namespace=k8s.io
Sep 12 17:35:58.452080 containerd[1575]: time="2025-09-12T17:35:58.451992575Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:35:58.798927 containerd[1575]: time="2025-09-12T17:35:58.798782885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 17:35:59.049882 kubelet[2643]: I0912 17:35:59.049710 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b17846b-f3a4-4894-aba8-8a48d931dcb0-config-volume\") pod \"coredns-7c65d6cfc9-7nrf5\" (UID: \"9b17846b-f3a4-4894-aba8-8a48d931dcb0\") " pod="kube-system/coredns-7c65d6cfc9-7nrf5"
Sep 12 17:35:59.049882 kubelet[2643]: I0912 17:35:59.049761 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5jr9\" (UniqueName: \"kubernetes.io/projected/9b17846b-f3a4-4894-aba8-8a48d931dcb0-kube-api-access-r5jr9\") pod \"coredns-7c65d6cfc9-7nrf5\" (UID: \"9b17846b-f3a4-4894-aba8-8a48d931dcb0\") " pod="kube-system/coredns-7c65d6cfc9-7nrf5"
Sep 12 17:35:59.352725 kubelet[2643]: I0912 17:35:59.352656 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a331b72f-6ff9-42b5-a548-9c65ebf3a6da-calico-apiserver-certs\") pod \"calico-apiserver-6df87d7bb7-nmz5p\" (UID: \"a331b72f-6ff9-42b5-a548-9c65ebf3a6da\") " pod="calico-apiserver/calico-apiserver-6df87d7bb7-nmz5p"
Sep 12 17:35:59.352725 kubelet[2643]: I0912 17:35:59.352714 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-vl22f\" (UniqueName: \"kubernetes.io/projected/002eb908-eead-44c8-b785-c0b17d959030-kube-api-access-vl22f\") pod \"goldmane-7988f88666-ngmvz\" (UID: \"002eb908-eead-44c8-b785-c0b17d959030\") " pod="calico-system/goldmane-7988f88666-ngmvz" Sep 12 17:35:59.352928 kubelet[2643]: I0912 17:35:59.352738 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r67qt\" (UniqueName: \"kubernetes.io/projected/668d9640-9d0d-41dc-9d50-7ca43eccf073-kube-api-access-r67qt\") pod \"calico-kube-controllers-577f47b55c-knc26\" (UID: \"668d9640-9d0d-41dc-9d50-7ca43eccf073\") " pod="calico-system/calico-kube-controllers-577f47b55c-knc26" Sep 12 17:35:59.352928 kubelet[2643]: I0912 17:35:59.352810 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/002eb908-eead-44c8-b785-c0b17d959030-config\") pod \"goldmane-7988f88666-ngmvz\" (UID: \"002eb908-eead-44c8-b785-c0b17d959030\") " pod="calico-system/goldmane-7988f88666-ngmvz" Sep 12 17:35:59.352928 kubelet[2643]: I0912 17:35:59.352851 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c-calico-apiserver-certs\") pod \"calico-apiserver-6df87d7bb7-ns7sf\" (UID: \"fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c\") " pod="calico-apiserver/calico-apiserver-6df87d7bb7-ns7sf" Sep 12 17:35:59.352928 kubelet[2643]: I0912 17:35:59.352877 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/668d9640-9d0d-41dc-9d50-7ca43eccf073-tigera-ca-bundle\") pod \"calico-kube-controllers-577f47b55c-knc26\" (UID: \"668d9640-9d0d-41dc-9d50-7ca43eccf073\") " pod="calico-system/calico-kube-controllers-577f47b55c-knc26" Sep 12 17:35:59.352928 kubelet[2643]: I0912 17:35:59.352910 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/278ec31e-0508-4a97-a4e5-53b8c7a1b7e7-whisker-ca-bundle\") pod \"whisker-788597d95c-vt25c\" (UID: \"278ec31e-0508-4a97-a4e5-53b8c7a1b7e7\") " pod="calico-system/whisker-788597d95c-vt25c" Sep 12 17:35:59.353054 kubelet[2643]: I0912 17:35:59.352943 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/002eb908-eead-44c8-b785-c0b17d959030-goldmane-key-pair\") pod \"goldmane-7988f88666-ngmvz\" (UID: \"002eb908-eead-44c8-b785-c0b17d959030\") " pod="calico-system/goldmane-7988f88666-ngmvz" Sep 12 17:35:59.353054 kubelet[2643]: I0912 17:35:59.352966 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qwcp\" (UniqueName: \"kubernetes.io/projected/fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c-kube-api-access-2qwcp\") pod \"calico-apiserver-6df87d7bb7-ns7sf\" (UID: \"fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c\") " pod="calico-apiserver/calico-apiserver-6df87d7bb7-ns7sf" Sep 12 17:35:59.353054 kubelet[2643]: I0912 17:35:59.352990 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caab70bd-65e3-454e-b4f6-312204583e4c-config-volume\") pod \"coredns-7c65d6cfc9-7qbhp\" (UID: 
\"caab70bd-65e3-454e-b4f6-312204583e4c\") " pod="kube-system/coredns-7c65d6cfc9-7qbhp" Sep 12 17:35:59.353054 kubelet[2643]: I0912 17:35:59.353011 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82nl6\" (UniqueName: \"kubernetes.io/projected/caab70bd-65e3-454e-b4f6-312204583e4c-kube-api-access-82nl6\") pod \"coredns-7c65d6cfc9-7qbhp\" (UID: \"caab70bd-65e3-454e-b4f6-312204583e4c\") " pod="kube-system/coredns-7c65d6cfc9-7qbhp" Sep 12 17:35:59.353054 kubelet[2643]: I0912 17:35:59.353029 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6xlc\" (UniqueName: \"kubernetes.io/projected/a331b72f-6ff9-42b5-a548-9c65ebf3a6da-kube-api-access-z6xlc\") pod \"calico-apiserver-6df87d7bb7-nmz5p\" (UID: \"a331b72f-6ff9-42b5-a548-9c65ebf3a6da\") " pod="calico-apiserver/calico-apiserver-6df87d7bb7-nmz5p" Sep 12 17:35:59.353188 kubelet[2643]: I0912 17:35:59.353052 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/278ec31e-0508-4a97-a4e5-53b8c7a1b7e7-whisker-backend-key-pair\") pod \"whisker-788597d95c-vt25c\" (UID: \"278ec31e-0508-4a97-a4e5-53b8c7a1b7e7\") " pod="calico-system/whisker-788597d95c-vt25c" Sep 12 17:35:59.353188 kubelet[2643]: I0912 17:35:59.353128 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/002eb908-eead-44c8-b785-c0b17d959030-goldmane-ca-bundle\") pod \"goldmane-7988f88666-ngmvz\" (UID: \"002eb908-eead-44c8-b785-c0b17d959030\") " pod="calico-system/goldmane-7988f88666-ngmvz" Sep 12 17:35:59.353188 kubelet[2643]: I0912 17:35:59.353166 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sst2g\" (UniqueName: \"kubernetes.io/projected/278ec31e-0508-4a97-a4e5-53b8c7a1b7e7-kube-api-access-sst2g\") pod \"whisker-788597d95c-vt25c\" (UID: \"278ec31e-0508-4a97-a4e5-53b8c7a1b7e7\") " pod="calico-system/whisker-788597d95c-vt25c" Sep 12 17:35:59.550015 kubelet[2643]: E0912 17:35:59.549970 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:59.686859 containerd[1575]: time="2025-09-12T17:35:59.686714089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hwr5d,Uid:0afbe5bf-287d-4d57-b5ad-630766b8207a,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:59.687360 containerd[1575]: time="2025-09-12T17:35:59.686952909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7nrf5,Uid:9b17846b-f3a4-4894-aba8-8a48d931dcb0,Namespace:kube-system,Attempt:0,}" Sep 12 17:36:00.465795 kubelet[2643]: E0912 17:36:00.465745 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:00.466917 containerd[1575]: time="2025-09-12T17:36:00.465916988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-577f47b55c-knc26,Uid:668d9640-9d0d-41dc-9d50-7ca43eccf073,Namespace:calico-system,Attempt:0,}" Sep 12 17:36:00.466917 containerd[1575]: time="2025-09-12T17:36:00.466172059Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6df87d7bb7-ns7sf,Uid:fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:36:00.466917 containerd[1575]: time="2025-09-12T17:36:00.466199450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7qbhp,Uid:caab70bd-65e3-454e-b4f6-312204583e4c,Namespace:kube-system,Attempt:0,}" Sep 12 17:36:00.467443 containerd[1575]: time="2025-09-12T17:36:00.467389744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-788597d95c-vt25c,Uid:278ec31e-0508-4a97-a4e5-53b8c7a1b7e7,Namespace:calico-system,Attempt:0,}" Sep 12 17:36:00.467650 containerd[1575]: time="2025-09-12T17:36:00.467614367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df87d7bb7-nmz5p,Uid:a331b72f-6ff9-42b5-a548-9c65ebf3a6da,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:36:00.468106 containerd[1575]: time="2025-09-12T17:36:00.468079234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-ngmvz,Uid:002eb908-eead-44c8-b785-c0b17d959030,Namespace:calico-system,Attempt:0,}" Sep 12 17:36:02.560069 containerd[1575]: time="2025-09-12T17:36:02.559995043Z" level=error msg="Failed to destroy network for sandbox \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.560572 containerd[1575]: time="2025-09-12T17:36:02.560420104Z" level=error msg="encountered an error cleaning up failed sandbox \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.560572 containerd[1575]: time="2025-09-12T17:36:02.560487080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hwr5d,Uid:0afbe5bf-287d-4d57-b5ad-630766b8207a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.563491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4-shm.mount: Deactivated successfully. 
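
Every sandbox failure in this stretch has the same root cause, spelled out in the error string itself: the Calico CNI plugin stats /var/lib/calico/nodename, a marker file that calico/node writes once it is running, and the file does not exist yet while the node container is still initializing. The gate amounts to something like this (a minimal sketch, not Calico's actual code):

```go
package main

import (
	"fmt"
	"os"
)

// nodenameFile is the marker the Calico CNI plugin checks before it
// will wire up any pod network; calico/node writes it at startup.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	if _, err := os.Stat(nodenameFile); err != nil {
		// Matches the log: "stat /var/lib/calico/nodename: no such
		// file or directory: check that the calico/node container is
		// running and has mounted /var/lib/calico/"
		fmt.Println("calico not ready:", err)
		return
	}
	fmt.Println("calico node is up; CNI calls can proceed")
}
```

Until that file appears, every CNI add and delete fails the same way, so the kubelet keeps the affected pods in sandbox-creation retry; the errors should clear on their own once calico-node finishes starting.
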
Sep 12 17:36:02.571316 kubelet[2643]: E0912 17:36:02.571235 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.571817 kubelet[2643]: E0912 17:36:02.571350 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hwr5d" Sep 12 17:36:02.571817 kubelet[2643]: E0912 17:36:02.571374 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hwr5d" Sep 12 17:36:02.571817 kubelet[2643]: E0912 17:36:02.571423 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hwr5d_calico-system(0afbe5bf-287d-4d57-b5ad-630766b8207a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hwr5d_calico-system(0afbe5bf-287d-4d57-b5ad-630766b8207a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a" Sep 12 17:36:02.757486 containerd[1575]: time="2025-09-12T17:36:02.756234753Z" level=error msg="Failed to destroy network for sandbox \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.757486 containerd[1575]: time="2025-09-12T17:36:02.756818654Z" level=error msg="encountered an error cleaning up failed sandbox \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.757486 containerd[1575]: time="2025-09-12T17:36:02.757062444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7nrf5,Uid:9b17846b-f3a4-4894-aba8-8a48d931dcb0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 12 17:36:02.768425 kubelet[2643]: E0912 17:36:02.768366 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.768425 kubelet[2643]: E0912 17:36:02.768453 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7nrf5" Sep 12 17:36:02.768655 kubelet[2643]: E0912 17:36:02.768475 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7nrf5" Sep 12 17:36:02.768655 kubelet[2643]: E0912 17:36:02.768519 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-7nrf5_kube-system(9b17846b-f3a4-4894-aba8-8a48d931dcb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-7nrf5_kube-system(9b17846b-f3a4-4894-aba8-8a48d931dcb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7nrf5" podUID="9b17846b-f3a4-4894-aba8-8a48d931dcb0" Sep 12 17:36:02.794582 containerd[1575]: time="2025-09-12T17:36:02.794515334Z" level=error msg="Failed to destroy network for sandbox \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.796401 containerd[1575]: time="2025-09-12T17:36:02.796340525Z" level=error msg="encountered an error cleaning up failed sandbox \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.796467 containerd[1575]: time="2025-09-12T17:36:02.796403382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df87d7bb7-nmz5p,Uid:a331b72f-6ff9-42b5-a548-9c65ebf3a6da,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.797123 kubelet[2643]: E0912 17:36:02.797006 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.797123 kubelet[2643]: E0912 17:36:02.797078 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df87d7bb7-nmz5p" Sep 12 17:36:02.797123 kubelet[2643]: E0912 17:36:02.797099 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df87d7bb7-nmz5p" Sep 12 17:36:02.797336 kubelet[2643]: E0912 17:36:02.797159 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6df87d7bb7-nmz5p_calico-apiserver(a331b72f-6ff9-42b5-a548-9c65ebf3a6da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6df87d7bb7-nmz5p_calico-apiserver(a331b72f-6ff9-42b5-a548-9c65ebf3a6da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6df87d7bb7-nmz5p" podUID="a331b72f-6ff9-42b5-a548-9c65ebf3a6da" Sep 12 17:36:02.814778 containerd[1575]: time="2025-09-12T17:36:02.813469028Z" level=error msg="Failed to destroy network for sandbox \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.814778 containerd[1575]: time="2025-09-12T17:36:02.813824618Z" level=error msg="encountered an error cleaning up failed sandbox \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.814778 containerd[1575]: time="2025-09-12T17:36:02.813863121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-577f47b55c-knc26,Uid:668d9640-9d0d-41dc-9d50-7ca43eccf073,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.815082 kubelet[2643]: E0912 17:36:02.814104 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.815082 kubelet[2643]: E0912 17:36:02.814517 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-577f47b55c-knc26" Sep 12 17:36:02.815082 kubelet[2643]: E0912 17:36:02.814547 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-577f47b55c-knc26" Sep 12 17:36:02.815237 kubelet[2643]: E0912 17:36:02.814603 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-577f47b55c-knc26_calico-system(668d9640-9d0d-41dc-9d50-7ca43eccf073)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-577f47b55c-knc26_calico-system(668d9640-9d0d-41dc-9d50-7ca43eccf073)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-577f47b55c-knc26" podUID="668d9640-9d0d-41dc-9d50-7ca43eccf073" Sep 12 17:36:02.835075 kubelet[2643]: I0912 17:36:02.835033 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Sep 12 17:36:02.840403 kubelet[2643]: I0912 17:36:02.840369 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Sep 12 17:36:02.857660 containerd[1575]: time="2025-09-12T17:36:02.857565130Z" level=error msg="Failed to destroy network for sandbox \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.859844 containerd[1575]: time="2025-09-12T17:36:02.859800332Z" level=error msg="encountered an error cleaning up failed sandbox 
\"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.859887 containerd[1575]: time="2025-09-12T17:36:02.859855767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-788597d95c-vt25c,Uid:278ec31e-0508-4a97-a4e5-53b8c7a1b7e7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.860169 kubelet[2643]: E0912 17:36:02.860126 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.860221 kubelet[2643]: E0912 17:36:02.860196 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-788597d95c-vt25c" Sep 12 17:36:02.860251 kubelet[2643]: E0912 17:36:02.860220 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-788597d95c-vt25c" Sep 12 17:36:02.860310 kubelet[2643]: E0912 17:36:02.860266 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-788597d95c-vt25c_calico-system(278ec31e-0508-4a97-a4e5-53b8c7a1b7e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-788597d95c-vt25c_calico-system(278ec31e-0508-4a97-a4e5-53b8c7a1b7e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-788597d95c-vt25c" podUID="278ec31e-0508-4a97-a4e5-53b8c7a1b7e7" Sep 12 17:36:02.863224 containerd[1575]: time="2025-09-12T17:36:02.863171406Z" level=error msg="Failed to destroy network for sandbox \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.864629 containerd[1575]: time="2025-09-12T17:36:02.864594128Z" 
level=error msg="encountered an error cleaning up failed sandbox \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.864766 containerd[1575]: time="2025-09-12T17:36:02.864642309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df87d7bb7-ns7sf,Uid:fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.864766 containerd[1575]: time="2025-09-12T17:36:02.864739372Z" level=error msg="Failed to destroy network for sandbox \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.865029 containerd[1575]: time="2025-09-12T17:36:02.865009831Z" level=error msg="encountered an error cleaning up failed sandbox \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.865063 containerd[1575]: time="2025-09-12T17:36:02.865039277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7qbhp,Uid:caab70bd-65e3-454e-b4f6-312204583e4c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.865427 kubelet[2643]: E0912 17:36:02.865202 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.865499 kubelet[2643]: E0912 17:36:02.865380 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.865499 kubelet[2643]: E0912 17:36:02.865479 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df87d7bb7-ns7sf" Sep 12 17:36:02.865565 kubelet[2643]: E0912 17:36:02.865496 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df87d7bb7-ns7sf" Sep 12 17:36:02.865592 kubelet[2643]: E0912 17:36:02.865559 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7qbhp" Sep 12 17:36:02.865592 kubelet[2643]: E0912 17:36:02.865577 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7qbhp" Sep 12 17:36:02.870163 kubelet[2643]: E0912 17:36:02.865824 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-7qbhp_kube-system(caab70bd-65e3-454e-b4f6-312204583e4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-7qbhp_kube-system(caab70bd-65e3-454e-b4f6-312204583e4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7qbhp" podUID="caab70bd-65e3-454e-b4f6-312204583e4c" Sep 12 17:36:02.870163 kubelet[2643]: E0912 17:36:02.865971 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6df87d7bb7-ns7sf_calico-apiserver(fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6df87d7bb7-ns7sf_calico-apiserver(fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6df87d7bb7-ns7sf" podUID="fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c" Sep 12 17:36:02.872048 containerd[1575]: time="2025-09-12T17:36:02.871992442Z" level=error msg="Failed to destroy network for sandbox \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.872485 containerd[1575]: time="2025-09-12T17:36:02.872453351Z" level=error msg="encountered an error cleaning up failed sandbox \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.872548 containerd[1575]: time="2025-09-12T17:36:02.872519656Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-ngmvz,Uid:002eb908-eead-44c8-b785-c0b17d959030,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.872619 containerd[1575]: time="2025-09-12T17:36:02.872580822Z" level=info msg="StopPodSandbox for \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\"" Sep 12 17:36:02.872852 kubelet[2643]: E0912 17:36:02.872798 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.873022 kubelet[2643]: E0912 17:36:02.872865 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-ngmvz" Sep 12 17:36:02.873022 kubelet[2643]: E0912 17:36:02.872885 2643 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-ngmvz" Sep 12 17:36:02.873022 kubelet[2643]: E0912 17:36:02.872918 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-ngmvz_calico-system(002eb908-eead-44c8-b785-c0b17d959030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-ngmvz_calico-system(002eb908-eead-44c8-b785-c0b17d959030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-ngmvz" podUID="002eb908-eead-44c8-b785-c0b17d959030" Sep 12 17:36:02.873494 containerd[1575]: 
time="2025-09-12T17:36:02.873445291Z" level=info msg="StopPodSandbox for \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\"" Sep 12 17:36:02.874545 containerd[1575]: time="2025-09-12T17:36:02.874518824Z" level=info msg="Ensure that sandbox 9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678 in task-service has been cleanup successfully" Sep 12 17:36:02.874646 containerd[1575]: time="2025-09-12T17:36:02.874523653Z" level=info msg="Ensure that sandbox 02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4 in task-service has been cleanup successfully" Sep 12 17:36:02.904368 containerd[1575]: time="2025-09-12T17:36:02.904303282Z" level=error msg="StopPodSandbox for \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\" failed" error="failed to destroy network for sandbox \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.904641 kubelet[2643]: E0912 17:36:02.904589 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Sep 12 17:36:02.904723 kubelet[2643]: E0912 17:36:02.904664 2643 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678"} Sep 12 17:36:02.904790 kubelet[2643]: E0912 17:36:02.904737 2643 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b17846b-f3a4-4894-aba8-8a48d931dcb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:02.904856 kubelet[2643]: E0912 17:36:02.904762 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b17846b-f3a4-4894-aba8-8a48d931dcb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7nrf5" podUID="9b17846b-f3a4-4894-aba8-8a48d931dcb0" Sep 12 17:36:02.907454 containerd[1575]: time="2025-09-12T17:36:02.907393255Z" level=error msg="StopPodSandbox for \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\" failed" error="failed to destroy network for sandbox \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:02.907717 
kubelet[2643]: E0912 17:36:02.907668 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Sep 12 17:36:02.907717 kubelet[2643]: E0912 17:36:02.907700 2643 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4"} Sep 12 17:36:02.907717 kubelet[2643]: E0912 17:36:02.907719 2643 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0afbe5bf-287d-4d57-b5ad-630766b8207a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:02.907995 kubelet[2643]: E0912 17:36:02.907740 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0afbe5bf-287d-4d57-b5ad-630766b8207a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hwr5d" podUID="0afbe5bf-287d-4d57-b5ad-630766b8207a" Sep 12 17:36:02.908890 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678-shm.mount: Deactivated successfully. Sep 12 17:36:03.232788 systemd[1]: Started sshd@7-10.0.0.90:22-10.0.0.1:42198.service - OpenSSH per-connection server daemon (10.0.0.1:42198). Sep 12 17:36:03.284775 sshd[3739]: Accepted publickey for core from 10.0.0.1 port 42198 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:03.286561 sshd[3739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:03.291202 systemd-logind[1556]: New session 8 of user core. Sep 12 17:36:03.300194 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:36:03.438422 sshd[3739]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:03.443568 systemd[1]: sshd@7-10.0.0.90:22-10.0.0.1:42198.service: Deactivated successfully. Sep 12 17:36:03.447186 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:36:03.448241 systemd-logind[1556]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:36:03.449585 systemd-logind[1556]: Removed session 8. 
Sep 12 17:36:03.844577 kubelet[2643]: I0912 17:36:03.843744 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Sep 12 17:36:03.845109 containerd[1575]: time="2025-09-12T17:36:03.844399805Z" level=info msg="StopPodSandbox for \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\"" Sep 12 17:36:03.845109 containerd[1575]: time="2025-09-12T17:36:03.844670245Z" level=info msg="Ensure that sandbox 310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb in task-service has been cleanup successfully" Sep 12 17:36:03.845538 kubelet[2643]: I0912 17:36:03.845413 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Sep 12 17:36:03.846452 containerd[1575]: time="2025-09-12T17:36:03.845962550Z" level=info msg="StopPodSandbox for \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\"" Sep 12 17:36:03.846452 containerd[1575]: time="2025-09-12T17:36:03.846156997Z" level=info msg="Ensure that sandbox 865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d in task-service has been cleanup successfully" Sep 12 17:36:03.848257 kubelet[2643]: I0912 17:36:03.848229 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Sep 12 17:36:03.849704 containerd[1575]: time="2025-09-12T17:36:03.849672813Z" level=info msg="StopPodSandbox for \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\"" Sep 12 17:36:03.850396 containerd[1575]: time="2025-09-12T17:36:03.850142729Z" level=info msg="Ensure that sandbox d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1 in task-service has been cleanup successfully" Sep 12 17:36:03.850848 kubelet[2643]: I0912 17:36:03.850828 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Sep 12 17:36:03.851761 containerd[1575]: time="2025-09-12T17:36:03.851728828Z" level=info msg="StopPodSandbox for \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\"" Sep 12 17:36:03.852316 containerd[1575]: time="2025-09-12T17:36:03.852291158Z" level=info msg="Ensure that sandbox c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4 in task-service has been cleanup successfully" Sep 12 17:36:03.853942 kubelet[2643]: I0912 17:36:03.853123 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Sep 12 17:36:03.854043 containerd[1575]: time="2025-09-12T17:36:03.853999357Z" level=info msg="StopPodSandbox for \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\"" Sep 12 17:36:03.855113 containerd[1575]: time="2025-09-12T17:36:03.854904433Z" level=info msg="Ensure that sandbox 21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab in task-service has been cleanup successfully" Sep 12 17:36:03.858080 kubelet[2643]: I0912 17:36:03.858043 2643 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Sep 12 17:36:03.858955 containerd[1575]: time="2025-09-12T17:36:03.858886157Z" level=info msg="StopPodSandbox for \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\"" Sep 12 17:36:03.859782 
containerd[1575]: time="2025-09-12T17:36:03.859314634Z" level=info msg="Ensure that sandbox fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf in task-service has been cleanup successfully" Sep 12 17:36:03.924559 containerd[1575]: time="2025-09-12T17:36:03.924492591Z" level=error msg="StopPodSandbox for \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\" failed" error="failed to destroy network for sandbox \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:03.924830 kubelet[2643]: E0912 17:36:03.924777 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Sep 12 17:36:03.924933 kubelet[2643]: E0912 17:36:03.924840 2643 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d"} Sep 12 17:36:03.924933 kubelet[2643]: E0912 17:36:03.924878 2643 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"278ec31e-0508-4a97-a4e5-53b8c7a1b7e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:03.924933 kubelet[2643]: E0912 17:36:03.924901 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"278ec31e-0508-4a97-a4e5-53b8c7a1b7e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-788597d95c-vt25c" podUID="278ec31e-0508-4a97-a4e5-53b8c7a1b7e7" Sep 12 17:36:03.937782 containerd[1575]: time="2025-09-12T17:36:03.937720531Z" level=error msg="StopPodSandbox for \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\" failed" error="failed to destroy network for sandbox \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:03.938289 kubelet[2643]: E0912 17:36:03.938231 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" podSandboxID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Sep 12 17:36:03.938425 kubelet[2643]: E0912 17:36:03.938403 2643 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1"} Sep 12 17:36:03.938530 kubelet[2643]: E0912 17:36:03.938515 2643 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"caab70bd-65e3-454e-b4f6-312204583e4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:03.938664 kubelet[2643]: E0912 17:36:03.938646 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"caab70bd-65e3-454e-b4f6-312204583e4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7qbhp" podUID="caab70bd-65e3-454e-b4f6-312204583e4c" Sep 12 17:36:03.940779 containerd[1575]: time="2025-09-12T17:36:03.940739130Z" level=error msg="StopPodSandbox for \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\" failed" error="failed to destroy network for sandbox \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:03.940934 kubelet[2643]: E0912 17:36:03.940883 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Sep 12 17:36:03.940934 kubelet[2643]: E0912 17:36:03.940914 2643 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4"} Sep 12 17:36:03.941027 kubelet[2643]: E0912 17:36:03.940940 2643 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"668d9640-9d0d-41dc-9d50-7ca43eccf073\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:03.941027 kubelet[2643]: E0912 17:36:03.940966 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"668d9640-9d0d-41dc-9d50-7ca43eccf073\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-577f47b55c-knc26" podUID="668d9640-9d0d-41dc-9d50-7ca43eccf073" Sep 12 17:36:03.946280 containerd[1575]: time="2025-09-12T17:36:03.946210872Z" level=error msg="StopPodSandbox for \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\" failed" error="failed to destroy network for sandbox \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:03.946675 kubelet[2643]: E0912 17:36:03.946628 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Sep 12 17:36:03.946986 kubelet[2643]: E0912 17:36:03.946945 2643 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab"} Sep 12 17:36:03.947047 kubelet[2643]: E0912 17:36:03.947006 2643 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:03.947117 kubelet[2643]: E0912 17:36:03.947034 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6df87d7bb7-ns7sf" podUID="fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c" Sep 12 17:36:03.947167 containerd[1575]: time="2025-09-12T17:36:03.946979210Z" level=error msg="StopPodSandbox for \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\" failed" error="failed to destroy network for sandbox \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:03.947220 kubelet[2643]: E0912 17:36:03.947189 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Sep 12 17:36:03.947262 kubelet[2643]: E0912 17:36:03.947235 2643 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb"} Sep 12 17:36:03.947306 kubelet[2643]: E0912 17:36:03.947277 2643 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"002eb908-eead-44c8-b785-c0b17d959030\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:03.947357 kubelet[2643]: E0912 17:36:03.947302 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"002eb908-eead-44c8-b785-c0b17d959030\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-ngmvz" podUID="002eb908-eead-44c8-b785-c0b17d959030" Sep 12 17:36:03.957796 containerd[1575]: time="2025-09-12T17:36:03.957749628Z" level=error msg="StopPodSandbox for \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\" failed" error="failed to destroy network for sandbox \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:36:03.958065 kubelet[2643]: E0912 17:36:03.958014 2643 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Sep 12 17:36:03.958106 kubelet[2643]: E0912 17:36:03.958078 2643 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf"} Sep 12 17:36:03.958167 kubelet[2643]: E0912 17:36:03.958127 2643 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a331b72f-6ff9-42b5-a548-9c65ebf3a6da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:36:03.958167 kubelet[2643]: E0912 17:36:03.958160 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a331b72f-6ff9-42b5-a548-9c65ebf3a6da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6df87d7bb7-nmz5p" podUID="a331b72f-6ff9-42b5-a548-9c65ebf3a6da" Sep 12 17:36:08.049013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1388824206.mount: Deactivated successfully. Sep 12 17:36:08.458738 systemd[1]: Started sshd@8-10.0.0.90:22-10.0.0.1:42202.service - OpenSSH per-connection server daemon (10.0.0.1:42202). Sep 12 17:36:09.723756 sshd[3868]: Accepted publickey for core from 10.0.0.1 port 42202 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:09.728977 sshd[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:09.738958 systemd-logind[1556]: New session 9 of user core. Sep 12 17:36:09.749261 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:36:09.784595 containerd[1575]: time="2025-09-12T17:36:09.784256510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:09.803208 containerd[1575]: time="2025-09-12T17:36:09.801862427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:36:09.832774 containerd[1575]: time="2025-09-12T17:36:09.831418288Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:09.876001 containerd[1575]: time="2025-09-12T17:36:09.875306198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:09.876377 containerd[1575]: time="2025-09-12T17:36:09.876326370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 11.077489722s" Sep 12 17:36:09.876377 containerd[1575]: time="2025-09-12T17:36:09.876369861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:36:09.888492 containerd[1575]: time="2025-09-12T17:36:09.888335875Z" level=info msg="CreateContainer within sandbox \"d428456fdddd3008923b8dd40e292b0e5e47253eeaaed9c696829758bb2509e8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:36:09.940779 sshd[3868]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:09.950728 systemd[1]: sshd@8-10.0.0.90:22-10.0.0.1:42202.service: Deactivated successfully. 
Sep 12 17:36:09.955262 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:36:09.958594 systemd-logind[1556]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:36:09.961282 systemd-logind[1556]: Removed session 9. Sep 12 17:36:10.431467 containerd[1575]: time="2025-09-12T17:36:10.430323126Z" level=info msg="CreateContainer within sandbox \"d428456fdddd3008923b8dd40e292b0e5e47253eeaaed9c696829758bb2509e8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2367d7374f887fe362ee3cc2525ec97515e1fc422aaff03e72f0630060d4e571\"" Sep 12 17:36:10.431467 containerd[1575]: time="2025-09-12T17:36:10.431183326Z" level=info msg="StartContainer for \"2367d7374f887fe362ee3cc2525ec97515e1fc422aaff03e72f0630060d4e571\"" Sep 12 17:36:10.891365 containerd[1575]: time="2025-09-12T17:36:10.891272359Z" level=info msg="StartContainer for \"2367d7374f887fe362ee3cc2525ec97515e1fc422aaff03e72f0630060d4e571\" returns successfully" Sep 12 17:36:10.920490 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:36:10.923057 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 12 17:36:12.254146 kubelet[2643]: I0912 17:36:12.253184 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x98gm" podStartSLOduration=3.308191504 podStartE2EDuration="39.253161506s" podCreationTimestamp="2025-09-12 17:35:33 +0000 UTC" firstStartedPulling="2025-09-12 17:35:33.932857274 +0000 UTC m=+19.475650383" lastFinishedPulling="2025-09-12 17:36:09.877827277 +0000 UTC m=+55.420620385" observedRunningTime="2025-09-12 17:36:11.018629511 +0000 UTC m=+56.561422639" watchObservedRunningTime="2025-09-12 17:36:12.253161506 +0000 UTC m=+57.795954624" Sep 12 17:36:12.255755 containerd[1575]: time="2025-09-12T17:36:12.255300735Z" level=info msg="StopPodSandbox for \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\"" Sep 12 17:36:13.397958 containerd[1575]: 2025-09-12 17:36:12.654 [INFO][3996] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Sep 12 17:36:13.397958 containerd[1575]: 2025-09-12 17:36:12.654 [INFO][3996] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" iface="eth0" netns="/var/run/netns/cni-1876c373-31a2-717b-92e7-85e778f22ff9" Sep 12 17:36:13.397958 containerd[1575]: 2025-09-12 17:36:12.656 [INFO][3996] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" iface="eth0" netns="/var/run/netns/cni-1876c373-31a2-717b-92e7-85e778f22ff9" Sep 12 17:36:13.397958 containerd[1575]: 2025-09-12 17:36:12.656 [INFO][3996] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" iface="eth0" netns="/var/run/netns/cni-1876c373-31a2-717b-92e7-85e778f22ff9" Sep 12 17:36:13.397958 containerd[1575]: 2025-09-12 17:36:12.657 [INFO][3996] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Sep 12 17:36:13.397958 containerd[1575]: 2025-09-12 17:36:12.657 [INFO][3996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Sep 12 17:36:13.397958 containerd[1575]: 2025-09-12 17:36:13.308 [INFO][4004] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" HandleID="k8s-pod-network.865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Workload="localhost-k8s-whisker--788597d95c--vt25c-eth0" Sep 12 17:36:13.397958 containerd[1575]: 2025-09-12 17:36:13.309 [INFO][4004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:13.397958 containerd[1575]: 2025-09-12 17:36:13.310 [INFO][4004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:13.397958 containerd[1575]: 2025-09-12 17:36:13.364 [WARNING][4004] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" HandleID="k8s-pod-network.865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Workload="localhost-k8s-whisker--788597d95c--vt25c-eth0" Sep 12 17:36:13.397958 containerd[1575]: 2025-09-12 17:36:13.364 [INFO][4004] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" HandleID="k8s-pod-network.865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Workload="localhost-k8s-whisker--788597d95c--vt25c-eth0" Sep 12 17:36:13.397958 containerd[1575]: 2025-09-12 17:36:13.391 [INFO][4004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:13.397958 containerd[1575]: 2025-09-12 17:36:13.395 [INFO][3996] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Sep 12 17:36:13.400422 containerd[1575]: time="2025-09-12T17:36:13.398128578Z" level=info msg="TearDown network for sandbox \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\" successfully" Sep 12 17:36:13.400422 containerd[1575]: time="2025-09-12T17:36:13.398161500Z" level=info msg="StopPodSandbox for \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\" returns successfully" Sep 12 17:36:13.402520 systemd[1]: run-netns-cni\x2d1876c373\x2d31a2\x2d717b\x2d92e7\x2d85e778f22ff9.mount: Deactivated successfully. 
Sep 12 17:36:13.460649 kubelet[2643]: I0912 17:36:13.460584 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/278ec31e-0508-4a97-a4e5-53b8c7a1b7e7-whisker-ca-bundle\") pod \"278ec31e-0508-4a97-a4e5-53b8c7a1b7e7\" (UID: \"278ec31e-0508-4a97-a4e5-53b8c7a1b7e7\") " Sep 12 17:36:13.460649 kubelet[2643]: I0912 17:36:13.460658 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/278ec31e-0508-4a97-a4e5-53b8c7a1b7e7-whisker-backend-key-pair\") pod \"278ec31e-0508-4a97-a4e5-53b8c7a1b7e7\" (UID: \"278ec31e-0508-4a97-a4e5-53b8c7a1b7e7\") " Sep 12 17:36:13.461286 kubelet[2643]: I0912 17:36:13.460678 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sst2g\" (UniqueName: \"kubernetes.io/projected/278ec31e-0508-4a97-a4e5-53b8c7a1b7e7-kube-api-access-sst2g\") pod \"278ec31e-0508-4a97-a4e5-53b8c7a1b7e7\" (UID: \"278ec31e-0508-4a97-a4e5-53b8c7a1b7e7\") " Sep 12 17:36:13.461286 kubelet[2643]: I0912 17:36:13.461259 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/278ec31e-0508-4a97-a4e5-53b8c7a1b7e7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "278ec31e-0508-4a97-a4e5-53b8c7a1b7e7" (UID: "278ec31e-0508-4a97-a4e5-53b8c7a1b7e7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:36:13.466049 kubelet[2643]: I0912 17:36:13.465963 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/278ec31e-0508-4a97-a4e5-53b8c7a1b7e7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "278ec31e-0508-4a97-a4e5-53b8c7a1b7e7" (UID: "278ec31e-0508-4a97-a4e5-53b8c7a1b7e7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:36:13.466234 kubelet[2643]: I0912 17:36:13.466166 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/278ec31e-0508-4a97-a4e5-53b8c7a1b7e7-kube-api-access-sst2g" (OuterVolumeSpecName: "kube-api-access-sst2g") pod "278ec31e-0508-4a97-a4e5-53b8c7a1b7e7" (UID: "278ec31e-0508-4a97-a4e5-53b8c7a1b7e7"). InnerVolumeSpecName "kube-api-access-sst2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:36:13.469788 systemd[1]: var-lib-kubelet-pods-278ec31e\x2d0508\x2d4a97\x2da4e5\x2d53b8c7a1b7e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsst2g.mount: Deactivated successfully. Sep 12 17:36:13.470061 systemd[1]: var-lib-kubelet-pods-278ec31e\x2d0508\x2d4a97\x2da4e5\x2d53b8c7a1b7e7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 12 17:36:13.557064 containerd[1575]: time="2025-09-12T17:36:13.557001911Z" level=info msg="StopPodSandbox for \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\"" Sep 12 17:36:13.561713 kubelet[2643]: I0912 17:36:13.561664 2643 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/278ec31e-0508-4a97-a4e5-53b8c7a1b7e7-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 17:36:13.561713 kubelet[2643]: I0912 17:36:13.561701 2643 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sst2g\" (UniqueName: \"kubernetes.io/projected/278ec31e-0508-4a97-a4e5-53b8c7a1b7e7-kube-api-access-sst2g\") on node \"localhost\" DevicePath \"\"" Sep 12 17:36:13.561713 kubelet[2643]: I0912 17:36:13.561713 2643 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/278ec31e-0508-4a97-a4e5-53b8c7a1b7e7-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 17:36:14.127962 containerd[1575]: 2025-09-12 17:36:13.983 [INFO][4025] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Sep 12 17:36:14.127962 containerd[1575]: 2025-09-12 17:36:13.984 [INFO][4025] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" iface="eth0" netns="/var/run/netns/cni-326bebca-3022-0e09-516d-6da21d5303d5" Sep 12 17:36:14.127962 containerd[1575]: 2025-09-12 17:36:13.984 [INFO][4025] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" iface="eth0" netns="/var/run/netns/cni-326bebca-3022-0e09-516d-6da21d5303d5" Sep 12 17:36:14.127962 containerd[1575]: 2025-09-12 17:36:13.984 [INFO][4025] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" iface="eth0" netns="/var/run/netns/cni-326bebca-3022-0e09-516d-6da21d5303d5" Sep 12 17:36:14.127962 containerd[1575]: 2025-09-12 17:36:13.984 [INFO][4025] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Sep 12 17:36:14.127962 containerd[1575]: 2025-09-12 17:36:13.984 [INFO][4025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Sep 12 17:36:14.127962 containerd[1575]: 2025-09-12 17:36:14.005 [INFO][4045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" HandleID="k8s-pod-network.02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Workload="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:36:14.127962 containerd[1575]: 2025-09-12 17:36:14.005 [INFO][4045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:14.127962 containerd[1575]: 2025-09-12 17:36:14.005 [INFO][4045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:14.127962 containerd[1575]: 2025-09-12 17:36:14.119 [WARNING][4045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" HandleID="k8s-pod-network.02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Workload="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:36:14.127962 containerd[1575]: 2025-09-12 17:36:14.119 [INFO][4045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" HandleID="k8s-pod-network.02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Workload="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:36:14.127962 containerd[1575]: 2025-09-12 17:36:14.121 [INFO][4045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:14.127962 containerd[1575]: 2025-09-12 17:36:14.124 [INFO][4025] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Sep 12 17:36:14.128848 containerd[1575]: time="2025-09-12T17:36:14.128161047Z" level=info msg="TearDown network for sandbox \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\" successfully" Sep 12 17:36:14.128848 containerd[1575]: time="2025-09-12T17:36:14.128200842Z" level=info msg="StopPodSandbox for \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\" returns successfully" Sep 12 17:36:14.129141 containerd[1575]: time="2025-09-12T17:36:14.129095437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hwr5d,Uid:0afbe5bf-287d-4d57-b5ad-630766b8207a,Namespace:calico-system,Attempt:1,}" Sep 12 17:36:14.131239 systemd[1]: run-netns-cni\x2d326bebca\x2d3022\x2d0e09\x2d516d\x2d6da21d5303d5.mount: Deactivated successfully. Sep 12 17:36:14.541539 containerd[1575]: time="2025-09-12T17:36:14.541409522Z" level=info msg="StopPodSandbox for \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\"" Sep 12 17:36:14.561911 containerd[1575]: time="2025-09-12T17:36:14.561856163Z" level=info msg="StopPodSandbox for \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\"" Sep 12 17:36:14.563948 kubelet[2643]: I0912 17:36:14.563899 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="278ec31e-0508-4a97-a4e5-53b8c7a1b7e7" path="/var/lib/kubelet/pods/278ec31e-0508-4a97-a4e5-53b8c7a1b7e7/volumes" Sep 12 17:36:14.774726 kubelet[2643]: I0912 17:36:14.774681 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b0060ac-0de4-41bc-8db5-824a8670c52d-whisker-ca-bundle\") pod \"whisker-784fd4cb5b-84mxg\" (UID: \"6b0060ac-0de4-41bc-8db5-824a8670c52d\") " pod="calico-system/whisker-784fd4cb5b-84mxg" Sep 12 17:36:14.774726 kubelet[2643]: I0912 17:36:14.774725 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6b0060ac-0de4-41bc-8db5-824a8670c52d-whisker-backend-key-pair\") pod \"whisker-784fd4cb5b-84mxg\" (UID: \"6b0060ac-0de4-41bc-8db5-824a8670c52d\") " pod="calico-system/whisker-784fd4cb5b-84mxg" Sep 12 17:36:14.774927 kubelet[2643]: I0912 17:36:14.774744 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g5h4\" (UniqueName: \"kubernetes.io/projected/6b0060ac-0de4-41bc-8db5-824a8670c52d-kube-api-access-6g5h4\") pod \"whisker-784fd4cb5b-84mxg\" (UID: \"6b0060ac-0de4-41bc-8db5-824a8670c52d\") " 
pod="calico-system/whisker-784fd4cb5b-84mxg" Sep 12 17:36:14.959410 systemd[1]: Started sshd@9-10.0.0.90:22-10.0.0.1:41340.service - OpenSSH per-connection server daemon (10.0.0.1:41340). Sep 12 17:36:15.011742 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 41340 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:15.013738 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:15.020147 systemd-logind[1556]: New session 10 of user core. Sep 12 17:36:15.032039 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:36:15.237721 containerd[1575]: 2025-09-12 17:36:14.729 [WARNING][4069] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" WorkloadEndpoint="localhost-k8s-whisker--788597d95c--vt25c-eth0" Sep 12 17:36:15.237721 containerd[1575]: 2025-09-12 17:36:14.729 [INFO][4069] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Sep 12 17:36:15.237721 containerd[1575]: 2025-09-12 17:36:14.729 [INFO][4069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" iface="eth0" netns="" Sep 12 17:36:15.237721 containerd[1575]: 2025-09-12 17:36:14.734 [INFO][4069] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Sep 12 17:36:15.237721 containerd[1575]: 2025-09-12 17:36:14.734 [INFO][4069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Sep 12 17:36:15.237721 containerd[1575]: 2025-09-12 17:36:14.794 [INFO][4114] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" HandleID="k8s-pod-network.865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Workload="localhost-k8s-whisker--788597d95c--vt25c-eth0" Sep 12 17:36:15.237721 containerd[1575]: 2025-09-12 17:36:14.795 [INFO][4114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:15.237721 containerd[1575]: 2025-09-12 17:36:14.795 [INFO][4114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:15.237721 containerd[1575]: 2025-09-12 17:36:15.045 [WARNING][4114] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" HandleID="k8s-pod-network.865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Workload="localhost-k8s-whisker--788597d95c--vt25c-eth0" Sep 12 17:36:15.237721 containerd[1575]: 2025-09-12 17:36:15.045 [INFO][4114] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" HandleID="k8s-pod-network.865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Workload="localhost-k8s-whisker--788597d95c--vt25c-eth0" Sep 12 17:36:15.237721 containerd[1575]: 2025-09-12 17:36:15.217 [INFO][4114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:15.237721 containerd[1575]: 2025-09-12 17:36:15.226 [INFO][4069] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Sep 12 17:36:15.237721 containerd[1575]: time="2025-09-12T17:36:15.237595290Z" level=info msg="TearDown network for sandbox \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\" successfully" Sep 12 17:36:15.237721 containerd[1575]: time="2025-09-12T17:36:15.237662716Z" level=info msg="StopPodSandbox for \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\" returns successfully" Sep 12 17:36:15.241364 containerd[1575]: time="2025-09-12T17:36:15.241322670Z" level=info msg="RemovePodSandbox for \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\"" Sep 12 17:36:15.245786 sshd[4214]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:15.248386 containerd[1575]: time="2025-09-12T17:36:15.248327703Z" level=info msg="Forcibly stopping sandbox \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\"" Sep 12 17:36:15.249947 containerd[1575]: 2025-09-12 17:36:15.044 [INFO][4093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Sep 12 17:36:15.249947 containerd[1575]: 2025-09-12 17:36:15.045 [INFO][4093] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" iface="eth0" netns="/var/run/netns/cni-1f27e41b-a7a9-61d6-dbf6-4c433f21fd04" Sep 12 17:36:15.249947 containerd[1575]: 2025-09-12 17:36:15.046 [INFO][4093] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" iface="eth0" netns="/var/run/netns/cni-1f27e41b-a7a9-61d6-dbf6-4c433f21fd04" Sep 12 17:36:15.249947 containerd[1575]: 2025-09-12 17:36:15.046 [INFO][4093] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" iface="eth0" netns="/var/run/netns/cni-1f27e41b-a7a9-61d6-dbf6-4c433f21fd04" Sep 12 17:36:15.249947 containerd[1575]: 2025-09-12 17:36:15.046 [INFO][4093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Sep 12 17:36:15.249947 containerd[1575]: 2025-09-12 17:36:15.046 [INFO][4093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Sep 12 17:36:15.249947 containerd[1575]: 2025-09-12 17:36:15.111 [INFO][4221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" HandleID="k8s-pod-network.310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Workload="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:36:15.249947 containerd[1575]: 2025-09-12 17:36:15.111 [INFO][4221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:15.249947 containerd[1575]: 2025-09-12 17:36:15.218 [INFO][4221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:15.249947 containerd[1575]: 2025-09-12 17:36:15.231 [WARNING][4221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" HandleID="k8s-pod-network.310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Workload="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:36:15.249947 containerd[1575]: 2025-09-12 17:36:15.231 [INFO][4221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" HandleID="k8s-pod-network.310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Workload="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:36:15.249947 containerd[1575]: 2025-09-12 17:36:15.235 [INFO][4221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:15.249947 containerd[1575]: 2025-09-12 17:36:15.246 [INFO][4093] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Sep 12 17:36:15.249947 containerd[1575]: time="2025-09-12T17:36:15.249071504Z" level=info msg="TearDown network for sandbox \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\" successfully" Sep 12 17:36:15.249947 containerd[1575]: time="2025-09-12T17:36:15.249101390Z" level=info msg="StopPodSandbox for \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\" returns successfully" Sep 12 17:36:15.249947 containerd[1575]: time="2025-09-12T17:36:15.249837477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-ngmvz,Uid:002eb908-eead-44c8-b785-c0b17d959030,Namespace:calico-system,Attempt:1,}" Sep 12 17:36:15.256813 systemd[1]: run-netns-cni\x2d1f27e41b\x2da7a9\x2d61d6\x2ddbf6\x2d4c433f21fd04.mount: Deactivated successfully. Sep 12 17:36:15.262651 systemd-logind[1556]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:36:15.263386 systemd[1]: sshd@9-10.0.0.90:22-10.0.0.1:41340.service: Deactivated successfully. Sep 12 17:36:15.269270 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:36:15.275040 systemd-logind[1556]: Removed session 10. 
Sep 12 17:36:15.336451 containerd[1575]: time="2025-09-12T17:36:15.336364911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-784fd4cb5b-84mxg,Uid:6b0060ac-0de4-41bc-8db5-824a8670c52d,Namespace:calico-system,Attempt:0,}" Sep 12 17:36:15.357477 kernel: bpftool[4304]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:36:15.556697 containerd[1575]: time="2025-09-12T17:36:15.556536669Z" level=info msg="StopPodSandbox for \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\"" Sep 12 17:36:15.685963 systemd-networkd[1243]: cali0f7c47a91fe: Link UP Sep 12 17:36:15.688480 systemd-networkd[1243]: cali0f7c47a91fe: Gained carrier Sep 12 17:36:15.720723 systemd-networkd[1243]: vxlan.calico: Link UP Sep 12 17:36:15.720737 systemd-networkd[1243]: vxlan.calico: Gained carrier Sep 12 17:36:15.737608 containerd[1575]: 2025-09-12 17:36:15.348 [WARNING][4262] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" WorkloadEndpoint="localhost-k8s-whisker--788597d95c--vt25c-eth0" Sep 12 17:36:15.737608 containerd[1575]: 2025-09-12 17:36:15.348 [INFO][4262] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Sep 12 17:36:15.737608 containerd[1575]: 2025-09-12 17:36:15.348 [INFO][4262] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" iface="eth0" netns="" Sep 12 17:36:15.737608 containerd[1575]: 2025-09-12 17:36:15.348 [INFO][4262] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Sep 12 17:36:15.737608 containerd[1575]: 2025-09-12 17:36:15.348 [INFO][4262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Sep 12 17:36:15.737608 containerd[1575]: 2025-09-12 17:36:15.390 [INFO][4296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" HandleID="k8s-pod-network.865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Workload="localhost-k8s-whisker--788597d95c--vt25c-eth0" Sep 12 17:36:15.737608 containerd[1575]: 2025-09-12 17:36:15.417 [INFO][4296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:15.737608 containerd[1575]: 2025-09-12 17:36:15.647 [INFO][4296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:15.737608 containerd[1575]: 2025-09-12 17:36:15.690 [WARNING][4296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" HandleID="k8s-pod-network.865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Workload="localhost-k8s-whisker--788597d95c--vt25c-eth0" Sep 12 17:36:15.737608 containerd[1575]: 2025-09-12 17:36:15.690 [INFO][4296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" HandleID="k8s-pod-network.865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Workload="localhost-k8s-whisker--788597d95c--vt25c-eth0" Sep 12 17:36:15.737608 containerd[1575]: 2025-09-12 17:36:15.715 [INFO][4296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:15.737608 containerd[1575]: 2025-09-12 17:36:15.728 [INFO][4262] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d" Sep 12 17:36:15.790389 containerd[1575]: time="2025-09-12T17:36:15.737649714Z" level=info msg="TearDown network for sandbox \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\" successfully" Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:14.581 [INFO][4062] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:14.732 [INFO][4062] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hwr5d-eth0 csi-node-driver- calico-system 0afbe5bf-287d-4d57-b5ad-630766b8207a 1030 0 2025-09-12 17:35:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hwr5d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0f7c47a91fe [] [] }} ContainerID="0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" Namespace="calico-system" Pod="csi-node-driver-hwr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--hwr5d-" Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:14.732 [INFO][4062] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" Namespace="calico-system" Pod="csi-node-driver-hwr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.317 [INFO][4243] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" HandleID="k8s-pod-network.0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" Workload="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.320 [INFO][4243] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" HandleID="k8s-pod-network.0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" Workload="localhost-k8s-csi--node--driver--hwr5d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7990), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hwr5d", "timestamp":"2025-09-12 17:36:15.317188404 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.323 [INFO][4243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.323 [INFO][4243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.323 [INFO][4243] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.341 [INFO][4243] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" host="localhost" Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.361 [INFO][4243] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.368 [INFO][4243] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.371 [INFO][4243] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.374 [INFO][4243] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.374 [INFO][4243] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" host="localhost" Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.376 [INFO][4243] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.423 [INFO][4243] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" host="localhost" Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.643 [INFO][4243] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" host="localhost" Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.647 [INFO][4243] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" host="localhost" Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.647 [INFO][4243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:15.810849 containerd[1575]: 2025-09-12 17:36:15.647 [INFO][4243] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" HandleID="k8s-pod-network.0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" Workload="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:36:15.812284 containerd[1575]: 2025-09-12 17:36:15.658 [INFO][4062] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" Namespace="calico-system" Pod="csi-node-driver-hwr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--hwr5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hwr5d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0afbe5bf-287d-4d57-b5ad-630766b8207a", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hwr5d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f7c47a91fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:15.812284 containerd[1575]: 2025-09-12 17:36:15.659 [INFO][4062] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" Namespace="calico-system" Pod="csi-node-driver-hwr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:36:15.812284 containerd[1575]: 2025-09-12 17:36:15.659 [INFO][4062] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f7c47a91fe ContainerID="0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" Namespace="calico-system" Pod="csi-node-driver-hwr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:36:15.812284 containerd[1575]: 2025-09-12 17:36:15.690 [INFO][4062] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" Namespace="calico-system" Pod="csi-node-driver-hwr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:36:15.812284 containerd[1575]: 2025-09-12 17:36:15.691 [INFO][4062] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" Namespace="calico-system" Pod="csi-node-driver-hwr5d" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--hwr5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hwr5d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0afbe5bf-287d-4d57-b5ad-630766b8207a", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac", Pod:"csi-node-driver-hwr5d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f7c47a91fe", MAC:"56:3c:27:39:a7:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:15.812284 containerd[1575]: 2025-09-12 17:36:15.807 [INFO][4062] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac" Namespace="calico-system" Pod="csi-node-driver-hwr5d" WorkloadEndpoint="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:36:15.817520 containerd[1575]: 2025-09-12 17:36:15.734 [INFO][4318] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Sep 12 17:36:15.817520 containerd[1575]: 2025-09-12 17:36:15.734 [INFO][4318] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" iface="eth0" netns="/var/run/netns/cni-46cddd60-4913-f928-c0d4-1d8b287f3cf6" Sep 12 17:36:15.817520 containerd[1575]: 2025-09-12 17:36:15.735 [INFO][4318] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" iface="eth0" netns="/var/run/netns/cni-46cddd60-4913-f928-c0d4-1d8b287f3cf6" Sep 12 17:36:15.817520 containerd[1575]: 2025-09-12 17:36:15.735 [INFO][4318] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" iface="eth0" netns="/var/run/netns/cni-46cddd60-4913-f928-c0d4-1d8b287f3cf6" Sep 12 17:36:15.817520 containerd[1575]: 2025-09-12 17:36:15.735 [INFO][4318] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Sep 12 17:36:15.817520 containerd[1575]: 2025-09-12 17:36:15.735 [INFO][4318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Sep 12 17:36:15.817520 containerd[1575]: 2025-09-12 17:36:15.763 [INFO][4353] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" HandleID="k8s-pod-network.21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:36:15.817520 containerd[1575]: 2025-09-12 17:36:15.763 [INFO][4353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:15.817520 containerd[1575]: 2025-09-12 17:36:15.764 [INFO][4353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:15.817520 containerd[1575]: 2025-09-12 17:36:15.806 [WARNING][4353] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" HandleID="k8s-pod-network.21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:36:15.817520 containerd[1575]: 2025-09-12 17:36:15.806 [INFO][4353] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" HandleID="k8s-pod-network.21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:36:15.817520 containerd[1575]: 2025-09-12 17:36:15.808 [INFO][4353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:15.817520 containerd[1575]: 2025-09-12 17:36:15.811 [INFO][4318] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Sep 12 17:36:15.817520 containerd[1575]: time="2025-09-12T17:36:15.817187234Z" level=info msg="TearDown network for sandbox \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\" successfully" Sep 12 17:36:15.817520 containerd[1575]: time="2025-09-12T17:36:15.817219766Z" level=info msg="StopPodSandbox for \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\" returns successfully" Sep 12 17:36:15.821252 containerd[1575]: time="2025-09-12T17:36:15.821202135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df87d7bb7-ns7sf,Uid:fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:36:15.822209 systemd[1]: run-netns-cni\x2d46cddd60\x2d4913\x2df928\x2dc0d4\x2d1d8b287f3cf6.mount: Deactivated successfully. Sep 12 17:36:16.036038 containerd[1575]: time="2025-09-12T17:36:16.034401370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:16.036038 containerd[1575]: time="2025-09-12T17:36:16.034522508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:16.036038 containerd[1575]: time="2025-09-12T17:36:16.034533869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:16.036038 containerd[1575]: time="2025-09-12T17:36:16.034804368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:16.045973 containerd[1575]: time="2025-09-12T17:36:16.045906737Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:16.046141 containerd[1575]: time="2025-09-12T17:36:16.046010402Z" level=info msg="RemovePodSandbox \"865be5042c651c33ac0f84317838e189fc99a64b214720ac6a2b8ffe334c138d\" returns successfully" Sep 12 17:36:16.075835 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:36:16.088027 systemd-networkd[1243]: calic8ecd6454fd: Link UP Sep 12 17:36:16.090837 systemd-networkd[1243]: calic8ecd6454fd: Gained carrier Sep 12 17:36:16.109057 containerd[1575]: time="2025-09-12T17:36:16.108995413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hwr5d,Uid:0afbe5bf-287d-4d57-b5ad-630766b8207a,Namespace:calico-system,Attempt:1,} returns sandbox id \"0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac\"" Sep 12 17:36:16.114472 containerd[1575]: time="2025-09-12T17:36:16.113730511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.878 [INFO][4383] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--ngmvz-eth0 goldmane-7988f88666- calico-system 002eb908-eead-44c8-b785-c0b17d959030 1046 0 2025-09-12 17:35:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-ngmvz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic8ecd6454fd [] [] }} ContainerID="72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" Namespace="calico-system" Pod="goldmane-7988f88666-ngmvz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ngmvz-" Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.878 [INFO][4383] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" Namespace="calico-system" Pod="goldmane-7988f88666-ngmvz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.905 [INFO][4398] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" HandleID="k8s-pod-network.72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" Workload="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.905 [INFO][4398] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" HandleID="k8s-pod-network.72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" Workload="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-ngmvz", "timestamp":"2025-09-12 17:36:15.905714124 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.906 [INFO][4398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.906 [INFO][4398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.906 [INFO][4398] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.933 [INFO][4398] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" host="localhost" Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.940 [INFO][4398] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.945 [INFO][4398] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.948 [INFO][4398] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.951 [INFO][4398] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.951 [INFO][4398] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" host="localhost" Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.952 [INFO][4398] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5 Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:15.968 [INFO][4398] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" host="localhost" Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:16.070 [INFO][4398] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" host="localhost" Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:16.070 [INFO][4398] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" host="localhost" Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:16.070 [INFO][4398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:16.156184 containerd[1575]: 2025-09-12 17:36:16.070 [INFO][4398] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" HandleID="k8s-pod-network.72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" Workload="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:36:16.158911 containerd[1575]: 2025-09-12 17:36:16.076 [INFO][4383] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" Namespace="calico-system" Pod="goldmane-7988f88666-ngmvz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--ngmvz-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"002eb908-eead-44c8-b785-c0b17d959030", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-ngmvz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic8ecd6454fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:16.158911 containerd[1575]: 2025-09-12 17:36:16.077 [INFO][4383] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" Namespace="calico-system" Pod="goldmane-7988f88666-ngmvz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:36:16.158911 containerd[1575]: 2025-09-12 17:36:16.077 [INFO][4383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8ecd6454fd ContainerID="72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" Namespace="calico-system" Pod="goldmane-7988f88666-ngmvz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:36:16.158911 containerd[1575]: 2025-09-12 17:36:16.094 [INFO][4383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" Namespace="calico-system" Pod="goldmane-7988f88666-ngmvz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:36:16.158911 containerd[1575]: 2025-09-12 17:36:16.104 [INFO][4383] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" Namespace="calico-system" Pod="goldmane-7988f88666-ngmvz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--ngmvz-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"002eb908-eead-44c8-b785-c0b17d959030", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5", Pod:"goldmane-7988f88666-ngmvz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic8ecd6454fd", MAC:"62:82:7c:1c:49:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:16.158911 containerd[1575]: 2025-09-12 17:36:16.151 [INFO][4383] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5" Namespace="calico-system" Pod="goldmane-7988f88666-ngmvz" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:36:16.297029 containerd[1575]: time="2025-09-12T17:36:16.296676596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:16.297029 containerd[1575]: time="2025-09-12T17:36:16.296752440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:16.297029 containerd[1575]: time="2025-09-12T17:36:16.296773018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:16.297029 containerd[1575]: time="2025-09-12T17:36:16.296901460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:16.329180 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:36:16.361963 containerd[1575]: time="2025-09-12T17:36:16.361895094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-ngmvz,Uid:002eb908-eead-44c8-b785-c0b17d959030,Namespace:calico-system,Attempt:1,} returns sandbox id \"72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5\"" Sep 12 17:36:16.388398 systemd-networkd[1243]: calib26ce9a1af9: Link UP Sep 12 17:36:16.389243 systemd-networkd[1243]: calib26ce9a1af9: Gained carrier Sep 12 17:36:16.557583 containerd[1575]: time="2025-09-12T17:36:16.557528593Z" level=info msg="StopPodSandbox for \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\"" Sep 12 17:36:16.558793 containerd[1575]: time="2025-09-12T17:36:16.557651564Z" level=info msg="StopPodSandbox for \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\"" Sep 12 17:36:16.558793 containerd[1575]: time="2025-09-12T17:36:16.558597726Z" level=info msg="StopPodSandbox for \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\"" Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.240 [INFO][4481] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--784fd4cb5b--84mxg-eth0 whisker-784fd4cb5b- calico-system 6b0060ac-0de4-41bc-8db5-824a8670c52d 1049 0 2025-09-12 17:36:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:784fd4cb5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-784fd4cb5b-84mxg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib26ce9a1af9 [] [] }} ContainerID="5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" Namespace="calico-system" Pod="whisker-784fd4cb5b-84mxg" WorkloadEndpoint="localhost-k8s-whisker--784fd4cb5b--84mxg-" Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.240 [INFO][4481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" Namespace="calico-system" Pod="whisker-784fd4cb5b-84mxg" WorkloadEndpoint="localhost-k8s-whisker--784fd4cb5b--84mxg-eth0" Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.275 [INFO][4507] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" HandleID="k8s-pod-network.5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" Workload="localhost-k8s-whisker--784fd4cb5b--84mxg-eth0" Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.275 [INFO][4507] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" HandleID="k8s-pod-network.5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" Workload="localhost-k8s-whisker--784fd4cb5b--84mxg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-784fd4cb5b-84mxg", "timestamp":"2025-09-12 17:36:16.275422609 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.275 [INFO][4507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.275 [INFO][4507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.275 [INFO][4507] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.287 [INFO][4507] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" host="localhost" Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.306 [INFO][4507] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.311 [INFO][4507] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.313 [INFO][4507] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.316 [INFO][4507] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.316 [INFO][4507] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" host="localhost" Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.318 [INFO][4507] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2 Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.342 [INFO][4507] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" host="localhost" Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.379 [INFO][4507] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" host="localhost" Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.379 [INFO][4507] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" host="localhost" Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.379 [INFO][4507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:16.583901 containerd[1575]: 2025-09-12 17:36:16.379 [INFO][4507] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" HandleID="k8s-pod-network.5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" Workload="localhost-k8s-whisker--784fd4cb5b--84mxg-eth0" Sep 12 17:36:16.584907 containerd[1575]: 2025-09-12 17:36:16.382 [INFO][4481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" Namespace="calico-system" Pod="whisker-784fd4cb5b-84mxg" WorkloadEndpoint="localhost-k8s-whisker--784fd4cb5b--84mxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--784fd4cb5b--84mxg-eth0", GenerateName:"whisker-784fd4cb5b-", Namespace:"calico-system", SelfLink:"", UID:"6b0060ac-0de4-41bc-8db5-824a8670c52d", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"784fd4cb5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-784fd4cb5b-84mxg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib26ce9a1af9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:16.584907 containerd[1575]: 2025-09-12 17:36:16.383 [INFO][4481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" Namespace="calico-system" Pod="whisker-784fd4cb5b-84mxg" WorkloadEndpoint="localhost-k8s-whisker--784fd4cb5b--84mxg-eth0" Sep 12 17:36:16.584907 containerd[1575]: 2025-09-12 17:36:16.383 [INFO][4481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib26ce9a1af9 ContainerID="5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" Namespace="calico-system" Pod="whisker-784fd4cb5b-84mxg" WorkloadEndpoint="localhost-k8s-whisker--784fd4cb5b--84mxg-eth0" Sep 12 17:36:16.584907 containerd[1575]: 2025-09-12 17:36:16.388 [INFO][4481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" Namespace="calico-system" Pod="whisker-784fd4cb5b-84mxg" WorkloadEndpoint="localhost-k8s-whisker--784fd4cb5b--84mxg-eth0" Sep 12 17:36:16.584907 containerd[1575]: 2025-09-12 17:36:16.389 [INFO][4481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" Namespace="calico-system" Pod="whisker-784fd4cb5b-84mxg" WorkloadEndpoint="localhost-k8s-whisker--784fd4cb5b--84mxg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--784fd4cb5b--84mxg-eth0", GenerateName:"whisker-784fd4cb5b-", Namespace:"calico-system", SelfLink:"", UID:"6b0060ac-0de4-41bc-8db5-824a8670c52d", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 36, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"784fd4cb5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2", Pod:"whisker-784fd4cb5b-84mxg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib26ce9a1af9", MAC:"16:dc:f3:93:90:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:16.584907 containerd[1575]: 2025-09-12 17:36:16.576 [INFO][4481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2" Namespace="calico-system" Pod="whisker-784fd4cb5b-84mxg" WorkloadEndpoint="localhost-k8s-whisker--784fd4cb5b--84mxg-eth0" Sep 12 17:36:16.799548 containerd[1575]: 2025-09-12 17:36:16.732 [INFO][4591] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Sep 12 17:36:16.799548 containerd[1575]: 2025-09-12 17:36:16.738 [INFO][4591] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" iface="eth0" netns="/var/run/netns/cni-cde7a13f-f988-30fa-efe9-be07288adde1" Sep 12 17:36:16.799548 containerd[1575]: 2025-09-12 17:36:16.740 [INFO][4591] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" iface="eth0" netns="/var/run/netns/cni-cde7a13f-f988-30fa-efe9-be07288adde1" Sep 12 17:36:16.799548 containerd[1575]: 2025-09-12 17:36:16.742 [INFO][4591] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" iface="eth0" netns="/var/run/netns/cni-cde7a13f-f988-30fa-efe9-be07288adde1" Sep 12 17:36:16.799548 containerd[1575]: 2025-09-12 17:36:16.742 [INFO][4591] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Sep 12 17:36:16.799548 containerd[1575]: 2025-09-12 17:36:16.742 [INFO][4591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Sep 12 17:36:16.799548 containerd[1575]: 2025-09-12 17:36:16.784 [INFO][4639] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" HandleID="k8s-pod-network.d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Workload="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:36:16.799548 containerd[1575]: 2025-09-12 17:36:16.784 [INFO][4639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:16.799548 containerd[1575]: 2025-09-12 17:36:16.784 [INFO][4639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:16.799548 containerd[1575]: 2025-09-12 17:36:16.791 [WARNING][4639] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" HandleID="k8s-pod-network.d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Workload="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:36:16.799548 containerd[1575]: 2025-09-12 17:36:16.791 [INFO][4639] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" HandleID="k8s-pod-network.d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Workload="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:36:16.799548 containerd[1575]: 2025-09-12 17:36:16.794 [INFO][4639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:16.799548 containerd[1575]: 2025-09-12 17:36:16.796 [INFO][4591] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Sep 12 17:36:16.853025 containerd[1575]: time="2025-09-12T17:36:16.852741085Z" level=info msg="TearDown network for sandbox \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\" successfully" Sep 12 17:36:16.853025 containerd[1575]: time="2025-09-12T17:36:16.852798253Z" level=info msg="StopPodSandbox for \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\" returns successfully" Sep 12 17:36:16.853227 kubelet[2643]: E0912 17:36:16.853195 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:16.855928 containerd[1575]: time="2025-09-12T17:36:16.854375391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7qbhp,Uid:caab70bd-65e3-454e-b4f6-312204583e4c,Namespace:kube-system,Attempt:1,}" Sep 12 17:36:16.854599 systemd[1]: run-netns-cni\x2dcde7a13f\x2df988\x2d30fa\x2defe9\x2dbe07288adde1.mount: Deactivated successfully. Sep 12 17:36:16.998533 containerd[1575]: time="2025-09-12T17:36:16.998399394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:16.998533 containerd[1575]: time="2025-09-12T17:36:16.998490847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:16.998533 containerd[1575]: time="2025-09-12T17:36:16.998504793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:16.998785 containerd[1575]: time="2025-09-12T17:36:16.998615312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:17.033716 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:36:17.065955 containerd[1575]: time="2025-09-12T17:36:17.065899358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-784fd4cb5b-84mxg,Uid:6b0060ac-0de4-41bc-8db5-824a8670c52d,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2\"" Sep 12 17:36:17.160507 systemd-networkd[1243]: calid39fdd3d3f4: Link UP Sep 12 17:36:17.163565 systemd-networkd[1243]: calid39fdd3d3f4: Gained carrier Sep 12 17:36:17.178055 containerd[1575]: 2025-09-12 17:36:16.735 [INFO][4592] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Sep 12 17:36:17.178055 containerd[1575]: 2025-09-12 17:36:16.735 [INFO][4592] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" iface="eth0" netns="/var/run/netns/cni-0cfac06d-2666-c352-4bcb-e64adb56adc3" Sep 12 17:36:17.178055 containerd[1575]: 2025-09-12 17:36:16.736 [INFO][4592] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" iface="eth0" netns="/var/run/netns/cni-0cfac06d-2666-c352-4bcb-e64adb56adc3" Sep 12 17:36:17.178055 containerd[1575]: 2025-09-12 17:36:16.744 [INFO][4592] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" iface="eth0" netns="/var/run/netns/cni-0cfac06d-2666-c352-4bcb-e64adb56adc3" Sep 12 17:36:17.178055 containerd[1575]: 2025-09-12 17:36:16.744 [INFO][4592] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Sep 12 17:36:17.178055 containerd[1575]: 2025-09-12 17:36:16.744 [INFO][4592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Sep 12 17:36:17.178055 containerd[1575]: 2025-09-12 17:36:16.788 [INFO][4640] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" HandleID="k8s-pod-network.9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Workload="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:36:17.178055 containerd[1575]: 2025-09-12 17:36:16.791 [INFO][4640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:17.178055 containerd[1575]: 2025-09-12 17:36:17.151 [INFO][4640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:36:17.178055 containerd[1575]: 2025-09-12 17:36:17.159 [WARNING][4640] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" HandleID="k8s-pod-network.9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Workload="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:36:17.178055 containerd[1575]: 2025-09-12 17:36:17.159 [INFO][4640] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" HandleID="k8s-pod-network.9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Workload="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:36:17.178055 containerd[1575]: 2025-09-12 17:36:17.161 [INFO][4640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:17.178055 containerd[1575]: 2025-09-12 17:36:17.170 [INFO][4592] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Sep 12 17:36:17.178515 containerd[1575]: time="2025-09-12T17:36:17.178252553Z" level=info msg="TearDown network for sandbox \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\" successfully" Sep 12 17:36:17.178515 containerd[1575]: time="2025-09-12T17:36:17.178285717Z" level=info msg="StopPodSandbox for \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\" returns successfully" Sep 12 17:36:17.178817 kubelet[2643]: E0912 17:36:17.178779 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:17.181828 containerd[1575]: time="2025-09-12T17:36:17.181771383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7nrf5,Uid:9b17846b-f3a4-4894-aba8-8a48d931dcb0,Namespace:kube-system,Attempt:1,}" Sep 12 17:36:17.184131 systemd[1]: run-netns-cni\x2d0cfac06d\x2d2666\x2dc352\x2d4bcb\x2de64adb56adc3.mount: Deactivated successfully. 
Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.669 [INFO][4628] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0 calico-apiserver-6df87d7bb7- calico-apiserver fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c 1055 0 2025-09-12 17:35:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6df87d7bb7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6df87d7bb7-ns7sf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid39fdd3d3f4 [] [] }} ContainerID="ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-ns7sf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-" Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.669 [INFO][4628] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-ns7sf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.788 [INFO][4641] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" HandleID="k8s-pod-network.ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.790 [INFO][4641] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" HandleID="k8s-pod-network.ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000344fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6df87d7bb7-ns7sf", "timestamp":"2025-09-12 17:36:16.788293216 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.791 [INFO][4641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.794 [INFO][4641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.794 [INFO][4641] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.800 [INFO][4641] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" host="localhost" Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.805 [INFO][4641] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.809 [INFO][4641] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.811 [INFO][4641] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.816 [INFO][4641] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.816 [INFO][4641] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" host="localhost" Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.945 [INFO][4641] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335 Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:16.989 [INFO][4641] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" host="localhost" Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:17.151 [INFO][4641] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" host="localhost" Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:17.151 [INFO][4641] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" host="localhost" Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:17.152 [INFO][4641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:17.227314 containerd[1575]: 2025-09-12 17:36:17.152 [INFO][4641] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" HandleID="k8s-pod-network.ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:36:17.228100 containerd[1575]: 2025-09-12 17:36:17.155 [INFO][4628] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-ns7sf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0", GenerateName:"calico-apiserver-6df87d7bb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df87d7bb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6df87d7bb7-ns7sf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid39fdd3d3f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:17.228100 containerd[1575]: 2025-09-12 17:36:17.156 [INFO][4628] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-ns7sf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:36:17.228100 containerd[1575]: 2025-09-12 17:36:17.156 [INFO][4628] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid39fdd3d3f4 ContainerID="ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-ns7sf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:36:17.228100 containerd[1575]: 2025-09-12 17:36:17.161 [INFO][4628] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-ns7sf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:36:17.228100 containerd[1575]: 2025-09-12 17:36:17.162 [INFO][4628] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-ns7sf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0", GenerateName:"calico-apiserver-6df87d7bb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df87d7bb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335", Pod:"calico-apiserver-6df87d7bb7-ns7sf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid39fdd3d3f4", MAC:"2e:9f:e8:0e:d3:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:17.228100 containerd[1575]: 2025-09-12 17:36:17.224 [INFO][4628] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-ns7sf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:36:17.316708 containerd[1575]: 2025-09-12 17:36:16.853 [INFO][4593] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Sep 12 17:36:17.316708 containerd[1575]: 2025-09-12 17:36:16.853 [INFO][4593] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" iface="eth0" netns="/var/run/netns/cni-51e05223-c766-3ee4-f633-aa512e70742f" Sep 12 17:36:17.316708 containerd[1575]: 2025-09-12 17:36:16.853 [INFO][4593] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" iface="eth0" netns="/var/run/netns/cni-51e05223-c766-3ee4-f633-aa512e70742f" Sep 12 17:36:17.316708 containerd[1575]: 2025-09-12 17:36:16.854 [INFO][4593] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" iface="eth0" netns="/var/run/netns/cni-51e05223-c766-3ee4-f633-aa512e70742f" Sep 12 17:36:17.316708 containerd[1575]: 2025-09-12 17:36:16.854 [INFO][4593] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Sep 12 17:36:17.316708 containerd[1575]: 2025-09-12 17:36:16.854 [INFO][4593] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Sep 12 17:36:17.316708 containerd[1575]: 2025-09-12 17:36:16.900 [INFO][4666] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" HandleID="k8s-pod-network.c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Workload="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:36:17.316708 containerd[1575]: 2025-09-12 17:36:16.900 [INFO][4666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:17.316708 containerd[1575]: 2025-09-12 17:36:17.161 [INFO][4666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:17.316708 containerd[1575]: 2025-09-12 17:36:17.221 [WARNING][4666] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" HandleID="k8s-pod-network.c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Workload="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:36:17.316708 containerd[1575]: 2025-09-12 17:36:17.221 [INFO][4666] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" HandleID="k8s-pod-network.c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Workload="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:36:17.316708 containerd[1575]: 2025-09-12 17:36:17.309 [INFO][4666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:17.316708 containerd[1575]: 2025-09-12 17:36:17.312 [INFO][4593] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Sep 12 17:36:17.326080 containerd[1575]: time="2025-09-12T17:36:17.316999901Z" level=info msg="TearDown network for sandbox \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\" successfully" Sep 12 17:36:17.326080 containerd[1575]: time="2025-09-12T17:36:17.317034838Z" level=info msg="StopPodSandbox for \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\" returns successfully" Sep 12 17:36:17.326080 containerd[1575]: time="2025-09-12T17:36:17.318002626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-577f47b55c-knc26,Uid:668d9640-9d0d-41dc-9d50-7ca43eccf073,Namespace:calico-system,Attempt:1,}" Sep 12 17:36:17.346688 systemd-networkd[1243]: calic8ecd6454fd: Gained IPv6LL Sep 12 17:36:17.418771 containerd[1575]: time="2025-09-12T17:36:17.418549056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:17.418771 containerd[1575]: time="2025-09-12T17:36:17.418632387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:17.418771 containerd[1575]: time="2025-09-12T17:36:17.418647957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:17.418965 containerd[1575]: time="2025-09-12T17:36:17.418826932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:17.450112 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:36:17.483565 containerd[1575]: time="2025-09-12T17:36:17.483498332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df87d7bb7-ns7sf,Uid:fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335\"" Sep 12 17:36:17.667673 systemd-networkd[1243]: vxlan.calico: Gained IPv6LL Sep 12 17:36:17.711113 systemd[1]: run-netns-cni\x2d51e05223\x2dc766\x2d3ee4\x2df633\x2daa512e70742f.mount: Deactivated successfully. Sep 12 17:36:17.730690 systemd-networkd[1243]: cali0f7c47a91fe: Gained IPv6LL Sep 12 17:36:18.050724 systemd-networkd[1243]: calib26ce9a1af9: Gained IPv6LL Sep 12 17:36:18.089262 systemd-networkd[1243]: caliaad212b6789: Link UP Sep 12 17:36:18.091362 systemd-networkd[1243]: caliaad212b6789: Gained carrier Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:17.954 [INFO][4766] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0 coredns-7c65d6cfc9- kube-system caab70bd-65e3-454e-b4f6-312204583e4c 1072 0 2025-09-12 17:35:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-7qbhp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaad212b6789 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qbhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7qbhp-" Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:17.954 [INFO][4766] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qbhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:17.979 [INFO][4779] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" HandleID="k8s-pod-network.b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" Workload="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:17.979 [INFO][4779] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" HandleID="k8s-pod-network.b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" Workload="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005115f0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-7qbhp", "timestamp":"2025-09-12 17:36:17.979098179 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:17.979 [INFO][4779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:17.979 [INFO][4779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:17.979 [INFO][4779] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:17.987 [INFO][4779] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" host="localhost" Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:17.991 [INFO][4779] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:17.996 [INFO][4779] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:17.998 [INFO][4779] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:18.000 [INFO][4779] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:18.000 [INFO][4779] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" host="localhost" Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:18.002 [INFO][4779] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604 Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:18.036 [INFO][4779] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" host="localhost" Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:18.082 [INFO][4779] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" host="localhost" Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:18.082 [INFO][4779] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" host="localhost" Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:18.082 [INFO][4779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:18.187235 containerd[1575]: 2025-09-12 17:36:18.082 [INFO][4779] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" HandleID="k8s-pod-network.b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" Workload="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:36:18.188360 containerd[1575]: 2025-09-12 17:36:18.085 [INFO][4766] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qbhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"caab70bd-65e3-454e-b4f6-312204583e4c", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-7qbhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaad212b6789", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:18.188360 containerd[1575]: 2025-09-12 17:36:18.085 [INFO][4766] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qbhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:36:18.188360 containerd[1575]: 2025-09-12 17:36:18.085 [INFO][4766] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaad212b6789 ContainerID="b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qbhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:36:18.188360 containerd[1575]: 2025-09-12 17:36:18.091 [INFO][4766] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qbhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:36:18.188360 
containerd[1575]: 2025-09-12 17:36:18.092 [INFO][4766] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qbhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"caab70bd-65e3-454e-b4f6-312204583e4c", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604", Pod:"coredns-7c65d6cfc9-7qbhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaad212b6789", MAC:"56:ef:a5:ed:fd:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:18.188360 containerd[1575]: 2025-09-12 17:36:18.183 [INFO][4766] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7qbhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:36:18.493412 containerd[1575]: time="2025-09-12T17:36:18.493177286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:18.493412 containerd[1575]: time="2025-09-12T17:36:18.493330642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:18.493412 containerd[1575]: time="2025-09-12T17:36:18.493345200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:18.493964 containerd[1575]: time="2025-09-12T17:36:18.493815687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:18.528682 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:36:18.568256 containerd[1575]: time="2025-09-12T17:36:18.568207167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7qbhp,Uid:caab70bd-65e3-454e-b4f6-312204583e4c,Namespace:kube-system,Attempt:1,} returns sandbox id \"b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604\"" Sep 12 17:36:18.568939 kubelet[2643]: E0912 17:36:18.568896 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:18.571634 containerd[1575]: time="2025-09-12T17:36:18.571563258Z" level=info msg="CreateContainer within sandbox \"b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:36:18.691545 systemd-networkd[1243]: calibc0f5dafc95: Link UP Sep 12 17:36:18.691788 systemd-networkd[1243]: calid39fdd3d3f4: Gained IPv6LL Sep 12 17:36:18.692127 systemd-networkd[1243]: calibc0f5dafc95: Gained carrier Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.480 [INFO][4801] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0 coredns-7c65d6cfc9- kube-system 9b17846b-f3a4-4894-aba8-8a48d931dcb0 1073 0 2025-09-12 17:35:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-7nrf5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibc0f5dafc95 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7nrf5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7nrf5-" Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.480 [INFO][4801] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7nrf5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.513 [INFO][4823] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" HandleID="k8s-pod-network.3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" Workload="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.513 [INFO][4823] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" HandleID="k8s-pod-network.3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" Workload="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f730), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-7nrf5", "timestamp":"2025-09-12 17:36:18.513148429 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.513 [INFO][4823] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.514 [INFO][4823] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.514 [INFO][4823] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.525 [INFO][4823] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" host="localhost" Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.536 [INFO][4823] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.541 [INFO][4823] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.543 [INFO][4823] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.548 [INFO][4823] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.548 [INFO][4823] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" host="localhost" Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.550 [INFO][4823] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513 Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.572 [INFO][4823] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" host="localhost" Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.684 [INFO][4823] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" host="localhost" Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.684 [INFO][4823] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" host="localhost" Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.684 [INFO][4823] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:18.743139 containerd[1575]: 2025-09-12 17:36:18.684 [INFO][4823] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" HandleID="k8s-pod-network.3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" Workload="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:36:18.744043 containerd[1575]: 2025-09-12 17:36:18.687 [INFO][4801] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7nrf5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9b17846b-f3a4-4894-aba8-8a48d931dcb0", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-7nrf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc0f5dafc95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:18.744043 containerd[1575]: 2025-09-12 17:36:18.687 [INFO][4801] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7nrf5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:36:18.744043 containerd[1575]: 2025-09-12 17:36:18.687 [INFO][4801] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc0f5dafc95 ContainerID="3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7nrf5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:36:18.744043 containerd[1575]: 2025-09-12 17:36:18.693 [INFO][4801] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7nrf5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:36:18.744043 
containerd[1575]: 2025-09-12 17:36:18.694 [INFO][4801] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7nrf5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9b17846b-f3a4-4894-aba8-8a48d931dcb0", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513", Pod:"coredns-7c65d6cfc9-7nrf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc0f5dafc95", MAC:"42:30:f0:d1:c7:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:18.744043 containerd[1575]: 2025-09-12 17:36:18.737 [INFO][4801] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7nrf5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:36:18.775921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount663152256.mount: Deactivated successfully. Sep 12 17:36:18.780180 containerd[1575]: time="2025-09-12T17:36:18.780138573Z" level=info msg="CreateContainer within sandbox \"b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b456a24ffa3c0fd31be75c82ca3c467044816761dfacd870abf20ca23b34eb6d\"" Sep 12 17:36:18.782457 containerd[1575]: time="2025-09-12T17:36:18.780969526Z" level=info msg="StartContainer for \"b456a24ffa3c0fd31be75c82ca3c467044816761dfacd870abf20ca23b34eb6d\"" Sep 12 17:36:18.794234 containerd[1575]: time="2025-09-12T17:36:18.791848735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:18.794234 containerd[1575]: time="2025-09-12T17:36:18.791918681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:18.794234 containerd[1575]: time="2025-09-12T17:36:18.791949429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:18.794234 containerd[1575]: time="2025-09-12T17:36:18.792063359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:18.802930 systemd-networkd[1243]: cali20f925d1e60: Link UP Sep 12 17:36:18.807996 systemd-networkd[1243]: cali20f925d1e60: Gained carrier Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.573 [INFO][4847] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0 calico-kube-controllers-577f47b55c- calico-system 668d9640-9d0d-41dc-9d50-7ca43eccf073 1071 0 2025-09-12 17:35:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:577f47b55c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-577f47b55c-knc26 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali20f925d1e60 [] [] }} ContainerID="47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" Namespace="calico-system" Pod="calico-kube-controllers-577f47b55c-knc26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-" Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.573 [INFO][4847] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" Namespace="calico-system" Pod="calico-kube-controllers-577f47b55c-knc26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.721 [INFO][4882] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" HandleID="k8s-pod-network.47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" Workload="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.721 [INFO][4882] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" HandleID="k8s-pod-network.47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" Workload="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-577f47b55c-knc26", "timestamp":"2025-09-12 17:36:18.721127634 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.721 [INFO][4882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.721 [INFO][4882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.721 [INFO][4882] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.743 [INFO][4882] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" host="localhost" Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.749 [INFO][4882] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.755 [INFO][4882] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.757 [INFO][4882] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.759 [INFO][4882] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.759 [INFO][4882] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" host="localhost" Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.760 [INFO][4882] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132 Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.765 [INFO][4882] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" host="localhost" Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.777 [INFO][4882] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" host="localhost" Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.777 [INFO][4882] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" host="localhost" Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.777 [INFO][4882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:18.829288 containerd[1575]: 2025-09-12 17:36:18.777 [INFO][4882] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" HandleID="k8s-pod-network.47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" Workload="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:36:18.830127 containerd[1575]: 2025-09-12 17:36:18.785 [INFO][4847] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" Namespace="calico-system" Pod="calico-kube-controllers-577f47b55c-knc26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0", GenerateName:"calico-kube-controllers-577f47b55c-", Namespace:"calico-system", SelfLink:"", UID:"668d9640-9d0d-41dc-9d50-7ca43eccf073", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"577f47b55c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-577f47b55c-knc26", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali20f925d1e60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:18.830127 containerd[1575]: 2025-09-12 17:36:18.793 [INFO][4847] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" Namespace="calico-system" Pod="calico-kube-controllers-577f47b55c-knc26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:36:18.830127 containerd[1575]: 2025-09-12 17:36:18.793 [INFO][4847] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20f925d1e60 ContainerID="47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" Namespace="calico-system" Pod="calico-kube-controllers-577f47b55c-knc26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:36:18.830127 containerd[1575]: 2025-09-12 17:36:18.805 [INFO][4847] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" Namespace="calico-system" Pod="calico-kube-controllers-577f47b55c-knc26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:36:18.830127 containerd[1575]: 2025-09-12 17:36:18.806 [INFO][4847] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" Namespace="calico-system" Pod="calico-kube-controllers-577f47b55c-knc26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0", GenerateName:"calico-kube-controllers-577f47b55c-", Namespace:"calico-system", SelfLink:"", UID:"668d9640-9d0d-41dc-9d50-7ca43eccf073", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"577f47b55c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132", Pod:"calico-kube-controllers-577f47b55c-knc26", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali20f925d1e60", MAC:"32:59:f0:b1:25:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:18.830127 containerd[1575]: 2025-09-12 17:36:18.823 [INFO][4847] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132" Namespace="calico-system" Pod="calico-kube-controllers-577f47b55c-knc26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:36:18.860942 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:36:18.878653 containerd[1575]: time="2025-09-12T17:36:18.878350689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:18.878653 containerd[1575]: time="2025-09-12T17:36:18.878415895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:18.878653 containerd[1575]: time="2025-09-12T17:36:18.878445271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:18.878653 containerd[1575]: time="2025-09-12T17:36:18.878551997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:18.885931 containerd[1575]: time="2025-09-12T17:36:18.885872174Z" level=info msg="StartContainer for \"b456a24ffa3c0fd31be75c82ca3c467044816761dfacd870abf20ca23b34eb6d\" returns successfully" Sep 12 17:36:18.911537 containerd[1575]: time="2025-09-12T17:36:18.911488954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7nrf5,Uid:9b17846b-f3a4-4894-aba8-8a48d931dcb0,Namespace:kube-system,Attempt:1,} returns sandbox id \"3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513\"" Sep 12 17:36:18.912584 kubelet[2643]: E0912 17:36:18.912551 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:18.916124 containerd[1575]: time="2025-09-12T17:36:18.915788666Z" level=info msg="CreateContainer within sandbox \"3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:36:18.921311 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:36:18.928260 kubelet[2643]: E0912 17:36:18.928058 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:18.942401 containerd[1575]: time="2025-09-12T17:36:18.942332945Z" level=info msg="CreateContainer within sandbox \"3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b8176ce8a081cefb93373d70cb47ed147ac0d8deda84b74e1d8aab88d2702a9\"" Sep 12 17:36:18.945491 containerd[1575]: time="2025-09-12T17:36:18.944309467Z" level=info msg="StartContainer for \"8b8176ce8a081cefb93373d70cb47ed147ac0d8deda84b74e1d8aab88d2702a9\"" Sep 12 17:36:18.949940 kubelet[2643]: I0912 17:36:18.949838 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7qbhp" podStartSLOduration=58.949791999 podStartE2EDuration="58.949791999s" podCreationTimestamp="2025-09-12 17:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:36:18.949546566 +0000 UTC m=+64.492339674" watchObservedRunningTime="2025-09-12 17:36:18.949791999 +0000 UTC m=+64.492585107" Sep 12 17:36:18.965491 containerd[1575]: time="2025-09-12T17:36:18.965407042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-577f47b55c-knc26,Uid:668d9640-9d0d-41dc-9d50-7ca43eccf073,Namespace:calico-system,Attempt:1,} returns sandbox id \"47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132\"" Sep 12 17:36:19.027746 containerd[1575]: time="2025-09-12T17:36:19.027594682Z" level=info msg="StartContainer for \"8b8176ce8a081cefb93373d70cb47ed147ac0d8deda84b74e1d8aab88d2702a9\" returns successfully" Sep 12 17:36:19.557554 containerd[1575]: time="2025-09-12T17:36:19.557494024Z" level=info msg="StopPodSandbox for \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\"" Sep 12 17:36:19.595647 containerd[1575]: time="2025-09-12T17:36:19.595584500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:19.596687 containerd[1575]: 
time="2025-09-12T17:36:19.596323704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:36:19.597722 containerd[1575]: time="2025-09-12T17:36:19.597691751Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:19.601597 containerd[1575]: time="2025-09-12T17:36:19.601563883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:19.603206 containerd[1575]: time="2025-09-12T17:36:19.602255867Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 3.488489749s" Sep 12 17:36:19.603206 containerd[1575]: time="2025-09-12T17:36:19.602288921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:36:19.603583 containerd[1575]: time="2025-09-12T17:36:19.603556744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:36:19.605056 containerd[1575]: time="2025-09-12T17:36:19.604900654Z" level=info msg="CreateContainer within sandbox \"0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:36:19.631538 containerd[1575]: time="2025-09-12T17:36:19.631367630Z" level=info msg="CreateContainer within sandbox \"0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ecfa898f512219d0fde324d9c22065a3c395e5542490a07daa8256554e15fa61\"" Sep 12 17:36:19.632346 containerd[1575]: time="2025-09-12T17:36:19.632320858Z" level=info msg="StartContainer for \"ecfa898f512219d0fde324d9c22065a3c395e5542490a07daa8256554e15fa61\"" Sep 12 17:36:19.666503 containerd[1575]: 2025-09-12 17:36:19.606 [INFO][5082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Sep 12 17:36:19.666503 containerd[1575]: 2025-09-12 17:36:19.606 [INFO][5082] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" iface="eth0" netns="/var/run/netns/cni-dfd9606d-72ff-a894-2bd6-29a65704d662" Sep 12 17:36:19.666503 containerd[1575]: 2025-09-12 17:36:19.606 [INFO][5082] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" iface="eth0" netns="/var/run/netns/cni-dfd9606d-72ff-a894-2bd6-29a65704d662" Sep 12 17:36:19.666503 containerd[1575]: 2025-09-12 17:36:19.606 [INFO][5082] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" iface="eth0" netns="/var/run/netns/cni-dfd9606d-72ff-a894-2bd6-29a65704d662" Sep 12 17:36:19.666503 containerd[1575]: 2025-09-12 17:36:19.606 [INFO][5082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Sep 12 17:36:19.666503 containerd[1575]: 2025-09-12 17:36:19.609 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Sep 12 17:36:19.666503 containerd[1575]: 2025-09-12 17:36:19.645 [INFO][5091] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" HandleID="k8s-pod-network.fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:36:19.666503 containerd[1575]: 2025-09-12 17:36:19.646 [INFO][5091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:19.666503 containerd[1575]: 2025-09-12 17:36:19.646 [INFO][5091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:19.666503 containerd[1575]: 2025-09-12 17:36:19.653 [WARNING][5091] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" HandleID="k8s-pod-network.fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:36:19.666503 containerd[1575]: 2025-09-12 17:36:19.654 [INFO][5091] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" HandleID="k8s-pod-network.fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:36:19.666503 containerd[1575]: 2025-09-12 17:36:19.657 [INFO][5091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:19.666503 containerd[1575]: 2025-09-12 17:36:19.660 [INFO][5082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Sep 12 17:36:19.667234 containerd[1575]: time="2025-09-12T17:36:19.666728577Z" level=info msg="TearDown network for sandbox \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\" successfully" Sep 12 17:36:19.667234 containerd[1575]: time="2025-09-12T17:36:19.666765457Z" level=info msg="StopPodSandbox for \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\" returns successfully" Sep 12 17:36:19.668800 containerd[1575]: time="2025-09-12T17:36:19.668667493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df87d7bb7-nmz5p,Uid:a331b72f-6ff9-42b5-a548-9c65ebf3a6da,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:36:19.710660 systemd[1]: run-netns-cni\x2ddfd9606d\x2d72ff\x2da894\x2d2bd6\x2d29a65704d662.mount: Deactivated successfully. 
Sep 12 17:36:19.778684 systemd-networkd[1243]: calibc0f5dafc95: Gained IPv6LL Sep 12 17:36:19.956098 containerd[1575]: time="2025-09-12T17:36:19.956036950Z" level=info msg="StartContainer for \"ecfa898f512219d0fde324d9c22065a3c395e5542490a07daa8256554e15fa61\" returns successfully" Sep 12 17:36:19.960593 kubelet[2643]: E0912 17:36:19.960557 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:19.962766 kubelet[2643]: E0912 17:36:19.962738 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:19.970642 systemd-networkd[1243]: caliaad212b6789: Gained IPv6LL Sep 12 17:36:20.145752 kubelet[2643]: I0912 17:36:20.145621 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7nrf5" podStartSLOduration=60.145601772 podStartE2EDuration="1m0.145601772s" podCreationTimestamp="2025-09-12 17:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:36:20.144941141 +0000 UTC m=+65.687734259" watchObservedRunningTime="2025-09-12 17:36:20.145601772 +0000 UTC m=+65.688394880" Sep 12 17:36:20.253776 systemd[1]: Started sshd@10-10.0.0.90:22-10.0.0.1:41246.service - OpenSSH per-connection server daemon (10.0.0.1:41246). Sep 12 17:36:20.294688 systemd-networkd[1243]: calide0c1dcdf76: Link UP Sep 12 17:36:20.296351 systemd-networkd[1243]: calide0c1dcdf76: Gained carrier Sep 12 17:36:20.302621 sshd[5157]: Accepted publickey for core from 10.0.0.1 port 41246 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:20.305297 sshd[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:20.313306 systemd-logind[1556]: New session 11 of user core. 
Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.220 [INFO][5134] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0 calico-apiserver-6df87d7bb7- calico-apiserver a331b72f-6ff9-42b5-a548-9c65ebf3a6da 1123 0 2025-09-12 17:35:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6df87d7bb7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6df87d7bb7-nmz5p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calide0c1dcdf76 [] [] }} ContainerID="2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-nmz5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-" Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.220 [INFO][5134] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-nmz5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.247 [INFO][5151] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" HandleID="k8s-pod-network.2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.248 [INFO][5151] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" HandleID="k8s-pod-network.2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001976f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6df87d7bb7-nmz5p", "timestamp":"2025-09-12 17:36:20.247859453 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.248 [INFO][5151] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.248 [INFO][5151] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.248 [INFO][5151] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.255 [INFO][5151] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" host="localhost" Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.264 [INFO][5151] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.268 [INFO][5151] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.270 [INFO][5151] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.272 [INFO][5151] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.272 [INFO][5151] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" host="localhost" Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.273 [INFO][5151] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517 Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.278 [INFO][5151] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" host="localhost" Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.287 [INFO][5151] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" host="localhost" Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.287 [INFO][5151] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" host="localhost" Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.287 [INFO][5151] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:20.315699 containerd[1575]: 2025-09-12 17:36:20.287 [INFO][5151] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" HandleID="k8s-pod-network.2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:36:20.316295 containerd[1575]: 2025-09-12 17:36:20.291 [INFO][5134] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-nmz5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0", GenerateName:"calico-apiserver-6df87d7bb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"a331b72f-6ff9-42b5-a548-9c65ebf3a6da", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df87d7bb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6df87d7bb7-nmz5p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide0c1dcdf76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:20.316295 containerd[1575]: 2025-09-12 17:36:20.291 [INFO][5134] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-nmz5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:36:20.316295 containerd[1575]: 2025-09-12 17:36:20.291 [INFO][5134] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide0c1dcdf76 ContainerID="2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-nmz5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:36:20.316295 containerd[1575]: 2025-09-12 17:36:20.295 [INFO][5134] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-nmz5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:36:20.316295 containerd[1575]: 2025-09-12 17:36:20.295 [INFO][5134] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-nmz5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0", GenerateName:"calico-apiserver-6df87d7bb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"a331b72f-6ff9-42b5-a548-9c65ebf3a6da", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df87d7bb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517", Pod:"calico-apiserver-6df87d7bb7-nmz5p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide0c1dcdf76", MAC:"5e:06:7b:b9:a6:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:20.316295 containerd[1575]: 2025-09-12 17:36:20.310 [INFO][5134] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517" Namespace="calico-apiserver" Pod="calico-apiserver-6df87d7bb7-nmz5p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:36:20.319930 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:36:20.337967 containerd[1575]: time="2025-09-12T17:36:20.337178144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:20.337967 containerd[1575]: time="2025-09-12T17:36:20.337755967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:20.337967 containerd[1575]: time="2025-09-12T17:36:20.337769243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:20.337967 containerd[1575]: time="2025-09-12T17:36:20.337863314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:20.369468 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:36:20.399148 containerd[1575]: time="2025-09-12T17:36:20.399028142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df87d7bb7-nmz5p,Uid:a331b72f-6ff9-42b5-a548-9c65ebf3a6da,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517\"" Sep 12 17:36:20.418728 systemd-networkd[1243]: cali20f925d1e60: Gained IPv6LL Sep 12 17:36:20.462103 sshd[5157]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:20.466339 systemd[1]: sshd@10-10.0.0.90:22-10.0.0.1:41246.service: Deactivated successfully. Sep 12 17:36:20.469320 systemd-logind[1556]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:36:20.469372 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:36:20.470669 systemd-logind[1556]: Removed session 11. Sep 12 17:36:20.966252 kubelet[2643]: E0912 17:36:20.966207 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:20.966768 kubelet[2643]: E0912 17:36:20.966276 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:21.968949 kubelet[2643]: E0912 17:36:21.968869 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:22.148294 systemd-networkd[1243]: calide0c1dcdf76: Gained IPv6LL Sep 12 17:36:22.331797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1753632660.mount: Deactivated successfully. 
Sep 12 17:36:24.431302 containerd[1575]: time="2025-09-12T17:36:24.431202387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:24.432328 containerd[1575]: time="2025-09-12T17:36:24.432213981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:36:24.433883 containerd[1575]: time="2025-09-12T17:36:24.433828182Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:24.437134 containerd[1575]: time="2025-09-12T17:36:24.437068407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:24.437863 containerd[1575]: time="2025-09-12T17:36:24.437804162Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.834215416s" Sep 12 17:36:24.437863 containerd[1575]: time="2025-09-12T17:36:24.437854087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:36:24.444036 containerd[1575]: time="2025-09-12T17:36:24.443973886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:36:24.444214 containerd[1575]: time="2025-09-12T17:36:24.443991610Z" level=info msg="CreateContainer within sandbox \"72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:36:24.462810 containerd[1575]: time="2025-09-12T17:36:24.462717940Z" level=info msg="CreateContainer within sandbox \"72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"107c76fd3175a94c8bdd14d97496e17cee48d62ffd4e803f02e4342823571afe\"" Sep 12 17:36:24.465175 containerd[1575]: time="2025-09-12T17:36:24.463741576Z" level=info msg="StartContainer for \"107c76fd3175a94c8bdd14d97496e17cee48d62ffd4e803f02e4342823571afe\"" Sep 12 17:36:24.677828 containerd[1575]: time="2025-09-12T17:36:24.677725815Z" level=info msg="StartContainer for \"107c76fd3175a94c8bdd14d97496e17cee48d62ffd4e803f02e4342823571afe\" returns successfully" Sep 12 17:36:24.995325 kubelet[2643]: I0912 17:36:24.995214 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-ngmvz" podStartSLOduration=43.916972944 podStartE2EDuration="51.995191579s" podCreationTimestamp="2025-09-12 17:35:33 +0000 UTC" firstStartedPulling="2025-09-12 17:36:16.363470321 +0000 UTC m=+61.906263429" lastFinishedPulling="2025-09-12 17:36:24.441688956 +0000 UTC m=+69.984482064" observedRunningTime="2025-09-12 17:36:24.99482276 +0000 UTC m=+70.537615868" watchObservedRunningTime="2025-09-12 17:36:24.995191579 +0000 UTC m=+70.537984697" Sep 12 17:36:25.473933 systemd[1]: Started sshd@11-10.0.0.90:22-10.0.0.1:41256.service - OpenSSH per-connection server daemon (10.0.0.1:41256). 
Sep 12 17:36:25.556838 kubelet[2643]: E0912 17:36:25.556751 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:25.662926 sshd[5311]: Accepted publickey for core from 10.0.0.1 port 41256 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:25.664947 sshd[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:25.669498 systemd-logind[1556]: New session 12 of user core. Sep 12 17:36:25.679751 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:36:25.873506 sshd[5311]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:25.879040 systemd[1]: sshd@11-10.0.0.90:22-10.0.0.1:41256.service: Deactivated successfully. Sep 12 17:36:25.882294 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:36:25.882374 systemd-logind[1556]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:36:25.884382 systemd-logind[1556]: Removed session 12. Sep 12 17:36:26.697705 containerd[1575]: time="2025-09-12T17:36:26.697396039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:26.700602 containerd[1575]: time="2025-09-12T17:36:26.700463526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:36:26.702611 containerd[1575]: time="2025-09-12T17:36:26.702537677Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:26.706572 containerd[1575]: time="2025-09-12T17:36:26.706507557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:26.707329 containerd[1575]: time="2025-09-12T17:36:26.707288055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.263242321s" Sep 12 17:36:26.707388 containerd[1575]: time="2025-09-12T17:36:26.707337800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:36:26.708753 containerd[1575]: time="2025-09-12T17:36:26.708727598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:36:26.709907 containerd[1575]: time="2025-09-12T17:36:26.709875812Z" level=info msg="CreateContainer within sandbox \"5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:36:26.728832 containerd[1575]: time="2025-09-12T17:36:26.728780175Z" level=info msg="CreateContainer within sandbox \"5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"147d564a94895c027a37e0a40148da6e32d17e8d12182a2f4be574eb2bcdb574\"" Sep 12 17:36:26.729346 containerd[1575]: 
time="2025-09-12T17:36:26.729325411Z" level=info msg="StartContainer for \"147d564a94895c027a37e0a40148da6e32d17e8d12182a2f4be574eb2bcdb574\"" Sep 12 17:36:27.453998 containerd[1575]: time="2025-09-12T17:36:27.453932633Z" level=info msg="StartContainer for \"147d564a94895c027a37e0a40148da6e32d17e8d12182a2f4be574eb2bcdb574\" returns successfully" Sep 12 17:36:30.889802 systemd[1]: Started sshd@12-10.0.0.90:22-10.0.0.1:45538.service - OpenSSH per-connection server daemon (10.0.0.1:45538). Sep 12 17:36:30.959322 sshd[5418]: Accepted publickey for core from 10.0.0.1 port 45538 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:30.961588 sshd[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:30.967405 systemd-logind[1556]: New session 13 of user core. Sep 12 17:36:30.976951 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:36:31.985180 sshd[5418]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:31.995866 systemd[1]: Started sshd@13-10.0.0.90:22-10.0.0.1:45542.service - OpenSSH per-connection server daemon (10.0.0.1:45542). Sep 12 17:36:31.996909 systemd[1]: sshd@12-10.0.0.90:22-10.0.0.1:45538.service: Deactivated successfully. Sep 12 17:36:32.003057 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:36:32.008013 systemd-logind[1556]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:36:32.009806 systemd-logind[1556]: Removed session 13. Sep 12 17:36:32.047714 sshd[5436]: Accepted publickey for core from 10.0.0.1 port 45542 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:32.051006 sshd[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:32.058203 systemd-logind[1556]: New session 14 of user core. Sep 12 17:36:32.061890 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:36:32.294217 sshd[5436]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:32.306604 systemd[1]: Started sshd@14-10.0.0.90:22-10.0.0.1:45550.service - OpenSSH per-connection server daemon (10.0.0.1:45550). Sep 12 17:36:32.307325 systemd[1]: sshd@13-10.0.0.90:22-10.0.0.1:45542.service: Deactivated successfully. Sep 12 17:36:32.314422 systemd-logind[1556]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:36:32.314918 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:36:32.322692 systemd-logind[1556]: Removed session 14. Sep 12 17:36:32.383566 sshd[5451]: Accepted publickey for core from 10.0.0.1 port 45550 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:32.385682 sshd[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:32.395070 systemd-logind[1556]: New session 15 of user core. Sep 12 17:36:32.402272 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:36:32.604822 systemd-journald[1159]: Under memory pressure, flushing caches. Sep 12 17:36:32.579815 systemd-resolved[1465]: Under memory pressure, flushing caches. Sep 12 17:36:32.579889 systemd-resolved[1465]: Flushed all caches. Sep 12 17:36:32.686301 sshd[5451]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:32.695539 systemd[1]: sshd@14-10.0.0.90:22-10.0.0.1:45550.service: Deactivated successfully. Sep 12 17:36:32.700290 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:36:32.701957 systemd-logind[1556]: Session 15 logged out. Waiting for processes to exit. 
Sep 12 17:36:32.703699 systemd-logind[1556]: Removed session 15. Sep 12 17:36:33.919209 containerd[1575]: time="2025-09-12T17:36:33.917253397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:33.920380 containerd[1575]: time="2025-09-12T17:36:33.920308912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:36:33.924468 containerd[1575]: time="2025-09-12T17:36:33.923369347Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:33.926537 containerd[1575]: time="2025-09-12T17:36:33.926499695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:33.927511 containerd[1575]: time="2025-09-12T17:36:33.927467928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 7.218705623s" Sep 12 17:36:33.927746 containerd[1575]: time="2025-09-12T17:36:33.927609117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:36:33.929072 containerd[1575]: time="2025-09-12T17:36:33.928809855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:36:33.932105 containerd[1575]: time="2025-09-12T17:36:33.931147807Z" level=info msg="CreateContainer within sandbox \"ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:36:33.975024 containerd[1575]: time="2025-09-12T17:36:33.971658872Z" level=info msg="CreateContainer within sandbox \"ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"42dd65934e60a86f66a58b2e30993661500b176a40e2a06ecf447c6943d153d3\"" Sep 12 17:36:33.975024 containerd[1575]: time="2025-09-12T17:36:33.973489274Z" level=info msg="StartContainer for \"42dd65934e60a86f66a58b2e30993661500b176a40e2a06ecf447c6943d153d3\"" Sep 12 17:36:34.516104 containerd[1575]: time="2025-09-12T17:36:34.516022100Z" level=info msg="StartContainer for \"42dd65934e60a86f66a58b2e30993661500b176a40e2a06ecf447c6943d153d3\" returns successfully" Sep 12 17:36:34.628861 systemd-resolved[1465]: Under memory pressure, flushing caches. Sep 12 17:36:34.628891 systemd-resolved[1465]: Flushed all caches. Sep 12 17:36:34.632922 systemd-journald[1159]: Under memory pressure, flushing caches. 
Sep 12 17:36:35.542883 kubelet[2643]: I0912 17:36:35.542742 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6df87d7bb7-ns7sf" podStartSLOduration=49.099380932 podStartE2EDuration="1m5.542693311s" podCreationTimestamp="2025-09-12 17:35:30 +0000 UTC" firstStartedPulling="2025-09-12 17:36:17.485234002 +0000 UTC m=+63.028027110" lastFinishedPulling="2025-09-12 17:36:33.928546381 +0000 UTC m=+79.471339489" observedRunningTime="2025-09-12 17:36:35.541789744 +0000 UTC m=+81.084582862" watchObservedRunningTime="2025-09-12 17:36:35.542693311 +0000 UTC m=+81.085486419" Sep 12 17:36:37.666627 containerd[1575]: time="2025-09-12T17:36:37.666578844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:37.669059 containerd[1575]: time="2025-09-12T17:36:37.668972526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:36:37.702851 systemd[1]: Started sshd@15-10.0.0.90:22-10.0.0.1:45556.service - OpenSSH per-connection server daemon (10.0.0.1:45556). Sep 12 17:36:37.826062 sshd[5550]: Accepted publickey for core from 10.0.0.1 port 45556 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:37.828048 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:37.832361 systemd-logind[1556]: New session 16 of user core. Sep 12 17:36:37.839844 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:36:37.841043 containerd[1575]: time="2025-09-12T17:36:37.840985238Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:38.287844 containerd[1575]: time="2025-09-12T17:36:38.287753786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:38.288757 containerd[1575]: time="2025-09-12T17:36:38.288710773Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.359839392s" Sep 12 17:36:38.288853 containerd[1575]: time="2025-09-12T17:36:38.288766690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:36:38.290645 containerd[1575]: time="2025-09-12T17:36:38.290216007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:36:38.302882 containerd[1575]: time="2025-09-12T17:36:38.302792829Z" level=info msg="CreateContainer within sandbox \"47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:36:38.344099 containerd[1575]: time="2025-09-12T17:36:38.344025181Z" level=info msg="CreateContainer within sandbox \"47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4bad5393f67b0f2d20cef56902eb25b9769d49bbf7e8ee82268f8f6e2d1092bc\"" Sep 12 17:36:38.346722 containerd[1575]: time="2025-09-12T17:36:38.346660413Z" level=info msg="StartContainer for \"4bad5393f67b0f2d20cef56902eb25b9769d49bbf7e8ee82268f8f6e2d1092bc\"" Sep 12 17:36:38.369932 sshd[5550]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:38.375099 systemd[1]: sshd@15-10.0.0.90:22-10.0.0.1:45556.service: Deactivated successfully. Sep 12 17:36:38.379900 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:36:38.380776 systemd-logind[1556]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:36:38.381809 systemd-logind[1556]: Removed session 16. Sep 12 17:36:38.462237 containerd[1575]: time="2025-09-12T17:36:38.462180478Z" level=info msg="StartContainer for \"4bad5393f67b0f2d20cef56902eb25b9769d49bbf7e8ee82268f8f6e2d1092bc\" returns successfully" Sep 12 17:36:38.622991 kubelet[2643]: I0912 17:36:38.622918 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-577f47b55c-knc26" podStartSLOduration=46.30064711 podStartE2EDuration="1m5.622896062s" podCreationTimestamp="2025-09-12 17:35:33 +0000 UTC" firstStartedPulling="2025-09-12 17:36:18.967751674 +0000 UTC m=+64.510544782" lastFinishedPulling="2025-09-12 17:36:38.290000626 +0000 UTC m=+83.832793734" observedRunningTime="2025-09-12 17:36:38.549125596 +0000 UTC m=+84.091918704" watchObservedRunningTime="2025-09-12 17:36:38.622896062 +0000 UTC m=+84.165689170" Sep 12 17:36:38.724535 systemd-journald[1159]: Under memory pressure, flushing caches. Sep 12 17:36:38.722554 systemd-resolved[1465]: Under memory pressure, flushing caches. Sep 12 17:36:38.722591 systemd-resolved[1465]: Flushed all caches. 
Sep 12 17:36:40.124702 containerd[1575]: time="2025-09-12T17:36:40.124618352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:40.126892 containerd[1575]: time="2025-09-12T17:36:40.126818790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:36:40.129113 containerd[1575]: time="2025-09-12T17:36:40.129069654Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:40.132487 containerd[1575]: time="2025-09-12T17:36:40.132441487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:40.133304 containerd[1575]: time="2025-09-12T17:36:40.133274226Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.843021117s" Sep 12 17:36:40.133372 containerd[1575]: time="2025-09-12T17:36:40.133313240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:36:40.141106 containerd[1575]: time="2025-09-12T17:36:40.140834851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:36:40.148820 containerd[1575]: time="2025-09-12T17:36:40.142185068Z" level=info msg="CreateContainer within sandbox \"0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:36:40.171591 containerd[1575]: time="2025-09-12T17:36:40.171539492Z" level=info msg="CreateContainer within sandbox \"0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"185b2ef4f44bb9e7efa97eb71855646985513d54f740e60e443ff08528df6273\"" Sep 12 17:36:40.172180 containerd[1575]: time="2025-09-12T17:36:40.172154186Z" level=info msg="StartContainer for \"185b2ef4f44bb9e7efa97eb71855646985513d54f740e60e443ff08528df6273\"" Sep 12 17:36:40.251455 containerd[1575]: time="2025-09-12T17:36:40.251382325Z" level=info msg="StartContainer for \"185b2ef4f44bb9e7efa97eb71855646985513d54f740e60e443ff08528df6273\" returns successfully" Sep 12 17:36:40.558172 kubelet[2643]: E0912 17:36:40.557632 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:40.560199 kubelet[2643]: I0912 17:36:40.558610 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hwr5d" podStartSLOduration=43.530813271 podStartE2EDuration="1m7.558587473s" podCreationTimestamp="2025-09-12 17:35:33 +0000 UTC" firstStartedPulling="2025-09-12 17:36:16.112684712 +0000 UTC m=+61.655477810" 
lastFinishedPulling="2025-09-12 17:36:40.140458894 +0000 UTC m=+85.683252012" observedRunningTime="2025-09-12 17:36:40.556629387 +0000 UTC m=+86.099422505" watchObservedRunningTime="2025-09-12 17:36:40.558587473 +0000 UTC m=+86.101380581" Sep 12 17:36:40.569002 containerd[1575]: time="2025-09-12T17:36:40.568909386Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:40.569676 containerd[1575]: time="2025-09-12T17:36:40.569617467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:36:40.572121 containerd[1575]: time="2025-09-12T17:36:40.572044477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 431.153709ms" Sep 12 17:36:40.572121 containerd[1575]: time="2025-09-12T17:36:40.572100354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:36:40.575087 containerd[1575]: time="2025-09-12T17:36:40.573406105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:36:40.576149 containerd[1575]: time="2025-09-12T17:36:40.576111947Z" level=info msg="CreateContainer within sandbox \"2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:36:40.608571 containerd[1575]: time="2025-09-12T17:36:40.608510431Z" level=info msg="CreateContainer within sandbox \"2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0da2554bf680b2bb3b7ce00469f635f308ec6be9e4999fc3e3c3ae31219eda0b\"" Sep 12 17:36:40.610749 containerd[1575]: time="2025-09-12T17:36:40.609549763Z" level=info msg="StartContainer for \"0da2554bf680b2bb3b7ce00469f635f308ec6be9e4999fc3e3c3ae31219eda0b\"" Sep 12 17:36:40.720488 containerd[1575]: time="2025-09-12T17:36:40.717803251Z" level=info msg="StartContainer for \"0da2554bf680b2bb3b7ce00469f635f308ec6be9e4999fc3e3c3ae31219eda0b\" returns successfully" Sep 12 17:36:40.742477 kubelet[2643]: I0912 17:36:40.741115 2643 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:36:40.742477 kubelet[2643]: I0912 17:36:40.741193 2643 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:36:42.558928 kubelet[2643]: I0912 17:36:42.558887 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:36:42.740694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746428236.mount: Deactivated successfully. 
Sep 12 17:36:42.757267 containerd[1575]: time="2025-09-12T17:36:42.757204961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:42.757996 containerd[1575]: time="2025-09-12T17:36:42.757949461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:36:42.759324 containerd[1575]: time="2025-09-12T17:36:42.759293373Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:42.762707 containerd[1575]: time="2025-09-12T17:36:42.762675081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:42.763470 containerd[1575]: time="2025-09-12T17:36:42.763418519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.189934936s" Sep 12 17:36:42.763535 containerd[1575]: time="2025-09-12T17:36:42.763481358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:36:42.765524 containerd[1575]: time="2025-09-12T17:36:42.765483245Z" level=info msg="CreateContainer within sandbox \"5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:36:42.777409 containerd[1575]: time="2025-09-12T17:36:42.777366342Z" level=info msg="CreateContainer within sandbox \"5f74ae58193fa3ad603250ff2d5b8392d9b2cf1c2f602c5c2cb45f595bf31de2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"60ff4ac2230f1b07cbd049364e4d187d691ca27be93a63b883863e4456c74d86\"" Sep 12 17:36:42.780744 containerd[1575]: time="2025-09-12T17:36:42.780681943Z" level=info msg="StartContainer for \"60ff4ac2230f1b07cbd049364e4d187d691ca27be93a63b883863e4456c74d86\"" Sep 12 17:36:42.886460 containerd[1575]: time="2025-09-12T17:36:42.886384172Z" level=info msg="StartContainer for \"60ff4ac2230f1b07cbd049364e4d187d691ca27be93a63b883863e4456c74d86\" returns successfully" Sep 12 17:36:43.381692 systemd[1]: Started sshd@16-10.0.0.90:22-10.0.0.1:59742.service - OpenSSH per-connection server daemon (10.0.0.1:59742). Sep 12 17:36:43.425953 sshd[5771]: Accepted publickey for core from 10.0.0.1 port 59742 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:43.427916 sshd[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:43.433328 systemd-logind[1556]: New session 17 of user core. Sep 12 17:36:43.443975 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 12 17:36:43.610868 kubelet[2643]: I0912 17:36:43.610708 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6df87d7bb7-nmz5p" podStartSLOduration=53.438342128 podStartE2EDuration="1m13.61065983s" podCreationTimestamp="2025-09-12 17:35:30 +0000 UTC" firstStartedPulling="2025-09-12 17:36:20.400672579 +0000 UTC m=+65.943465698" lastFinishedPulling="2025-09-12 17:36:40.572990292 +0000 UTC m=+86.115783400" observedRunningTime="2025-09-12 17:36:41.566950833 +0000 UTC m=+87.109743941" watchObservedRunningTime="2025-09-12 17:36:43.61065983 +0000 UTC m=+89.153452958" Sep 12 17:36:43.612118 kubelet[2643]: I0912 17:36:43.611049 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-784fd4cb5b-84mxg" podStartSLOduration=3.914056448 podStartE2EDuration="29.611040516s" podCreationTimestamp="2025-09-12 17:36:14 +0000 UTC" firstStartedPulling="2025-09-12 17:36:17.067284942 +0000 UTC m=+62.610078050" lastFinishedPulling="2025-09-12 17:36:42.76426901 +0000 UTC m=+88.307062118" observedRunningTime="2025-09-12 17:36:43.6098833 +0000 UTC m=+89.152676438" watchObservedRunningTime="2025-09-12 17:36:43.611040516 +0000 UTC m=+89.153833624" Sep 12 17:36:44.054329 sshd[5771]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:44.059366 systemd[1]: sshd@16-10.0.0.90:22-10.0.0.1:59742.service: Deactivated successfully. Sep 12 17:36:44.063795 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:36:44.064804 systemd-logind[1556]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:36:44.066529 systemd-logind[1556]: Removed session 17. Sep 12 17:36:46.560334 kubelet[2643]: E0912 17:36:46.560266 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:47.556418 kubelet[2643]: E0912 17:36:47.556361 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:49.065041 systemd[1]: Started sshd@17-10.0.0.90:22-10.0.0.1:59750.service - OpenSSH per-connection server daemon (10.0.0.1:59750). Sep 12 17:36:49.105569 sshd[5788]: Accepted publickey for core from 10.0.0.1 port 59750 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:49.107317 sshd[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:49.111547 systemd-logind[1556]: New session 18 of user core. Sep 12 17:36:49.115892 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:36:49.328613 sshd[5788]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:49.337691 systemd[1]: sshd@17-10.0.0.90:22-10.0.0.1:59750.service: Deactivated successfully. Sep 12 17:36:49.343096 systemd-logind[1556]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:36:49.344239 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:36:49.345475 systemd-logind[1556]: Removed session 18. Sep 12 17:36:52.166454 kubelet[2643]: I0912 17:36:52.166391 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:36:52.610973 systemd-resolved[1465]: Under memory pressure, flushing caches. Sep 12 17:36:52.612781 systemd-journald[1159]: Under memory pressure, flushing caches. Sep 12 17:36:52.611019 systemd-resolved[1465]: Flushed all caches. 
Sep 12 17:36:54.340822 systemd[1]: Started sshd@18-10.0.0.90:22-10.0.0.1:41342.service - OpenSSH per-connection server daemon (10.0.0.1:41342). Sep 12 17:36:54.437911 sshd[5807]: Accepted publickey for core from 10.0.0.1 port 41342 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:54.439728 sshd[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:54.445216 systemd-logind[1556]: New session 19 of user core. Sep 12 17:36:54.450917 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:36:54.675784 sshd[5807]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:54.680874 systemd[1]: sshd@18-10.0.0.90:22-10.0.0.1:41342.service: Deactivated successfully. Sep 12 17:36:54.684470 systemd-logind[1556]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:36:54.685528 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:36:54.688677 systemd-logind[1556]: Removed session 19. Sep 12 17:36:59.691003 systemd[1]: Started sshd@19-10.0.0.90:22-10.0.0.1:41358.service - OpenSSH per-connection server daemon (10.0.0.1:41358). Sep 12 17:36:59.747735 sshd[5829]: Accepted publickey for core from 10.0.0.1 port 41358 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:59.750033 sshd[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:59.756848 systemd-logind[1556]: New session 20 of user core. Sep 12 17:36:59.768047 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:36:59.991376 sshd[5829]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:59.999730 systemd[1]: Started sshd@20-10.0.0.90:22-10.0.0.1:42626.service - OpenSSH per-connection server daemon (10.0.0.1:42626). Sep 12 17:37:00.000358 systemd[1]: sshd@19-10.0.0.90:22-10.0.0.1:41358.service: Deactivated successfully. Sep 12 17:37:00.005121 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:37:00.006065 systemd-logind[1556]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:37:00.007186 systemd-logind[1556]: Removed session 20. Sep 12 17:37:00.036026 sshd[5841]: Accepted publickey for core from 10.0.0.1 port 42626 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:37:00.037775 sshd[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:00.042014 systemd-logind[1556]: New session 21 of user core. Sep 12 17:37:00.049734 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:37:00.422117 sshd[5841]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:00.432000 systemd[1]: Started sshd@21-10.0.0.90:22-10.0.0.1:42630.service - OpenSSH per-connection server daemon (10.0.0.1:42630). Sep 12 17:37:00.432633 systemd[1]: sshd@20-10.0.0.90:22-10.0.0.1:42626.service: Deactivated successfully. Sep 12 17:37:00.435823 systemd-logind[1556]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:37:00.436754 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:37:00.438984 systemd-logind[1556]: Removed session 21. Sep 12 17:37:00.476589 sshd[5857]: Accepted publickey for core from 10.0.0.1 port 42630 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:37:00.478847 sshd[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:00.483852 systemd-logind[1556]: New session 22 of user core. 
Sep 12 17:37:00.490902 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:37:02.805695 sshd[5857]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:02.818949 systemd[1]: Started sshd@22-10.0.0.90:22-10.0.0.1:42636.service - OpenSSH per-connection server daemon (10.0.0.1:42636). Sep 12 17:37:02.819723 systemd[1]: sshd@21-10.0.0.90:22-10.0.0.1:42630.service: Deactivated successfully. Sep 12 17:37:02.828962 systemd-logind[1556]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:37:02.829236 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:37:02.837912 systemd-logind[1556]: Removed session 22. Sep 12 17:37:02.886287 sshd[5937]: Accepted publickey for core from 10.0.0.1 port 42636 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:37:02.888421 sshd[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:02.894107 systemd-logind[1556]: New session 23 of user core. Sep 12 17:37:02.903777 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:37:03.563974 kubelet[2643]: E0912 17:37:03.562709 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:37:03.738958 sshd[5937]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:03.751855 systemd[1]: Started sshd@23-10.0.0.90:22-10.0.0.1:42648.service - OpenSSH per-connection server daemon (10.0.0.1:42648). Sep 12 17:37:03.754803 systemd[1]: sshd@22-10.0.0.90:22-10.0.0.1:42636.service: Deactivated successfully. Sep 12 17:37:03.759268 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:37:03.764245 systemd-logind[1556]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:37:03.766016 systemd-logind[1556]: Removed session 23. Sep 12 17:37:03.791810 sshd[5952]: Accepted publickey for core from 10.0.0.1 port 42648 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:37:03.793977 sshd[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:03.800520 systemd-logind[1556]: New session 24 of user core. Sep 12 17:37:03.809280 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:37:04.011303 sshd[5952]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:04.021730 systemd[1]: sshd@23-10.0.0.90:22-10.0.0.1:42648.service: Deactivated successfully. Sep 12 17:37:04.032224 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:37:04.033585 systemd-logind[1556]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:37:04.034814 systemd-logind[1556]: Removed session 24. Sep 12 17:37:04.643126 systemd-resolved[1465]: Under memory pressure, flushing caches. Sep 12 17:37:04.643135 systemd-resolved[1465]: Flushed all caches. Sep 12 17:37:04.645554 systemd-journald[1159]: Under memory pressure, flushing caches. Sep 12 17:37:09.028952 systemd[1]: Started sshd@24-10.0.0.90:22-10.0.0.1:42660.service - OpenSSH per-connection server daemon (10.0.0.1:42660). Sep 12 17:37:09.079399 sshd[5991]: Accepted publickey for core from 10.0.0.1 port 42660 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:37:09.081504 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:09.087256 systemd-logind[1556]: New session 25 of user core. 
Sep 12 17:37:09.090824 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:37:09.270000 sshd[5991]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:09.274578 systemd[1]: sshd@24-10.0.0.90:22-10.0.0.1:42660.service: Deactivated successfully. Sep 12 17:37:09.277275 systemd-logind[1556]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:37:09.277549 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:37:09.278761 systemd-logind[1556]: Removed session 25. Sep 12 17:37:14.280037 systemd[1]: Started sshd@25-10.0.0.90:22-10.0.0.1:39202.service - OpenSSH per-connection server daemon (10.0.0.1:39202). Sep 12 17:37:14.331967 sshd[6033]: Accepted publickey for core from 10.0.0.1 port 39202 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:37:14.333996 sshd[6033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:14.339599 systemd-logind[1556]: New session 26 of user core. Sep 12 17:37:14.345868 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:37:14.652007 sshd[6033]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:14.657944 systemd[1]: sshd@25-10.0.0.90:22-10.0.0.1:39202.service: Deactivated successfully. Sep 12 17:37:14.661667 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:37:14.664576 systemd-logind[1556]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:37:14.666520 systemd-logind[1556]: Removed session 26. Sep 12 17:37:16.075446 containerd[1575]: time="2025-09-12T17:37:16.075354301Z" level=info msg="StopPodSandbox for \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\"" Sep 12 17:37:16.747055 containerd[1575]: 2025-09-12 17:37:16.296 [WARNING][6060] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9b17846b-f3a4-4894-aba8-8a48d931dcb0", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513", Pod:"coredns-7c65d6cfc9-7nrf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc0f5dafc95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:16.747055 containerd[1575]: 2025-09-12 17:37:16.298 [INFO][6060] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Sep 12 17:37:16.747055 containerd[1575]: 2025-09-12 17:37:16.298 [INFO][6060] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" iface="eth0" netns="" Sep 12 17:37:16.747055 containerd[1575]: 2025-09-12 17:37:16.298 [INFO][6060] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Sep 12 17:37:16.747055 containerd[1575]: 2025-09-12 17:37:16.298 [INFO][6060] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Sep 12 17:37:16.747055 containerd[1575]: 2025-09-12 17:37:16.719 [INFO][6068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" HandleID="k8s-pod-network.9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Workload="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:37:16.747055 containerd[1575]: 2025-09-12 17:37:16.723 [INFO][6068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:16.747055 containerd[1575]: 2025-09-12 17:37:16.723 [INFO][6068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:37:16.747055 containerd[1575]: 2025-09-12 17:37:16.738 [WARNING][6068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" HandleID="k8s-pod-network.9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Workload="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:37:16.747055 containerd[1575]: 2025-09-12 17:37:16.738 [INFO][6068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" HandleID="k8s-pod-network.9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Workload="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:37:16.747055 containerd[1575]: 2025-09-12 17:37:16.740 [INFO][6068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:16.747055 containerd[1575]: 2025-09-12 17:37:16.743 [INFO][6060] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Sep 12 17:37:16.756243 containerd[1575]: time="2025-09-12T17:37:16.756153089Z" level=info msg="TearDown network for sandbox \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\" successfully" Sep 12 17:37:16.756243 containerd[1575]: time="2025-09-12T17:37:16.756223391Z" level=info msg="StopPodSandbox for \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\" returns successfully" Sep 12 17:37:16.756899 containerd[1575]: time="2025-09-12T17:37:16.756856350Z" level=info msg="RemovePodSandbox for \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\"" Sep 12 17:37:16.756899 containerd[1575]: time="2025-09-12T17:37:16.756901145Z" level=info msg="Forcibly stopping sandbox \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\"" Sep 12 17:37:16.832726 containerd[1575]: 2025-09-12 17:37:16.793 [WARNING][6085] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9b17846b-f3a4-4894-aba8-8a48d931dcb0", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3622850b277001ffa27e5056c611b6888372a9797805968ddb0b8b8737b19513", Pod:"coredns-7c65d6cfc9-7nrf5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc0f5dafc95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:16.832726 containerd[1575]: 2025-09-12 17:37:16.793 [INFO][6085] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Sep 12 17:37:16.832726 containerd[1575]: 2025-09-12 17:37:16.793 [INFO][6085] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" iface="eth0" netns="" Sep 12 17:37:16.832726 containerd[1575]: 2025-09-12 17:37:16.793 [INFO][6085] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Sep 12 17:37:16.832726 containerd[1575]: 2025-09-12 17:37:16.793 [INFO][6085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Sep 12 17:37:16.832726 containerd[1575]: 2025-09-12 17:37:16.817 [INFO][6093] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" HandleID="k8s-pod-network.9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Workload="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:37:16.832726 containerd[1575]: 2025-09-12 17:37:16.817 [INFO][6093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:16.832726 containerd[1575]: 2025-09-12 17:37:16.817 [INFO][6093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:37:16.832726 containerd[1575]: 2025-09-12 17:37:16.822 [WARNING][6093] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" HandleID="k8s-pod-network.9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Workload="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:37:16.832726 containerd[1575]: 2025-09-12 17:37:16.822 [INFO][6093] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" HandleID="k8s-pod-network.9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Workload="localhost-k8s-coredns--7c65d6cfc9--7nrf5-eth0" Sep 12 17:37:16.832726 containerd[1575]: 2025-09-12 17:37:16.824 [INFO][6093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:16.832726 containerd[1575]: 2025-09-12 17:37:16.828 [INFO][6085] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678" Sep 12 17:37:16.833215 containerd[1575]: time="2025-09-12T17:37:16.832776347Z" level=info msg="TearDown network for sandbox \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\" successfully" Sep 12 17:37:16.864753 containerd[1575]: time="2025-09-12T17:37:16.864676078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:37:16.865003 containerd[1575]: time="2025-09-12T17:37:16.864797518Z" level=info msg="RemovePodSandbox \"9219de1b8ce0880e674eafed79d4eef7ab505951cf532231651c1f5220aa9678\" returns successfully" Sep 12 17:37:16.865380 containerd[1575]: time="2025-09-12T17:37:16.865340376Z" level=info msg="StopPodSandbox for \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\"" Sep 12 17:37:16.996015 containerd[1575]: 2025-09-12 17:37:16.906 [WARNING][6110] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0", GenerateName:"calico-apiserver-6df87d7bb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c", ResourceVersion:"1237", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df87d7bb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335", Pod:"calico-apiserver-6df87d7bb7-ns7sf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid39fdd3d3f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:16.996015 containerd[1575]: 2025-09-12 17:37:16.906 [INFO][6110] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Sep 12 17:37:16.996015 containerd[1575]: 2025-09-12 17:37:16.906 [INFO][6110] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" iface="eth0" netns="" Sep 12 17:37:16.996015 containerd[1575]: 2025-09-12 17:37:16.906 [INFO][6110] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Sep 12 17:37:16.996015 containerd[1575]: 2025-09-12 17:37:16.906 [INFO][6110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Sep 12 17:37:16.996015 containerd[1575]: 2025-09-12 17:37:16.929 [INFO][6119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" HandleID="k8s-pod-network.21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:37:16.996015 containerd[1575]: 2025-09-12 17:37:16.929 [INFO][6119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:16.996015 containerd[1575]: 2025-09-12 17:37:16.929 [INFO][6119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:16.996015 containerd[1575]: 2025-09-12 17:37:16.958 [WARNING][6119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" HandleID="k8s-pod-network.21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:37:16.996015 containerd[1575]: 2025-09-12 17:37:16.958 [INFO][6119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" HandleID="k8s-pod-network.21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:37:16.996015 containerd[1575]: 2025-09-12 17:37:16.972 [INFO][6119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:16.996015 containerd[1575]: 2025-09-12 17:37:16.978 [INFO][6110] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Sep 12 17:37:16.996015 containerd[1575]: time="2025-09-12T17:37:16.995831385Z" level=info msg="TearDown network for sandbox \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\" successfully" Sep 12 17:37:16.996015 containerd[1575]: time="2025-09-12T17:37:16.995868013Z" level=info msg="StopPodSandbox for \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\" returns successfully" Sep 12 17:37:16.998775 containerd[1575]: time="2025-09-12T17:37:16.997671901Z" level=info msg="RemovePodSandbox for \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\"" Sep 12 17:37:16.998775 containerd[1575]: time="2025-09-12T17:37:16.997715914Z" level=info msg="Forcibly stopping sandbox \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\"" Sep 12 17:37:17.089285 containerd[1575]: 2025-09-12 17:37:17.045 [WARNING][6137] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0", GenerateName:"calico-apiserver-6df87d7bb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fdbb1e09-6ae8-4cec-b2fc-9195acd01c9c", ResourceVersion:"1237", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df87d7bb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddadf29d82c896e8e294d6eac9a6f192ab00e02a5653091c71c3508870b0b335", Pod:"calico-apiserver-6df87d7bb7-ns7sf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid39fdd3d3f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:17.089285 containerd[1575]: 2025-09-12 17:37:17.045 [INFO][6137] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Sep 12 17:37:17.089285 containerd[1575]: 2025-09-12 17:37:17.045 [INFO][6137] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" iface="eth0" netns="" Sep 12 17:37:17.089285 containerd[1575]: 2025-09-12 17:37:17.045 [INFO][6137] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Sep 12 17:37:17.089285 containerd[1575]: 2025-09-12 17:37:17.045 [INFO][6137] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Sep 12 17:37:17.089285 containerd[1575]: 2025-09-12 17:37:17.071 [INFO][6146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" HandleID="k8s-pod-network.21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:37:17.089285 containerd[1575]: 2025-09-12 17:37:17.071 [INFO][6146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:17.089285 containerd[1575]: 2025-09-12 17:37:17.071 [INFO][6146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:17.089285 containerd[1575]: 2025-09-12 17:37:17.078 [WARNING][6146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" HandleID="k8s-pod-network.21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:37:17.089285 containerd[1575]: 2025-09-12 17:37:17.078 [INFO][6146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" HandleID="k8s-pod-network.21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--ns7sf-eth0" Sep 12 17:37:17.089285 containerd[1575]: 2025-09-12 17:37:17.079 [INFO][6146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:17.089285 containerd[1575]: 2025-09-12 17:37:17.084 [INFO][6137] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab" Sep 12 17:37:17.089285 containerd[1575]: time="2025-09-12T17:37:17.088257177Z" level=info msg="TearDown network for sandbox \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\" successfully" Sep 12 17:37:17.095383 containerd[1575]: time="2025-09-12T17:37:17.094558401Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:37:17.095383 containerd[1575]: time="2025-09-12T17:37:17.094751116Z" level=info msg="RemovePodSandbox \"21ee65085b9606f3e65e129bc3eabae16271516b644370030e618090e0f4d4ab\" returns successfully" Sep 12 17:37:17.111612 containerd[1575]: time="2025-09-12T17:37:17.109690109Z" level=info msg="StopPodSandbox for \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\"" Sep 12 17:37:17.196139 containerd[1575]: 2025-09-12 17:37:17.148 [WARNING][6164] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0", GenerateName:"calico-apiserver-6df87d7bb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"a331b72f-6ff9-42b5-a548-9c65ebf3a6da", ResourceVersion:"1353", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df87d7bb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517", Pod:"calico-apiserver-6df87d7bb7-nmz5p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide0c1dcdf76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:17.196139 containerd[1575]: 2025-09-12 17:37:17.149 [INFO][6164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Sep 12 17:37:17.196139 containerd[1575]: 2025-09-12 17:37:17.149 [INFO][6164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" iface="eth0" netns="" Sep 12 17:37:17.196139 containerd[1575]: 2025-09-12 17:37:17.149 [INFO][6164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Sep 12 17:37:17.196139 containerd[1575]: 2025-09-12 17:37:17.149 [INFO][6164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Sep 12 17:37:17.196139 containerd[1575]: 2025-09-12 17:37:17.180 [INFO][6173] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" HandleID="k8s-pod-network.fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:37:17.196139 containerd[1575]: 2025-09-12 17:37:17.180 [INFO][6173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:17.196139 containerd[1575]: 2025-09-12 17:37:17.180 [INFO][6173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:17.196139 containerd[1575]: 2025-09-12 17:37:17.186 [WARNING][6173] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" HandleID="k8s-pod-network.fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:37:17.196139 containerd[1575]: 2025-09-12 17:37:17.186 [INFO][6173] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" HandleID="k8s-pod-network.fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:37:17.196139 containerd[1575]: 2025-09-12 17:37:17.188 [INFO][6173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:17.196139 containerd[1575]: 2025-09-12 17:37:17.191 [INFO][6164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Sep 12 17:37:17.196702 containerd[1575]: time="2025-09-12T17:37:17.196195217Z" level=info msg="TearDown network for sandbox \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\" successfully" Sep 12 17:37:17.196702 containerd[1575]: time="2025-09-12T17:37:17.196231756Z" level=info msg="StopPodSandbox for \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\" returns successfully" Sep 12 17:37:17.196898 containerd[1575]: time="2025-09-12T17:37:17.196866368Z" level=info msg="RemovePodSandbox for \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\"" Sep 12 17:37:17.196937 containerd[1575]: time="2025-09-12T17:37:17.196905012Z" level=info msg="Forcibly stopping sandbox \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\"" Sep 12 17:37:17.295682 containerd[1575]: 2025-09-12 17:37:17.249 [WARNING][6190] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0", GenerateName:"calico-apiserver-6df87d7bb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"a331b72f-6ff9-42b5-a548-9c65ebf3a6da", ResourceVersion:"1353", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df87d7bb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f664e307b598b531daa997e8c602c7fa90f3bd6f61014f5e35f6b0453c2d517", Pod:"calico-apiserver-6df87d7bb7-nmz5p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide0c1dcdf76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:17.295682 containerd[1575]: 2025-09-12 17:37:17.249 [INFO][6190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Sep 12 17:37:17.295682 containerd[1575]: 2025-09-12 17:37:17.249 [INFO][6190] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" iface="eth0" netns="" Sep 12 17:37:17.295682 containerd[1575]: 2025-09-12 17:37:17.249 [INFO][6190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Sep 12 17:37:17.295682 containerd[1575]: 2025-09-12 17:37:17.249 [INFO][6190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Sep 12 17:37:17.295682 containerd[1575]: 2025-09-12 17:37:17.277 [INFO][6198] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" HandleID="k8s-pod-network.fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:37:17.295682 containerd[1575]: 2025-09-12 17:37:17.277 [INFO][6198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:17.295682 containerd[1575]: 2025-09-12 17:37:17.277 [INFO][6198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:17.295682 containerd[1575]: 2025-09-12 17:37:17.285 [WARNING][6198] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" HandleID="k8s-pod-network.fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:37:17.295682 containerd[1575]: 2025-09-12 17:37:17.285 [INFO][6198] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" HandleID="k8s-pod-network.fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Workload="localhost-k8s-calico--apiserver--6df87d7bb7--nmz5p-eth0" Sep 12 17:37:17.295682 containerd[1575]: 2025-09-12 17:37:17.287 [INFO][6198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:17.295682 containerd[1575]: 2025-09-12 17:37:17.291 [INFO][6190] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf" Sep 12 17:37:17.295682 containerd[1575]: time="2025-09-12T17:37:17.294763552Z" level=info msg="TearDown network for sandbox \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\" successfully" Sep 12 17:37:17.300378 containerd[1575]: time="2025-09-12T17:37:17.300166244Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:37:17.300378 containerd[1575]: time="2025-09-12T17:37:17.300268527Z" level=info msg="RemovePodSandbox \"fc55a3643072164dc0e90a9b93a0f5061547f60a7530f263d4504aa617360eaf\" returns successfully" Sep 12 17:37:17.301134 containerd[1575]: time="2025-09-12T17:37:17.301098029Z" level=info msg="StopPodSandbox for \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\"" Sep 12 17:37:17.377773 containerd[1575]: 2025-09-12 17:37:17.338 [WARNING][6216] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--ngmvz-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"002eb908-eead-44c8-b785-c0b17d959030", ResourceVersion:"1390", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5", Pod:"goldmane-7988f88666-ngmvz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic8ecd6454fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:17.377773 containerd[1575]: 2025-09-12 17:37:17.338 [INFO][6216] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Sep 12 17:37:17.377773 containerd[1575]: 2025-09-12 17:37:17.338 [INFO][6216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" iface="eth0" netns="" Sep 12 17:37:17.377773 containerd[1575]: 2025-09-12 17:37:17.338 [INFO][6216] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Sep 12 17:37:17.377773 containerd[1575]: 2025-09-12 17:37:17.338 [INFO][6216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Sep 12 17:37:17.377773 containerd[1575]: 2025-09-12 17:37:17.364 [INFO][6224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" HandleID="k8s-pod-network.310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Workload="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:37:17.377773 containerd[1575]: 2025-09-12 17:37:17.364 [INFO][6224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:17.377773 containerd[1575]: 2025-09-12 17:37:17.364 [INFO][6224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:17.377773 containerd[1575]: 2025-09-12 17:37:17.369 [WARNING][6224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" HandleID="k8s-pod-network.310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Workload="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:37:17.377773 containerd[1575]: 2025-09-12 17:37:17.370 [INFO][6224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" HandleID="k8s-pod-network.310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Workload="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:37:17.377773 containerd[1575]: 2025-09-12 17:37:17.371 [INFO][6224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:17.377773 containerd[1575]: 2025-09-12 17:37:17.374 [INFO][6216] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Sep 12 17:37:17.378633 containerd[1575]: time="2025-09-12T17:37:17.378501690Z" level=info msg="TearDown network for sandbox \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\" successfully" Sep 12 17:37:17.378633 containerd[1575]: time="2025-09-12T17:37:17.378535484Z" level=info msg="StopPodSandbox for \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\" returns successfully" Sep 12 17:37:17.379482 containerd[1575]: time="2025-09-12T17:37:17.379195684Z" level=info msg="RemovePodSandbox for \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\"" Sep 12 17:37:17.379482 containerd[1575]: time="2025-09-12T17:37:17.379224098Z" level=info msg="Forcibly stopping sandbox \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\"" Sep 12 17:37:17.465922 containerd[1575]: 2025-09-12 17:37:17.425 [WARNING][6242] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--ngmvz-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"002eb908-eead-44c8-b785-c0b17d959030", ResourceVersion:"1390", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72c8f78d18dee40748a7ad0261d7d18a4f849e681f7983b08c301b7af8f3f5b5", Pod:"goldmane-7988f88666-ngmvz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic8ecd6454fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:17.465922 containerd[1575]: 2025-09-12 17:37:17.425 [INFO][6242] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Sep 12 17:37:17.465922 containerd[1575]: 2025-09-12 17:37:17.425 [INFO][6242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" iface="eth0" netns="" Sep 12 17:37:17.465922 containerd[1575]: 2025-09-12 17:37:17.425 [INFO][6242] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Sep 12 17:37:17.465922 containerd[1575]: 2025-09-12 17:37:17.425 [INFO][6242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Sep 12 17:37:17.465922 containerd[1575]: 2025-09-12 17:37:17.450 [INFO][6251] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" HandleID="k8s-pod-network.310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Workload="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:37:17.465922 containerd[1575]: 2025-09-12 17:37:17.450 [INFO][6251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:17.465922 containerd[1575]: 2025-09-12 17:37:17.450 [INFO][6251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:17.465922 containerd[1575]: 2025-09-12 17:37:17.458 [WARNING][6251] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" HandleID="k8s-pod-network.310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Workload="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:37:17.465922 containerd[1575]: 2025-09-12 17:37:17.458 [INFO][6251] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" HandleID="k8s-pod-network.310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Workload="localhost-k8s-goldmane--7988f88666--ngmvz-eth0" Sep 12 17:37:17.465922 containerd[1575]: 2025-09-12 17:37:17.459 [INFO][6251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:17.465922 containerd[1575]: 2025-09-12 17:37:17.462 [INFO][6242] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb" Sep 12 17:37:17.465922 containerd[1575]: time="2025-09-12T17:37:17.465872468Z" level=info msg="TearDown network for sandbox \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\" successfully" Sep 12 17:37:17.470422 containerd[1575]: time="2025-09-12T17:37:17.470375436Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:37:17.470595 containerd[1575]: time="2025-09-12T17:37:17.470563162Z" level=info msg="RemovePodSandbox \"310ce887c5036399afb10c6eecc38ec45851d5575fdc0ce8f254b184a684b1cb\" returns successfully" Sep 12 17:37:17.471147 containerd[1575]: time="2025-09-12T17:37:17.471110889Z" level=info msg="StopPodSandbox for \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\"" Sep 12 17:37:17.544956 containerd[1575]: 2025-09-12 17:37:17.508 [WARNING][6269] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hwr5d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0afbe5bf-287d-4d57-b5ad-630766b8207a", ResourceVersion:"1284", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac", Pod:"csi-node-driver-hwr5d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f7c47a91fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:17.544956 containerd[1575]: 2025-09-12 17:37:17.509 [INFO][6269] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Sep 12 17:37:17.544956 containerd[1575]: 2025-09-12 17:37:17.509 [INFO][6269] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" iface="eth0" netns="" Sep 12 17:37:17.544956 containerd[1575]: 2025-09-12 17:37:17.509 [INFO][6269] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Sep 12 17:37:17.544956 containerd[1575]: 2025-09-12 17:37:17.509 [INFO][6269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Sep 12 17:37:17.544956 containerd[1575]: 2025-09-12 17:37:17.531 [INFO][6277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" HandleID="k8s-pod-network.02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Workload="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:37:17.544956 containerd[1575]: 2025-09-12 17:37:17.531 [INFO][6277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:17.544956 containerd[1575]: 2025-09-12 17:37:17.531 [INFO][6277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:17.544956 containerd[1575]: 2025-09-12 17:37:17.537 [WARNING][6277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" HandleID="k8s-pod-network.02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Workload="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:37:17.544956 containerd[1575]: 2025-09-12 17:37:17.537 [INFO][6277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" HandleID="k8s-pod-network.02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Workload="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:37:17.544956 containerd[1575]: 2025-09-12 17:37:17.538 [INFO][6277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:17.544956 containerd[1575]: 2025-09-12 17:37:17.541 [INFO][6269] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Sep 12 17:37:17.545557 containerd[1575]: time="2025-09-12T17:37:17.545517536Z" level=info msg="TearDown network for sandbox \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\" successfully" Sep 12 17:37:17.545557 containerd[1575]: time="2025-09-12T17:37:17.545550327Z" level=info msg="StopPodSandbox for \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\" returns successfully" Sep 12 17:37:17.546458 containerd[1575]: time="2025-09-12T17:37:17.546086522Z" level=info msg="RemovePodSandbox for \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\"" Sep 12 17:37:17.546458 containerd[1575]: time="2025-09-12T17:37:17.546115237Z" level=info msg="Forcibly stopping sandbox \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\"" Sep 12 17:37:17.618957 containerd[1575]: 2025-09-12 17:37:17.579 [WARNING][6295] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hwr5d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0afbe5bf-287d-4d57-b5ad-630766b8207a", ResourceVersion:"1284", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0bc7cdc33f110c5fd02081fdba7d53eaf7489d7dda7922c9c24931d5f6226aac", Pod:"csi-node-driver-hwr5d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f7c47a91fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:17.618957 containerd[1575]: 2025-09-12 17:37:17.580 [INFO][6295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Sep 12 17:37:17.618957 containerd[1575]: 2025-09-12 17:37:17.580 [INFO][6295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" iface="eth0" netns="" Sep 12 17:37:17.618957 containerd[1575]: 2025-09-12 17:37:17.580 [INFO][6295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Sep 12 17:37:17.618957 containerd[1575]: 2025-09-12 17:37:17.580 [INFO][6295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Sep 12 17:37:17.618957 containerd[1575]: 2025-09-12 17:37:17.604 [INFO][6303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" HandleID="k8s-pod-network.02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Workload="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:37:17.618957 containerd[1575]: 2025-09-12 17:37:17.604 [INFO][6303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:17.618957 containerd[1575]: 2025-09-12 17:37:17.604 [INFO][6303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:17.618957 containerd[1575]: 2025-09-12 17:37:17.610 [WARNING][6303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" HandleID="k8s-pod-network.02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Workload="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:37:17.618957 containerd[1575]: 2025-09-12 17:37:17.610 [INFO][6303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" HandleID="k8s-pod-network.02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Workload="localhost-k8s-csi--node--driver--hwr5d-eth0" Sep 12 17:37:17.618957 containerd[1575]: 2025-09-12 17:37:17.612 [INFO][6303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:17.618957 containerd[1575]: 2025-09-12 17:37:17.615 [INFO][6295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4" Sep 12 17:37:17.619487 containerd[1575]: time="2025-09-12T17:37:17.619018846Z" level=info msg="TearDown network for sandbox \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\" successfully" Sep 12 17:37:17.624237 containerd[1575]: time="2025-09-12T17:37:17.624202683Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:37:17.624314 containerd[1575]: time="2025-09-12T17:37:17.624294608Z" level=info msg="RemovePodSandbox \"02ae69c0a0f770ca98e23991cc14e3fddc35347fbce6110800f41893581895b4\" returns successfully" Sep 12 17:37:17.624937 containerd[1575]: time="2025-09-12T17:37:17.624894924Z" level=info msg="StopPodSandbox for \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\"" Sep 12 17:37:17.700265 containerd[1575]: 2025-09-12 17:37:17.659 [WARNING][6322] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"caab70bd-65e3-454e-b4f6-312204583e4c", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604", Pod:"coredns-7c65d6cfc9-7qbhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaad212b6789", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:17.700265 containerd[1575]: 2025-09-12 17:37:17.659 [INFO][6322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Sep 12 17:37:17.700265 containerd[1575]: 2025-09-12 17:37:17.659 [INFO][6322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" iface="eth0" netns="" Sep 12 17:37:17.700265 containerd[1575]: 2025-09-12 17:37:17.659 [INFO][6322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Sep 12 17:37:17.700265 containerd[1575]: 2025-09-12 17:37:17.660 [INFO][6322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Sep 12 17:37:17.700265 containerd[1575]: 2025-09-12 17:37:17.682 [INFO][6330] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" HandleID="k8s-pod-network.d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Workload="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:37:17.700265 containerd[1575]: 2025-09-12 17:37:17.682 [INFO][6330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:17.700265 containerd[1575]: 2025-09-12 17:37:17.682 [INFO][6330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:37:17.700265 containerd[1575]: 2025-09-12 17:37:17.690 [WARNING][6330] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" HandleID="k8s-pod-network.d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Workload="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:37:17.700265 containerd[1575]: 2025-09-12 17:37:17.690 [INFO][6330] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" HandleID="k8s-pod-network.d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Workload="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:37:17.700265 containerd[1575]: 2025-09-12 17:37:17.693 [INFO][6330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:17.700265 containerd[1575]: 2025-09-12 17:37:17.697 [INFO][6322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Sep 12 17:37:17.700917 containerd[1575]: time="2025-09-12T17:37:17.700300592Z" level=info msg="TearDown network for sandbox \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\" successfully" Sep 12 17:37:17.700917 containerd[1575]: time="2025-09-12T17:37:17.700328355Z" level=info msg="StopPodSandbox for \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\" returns successfully" Sep 12 17:37:17.703543 containerd[1575]: time="2025-09-12T17:37:17.701011379Z" level=info msg="RemovePodSandbox for \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\"" Sep 12 17:37:17.703543 containerd[1575]: time="2025-09-12T17:37:17.701050303Z" level=info msg="Forcibly stopping sandbox \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\"" Sep 12 17:37:17.800311 containerd[1575]: 2025-09-12 17:37:17.761 [WARNING][6349] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"caab70bd-65e3-454e-b4f6-312204583e4c", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4921da3126dd78c1405bbc5997af8ad3cbfdb7d0b54df097a390e15b8bb8604", Pod:"coredns-7c65d6cfc9-7qbhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaad212b6789", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:17.800311 containerd[1575]: 2025-09-12 17:37:17.761 [INFO][6349] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Sep 12 17:37:17.800311 containerd[1575]: 2025-09-12 17:37:17.761 [INFO][6349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" iface="eth0" netns="" Sep 12 17:37:17.800311 containerd[1575]: 2025-09-12 17:37:17.761 [INFO][6349] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Sep 12 17:37:17.800311 containerd[1575]: 2025-09-12 17:37:17.761 [INFO][6349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Sep 12 17:37:17.800311 containerd[1575]: 2025-09-12 17:37:17.785 [INFO][6358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" HandleID="k8s-pod-network.d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Workload="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:37:17.800311 containerd[1575]: 2025-09-12 17:37:17.785 [INFO][6358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:17.800311 containerd[1575]: 2025-09-12 17:37:17.785 [INFO][6358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:37:17.800311 containerd[1575]: 2025-09-12 17:37:17.790 [WARNING][6358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" HandleID="k8s-pod-network.d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Workload="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:37:17.800311 containerd[1575]: 2025-09-12 17:37:17.790 [INFO][6358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" HandleID="k8s-pod-network.d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Workload="localhost-k8s-coredns--7c65d6cfc9--7qbhp-eth0" Sep 12 17:37:17.800311 containerd[1575]: 2025-09-12 17:37:17.792 [INFO][6358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:17.800311 containerd[1575]: 2025-09-12 17:37:17.795 [INFO][6349] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1" Sep 12 17:37:17.800815 containerd[1575]: time="2025-09-12T17:37:17.800294748Z" level=info msg="TearDown network for sandbox \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\" successfully" Sep 12 17:37:17.804570 containerd[1575]: time="2025-09-12T17:37:17.804543494Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:37:17.804626 containerd[1575]: time="2025-09-12T17:37:17.804608718Z" level=info msg="RemovePodSandbox \"d3ee9dee55f93de03b75a5024473d52d031c9378cba149e0a52011ba9e6380c1\" returns successfully" Sep 12 17:37:17.805097 containerd[1575]: time="2025-09-12T17:37:17.805073919Z" level=info msg="StopPodSandbox for \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\"" Sep 12 17:37:17.880076 containerd[1575]: 2025-09-12 17:37:17.842 [WARNING][6375] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0", GenerateName:"calico-kube-controllers-577f47b55c-", Namespace:"calico-system", SelfLink:"", UID:"668d9640-9d0d-41dc-9d50-7ca43eccf073", ResourceVersion:"1264", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"577f47b55c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132", Pod:"calico-kube-controllers-577f47b55c-knc26", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali20f925d1e60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:17.880076 containerd[1575]: 2025-09-12 17:37:17.842 [INFO][6375] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Sep 12 17:37:17.880076 containerd[1575]: 2025-09-12 17:37:17.842 [INFO][6375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" iface="eth0" netns="" Sep 12 17:37:17.880076 containerd[1575]: 2025-09-12 17:37:17.842 [INFO][6375] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Sep 12 17:37:17.880076 containerd[1575]: 2025-09-12 17:37:17.842 [INFO][6375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Sep 12 17:37:17.880076 containerd[1575]: 2025-09-12 17:37:17.864 [INFO][6384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" HandleID="k8s-pod-network.c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Workload="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:37:17.880076 containerd[1575]: 2025-09-12 17:37:17.864 [INFO][6384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:17.880076 containerd[1575]: 2025-09-12 17:37:17.864 [INFO][6384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:17.880076 containerd[1575]: 2025-09-12 17:37:17.870 [WARNING][6384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" HandleID="k8s-pod-network.c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Workload="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:37:17.880076 containerd[1575]: 2025-09-12 17:37:17.870 [INFO][6384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" HandleID="k8s-pod-network.c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Workload="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:37:17.880076 containerd[1575]: 2025-09-12 17:37:17.871 [INFO][6384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:17.880076 containerd[1575]: 2025-09-12 17:37:17.876 [INFO][6375] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Sep 12 17:37:17.880569 containerd[1575]: time="2025-09-12T17:37:17.880136818Z" level=info msg="TearDown network for sandbox \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\" successfully" Sep 12 17:37:17.880569 containerd[1575]: time="2025-09-12T17:37:17.880176984Z" level=info msg="StopPodSandbox for \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\" returns successfully" Sep 12 17:37:17.880872 containerd[1575]: time="2025-09-12T17:37:17.880832605Z" level=info msg="RemovePodSandbox for \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\"" Sep 12 17:37:17.880906 containerd[1575]: time="2025-09-12T17:37:17.880881799Z" level=info msg="Forcibly stopping sandbox \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\"" Sep 12 17:37:17.953836 containerd[1575]: 2025-09-12 17:37:17.917 [WARNING][6402] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0", GenerateName:"calico-kube-controllers-577f47b55c-", Namespace:"calico-system", SelfLink:"", UID:"668d9640-9d0d-41dc-9d50-7ca43eccf073", ResourceVersion:"1264", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"577f47b55c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47ddf11907f9deeaf38ba2f612175d1f56d3774eae89b317a5731a5e6b6a2132", Pod:"calico-kube-controllers-577f47b55c-knc26", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali20f925d1e60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:17.953836 containerd[1575]: 2025-09-12 17:37:17.917 [INFO][6402] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Sep 12 17:37:17.953836 containerd[1575]: 2025-09-12 17:37:17.917 [INFO][6402] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" iface="eth0" netns="" Sep 12 17:37:17.953836 containerd[1575]: 2025-09-12 17:37:17.917 [INFO][6402] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Sep 12 17:37:17.953836 containerd[1575]: 2025-09-12 17:37:17.917 [INFO][6402] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Sep 12 17:37:17.953836 containerd[1575]: 2025-09-12 17:37:17.939 [INFO][6410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" HandleID="k8s-pod-network.c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Workload="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:37:17.953836 containerd[1575]: 2025-09-12 17:37:17.939 [INFO][6410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:17.953836 containerd[1575]: 2025-09-12 17:37:17.939 [INFO][6410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:17.953836 containerd[1575]: 2025-09-12 17:37:17.945 [WARNING][6410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" HandleID="k8s-pod-network.c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Workload="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:37:17.953836 containerd[1575]: 2025-09-12 17:37:17.945 [INFO][6410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" HandleID="k8s-pod-network.c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Workload="localhost-k8s-calico--kube--controllers--577f47b55c--knc26-eth0" Sep 12 17:37:17.953836 containerd[1575]: 2025-09-12 17:37:17.946 [INFO][6410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:17.953836 containerd[1575]: 2025-09-12 17:37:17.950 [INFO][6402] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4" Sep 12 17:37:17.954772 containerd[1575]: time="2025-09-12T17:37:17.953889205Z" level=info msg="TearDown network for sandbox \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\" successfully" Sep 12 17:37:17.959227 containerd[1575]: time="2025-09-12T17:37:17.959149948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:37:17.959300 containerd[1575]: time="2025-09-12T17:37:17.959263112Z" level=info msg="RemovePodSandbox \"c6efdafdb48da36d2be661653e03f18deb295cd1ade6223773afc7c7d13bf7c4\" returns successfully" Sep 12 17:37:19.668106 systemd[1]: Started sshd@26-10.0.0.90:22-10.0.0.1:39218.service - OpenSSH per-connection server daemon (10.0.0.1:39218). Sep 12 17:37:19.774486 sshd[6417]: Accepted publickey for core from 10.0.0.1 port 39218 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:37:19.775383 sshd[6417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:19.801917 systemd-logind[1556]: New session 27 of user core. Sep 12 17:37:19.809953 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 17:37:20.169275 sshd[6417]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:20.174978 systemd[1]: sshd@26-10.0.0.90:22-10.0.0.1:39218.service: Deactivated successfully. Sep 12 17:37:20.179914 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 17:37:20.181664 systemd-logind[1556]: Session 27 logged out. Waiting for processes to exit. Sep 12 17:37:20.183693 systemd-logind[1556]: Removed session 27. Sep 12 17:37:25.179791 systemd[1]: Started sshd@27-10.0.0.90:22-10.0.0.1:41954.service - OpenSSH per-connection server daemon (10.0.0.1:41954). Sep 12 17:37:25.306933 sshd[6434]: Accepted publickey for core from 10.0.0.1 port 41954 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:37:25.309200 sshd[6434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:25.314003 systemd-logind[1556]: New session 28 of user core. Sep 12 17:37:25.322924 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 17:37:25.503779 sshd[6434]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:25.509377 systemd[1]: sshd@27-10.0.0.90:22-10.0.0.1:41954.service: Deactivated successfully. 
Sep 12 17:37:25.512194 systemd-logind[1556]: Session 28 logged out. Waiting for processes to exit. Sep 12 17:37:25.512900 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 17:37:25.515061 systemd-logind[1556]: Removed session 28.