Sep 13 00:10:01.930589 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025 Sep 13 00:10:01.930618 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:10:01.930632 kernel: BIOS-provided physical RAM map: Sep 13 00:10:01.930641 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 13 00:10:01.930649 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 13 00:10:01.930657 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 13 00:10:01.930666 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Sep 13 00:10:01.930675 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Sep 13 00:10:01.930683 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 13 00:10:01.930695 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 13 00:10:01.930707 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 13 00:10:01.930715 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 13 00:10:01.930723 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 13 00:10:01.930731 kernel: NX (Execute Disable) protection: active Sep 13 00:10:01.930741 kernel: APIC: Static calls initialized Sep 13 00:10:01.930754 kernel: SMBIOS 2.8 present. 
Sep 13 00:10:01.930763 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Sep 13 00:10:01.930772 kernel: Hypervisor detected: KVM Sep 13 00:10:01.930892 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 13 00:10:01.930902 kernel: kvm-clock: using sched offset of 2722038204 cycles Sep 13 00:10:01.930912 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 13 00:10:01.930922 kernel: tsc: Detected 2794.748 MHz processor Sep 13 00:10:01.930931 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 13 00:10:01.930940 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 13 00:10:01.930954 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Sep 13 00:10:01.930963 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 13 00:10:01.930973 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 13 00:10:01.930982 kernel: Using GB pages for direct mapping Sep 13 00:10:01.930991 kernel: ACPI: Early table checksum verification disabled Sep 13 00:10:01.931000 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Sep 13 00:10:01.931009 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:10:01.931018 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:10:01.931027 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:10:01.931040 kernel: ACPI: FACS 0x000000009CFE0000 000040 Sep 13 00:10:01.931049 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:10:01.931059 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:10:01.931068 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:10:01.931077 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:10:01.931086 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Sep 13 00:10:01.931095 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Sep 13 00:10:01.931112 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Sep 13 00:10:01.931128 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Sep 13 00:10:01.931139 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Sep 13 00:10:01.931151 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Sep 13 00:10:01.931163 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Sep 13 00:10:01.931175 kernel: No NUMA configuration found Sep 13 00:10:01.931187 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Sep 13 00:10:01.931212 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Sep 13 00:10:01.931222 kernel: Zone ranges: Sep 13 00:10:01.931231 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 00:10:01.931241 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Sep 13 00:10:01.931251 kernel: Normal empty Sep 13 00:10:01.931260 kernel: Movable zone start for each node Sep 13 00:10:01.931269 kernel: Early memory node ranges Sep 13 00:10:01.931279 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 13 00:10:01.931288 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Sep 13 00:10:01.931297 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Sep 13 00:10:01.931310 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:10:01.931320 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 13 00:10:01.931329 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 13 00:10:01.931338 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 13 00:10:01.931347 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 13 00:10:01.931357 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 13 00:10:01.931366 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 13 00:10:01.931376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 13 00:10:01.931385 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 13 00:10:01.931398 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 13 00:10:01.931407 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 13 00:10:01.931416 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:10:01.931426 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 13 00:10:01.931435 kernel: TSC deadline timer available Sep 13 00:10:01.931445 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 13 00:10:01.931454 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 13 00:10:01.931463 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 13 00:10:01.931473 kernel: kvm-guest: setup PV sched yield Sep 13 00:10:01.931485 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 13 00:10:01.931495 kernel: Booting paravirtualized kernel on KVM Sep 13 00:10:01.931505 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:10:01.931514 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 13 00:10:01.931524 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 13 00:10:01.931534 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 13 00:10:01.931543 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 13 00:10:01.931552 kernel: kvm-guest: PV spinlocks enabled Sep 13 00:10:01.931561 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 13 00:10:01.931575 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:10:01.931586 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:10:01.931595 kernel: random: crng init done Sep 13 00:10:01.931604 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 13 00:10:01.931614 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:10:01.931623 kernel: Fallback order for Node 0: 0 Sep 13 00:10:01.931633 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Sep 13 00:10:01.931642 kernel: Policy zone: DMA32 Sep 13 00:10:01.931654 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:10:01.931665 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 136900K reserved, 0K cma-reserved) Sep 13 00:10:01.931674 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 13 00:10:01.931684 kernel: ftrace: allocating 37974 entries in 149 pages Sep 13 00:10:01.931693 kernel: ftrace: allocated 149 pages with 4 groups Sep 13 00:10:01.931703 kernel: Dynamic Preempt: voluntary Sep 13 00:10:01.931712 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 00:10:01.931722 kernel: rcu: RCU event tracing is enabled. Sep 13 00:10:01.931732 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 13 00:10:01.931745 kernel: Trampoline variant of Tasks RCU enabled. Sep 13 00:10:01.931755 kernel: Rude variant of Tasks RCU enabled. Sep 13 00:10:01.931764 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:10:01.931774 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 13 00:10:01.931796 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 13 00:10:01.931805 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 13 00:10:01.931814 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 13 00:10:01.931824 kernel: Console: colour VGA+ 80x25 Sep 13 00:10:01.931833 kernel: printk: console [ttyS0] enabled Sep 13 00:10:01.931842 kernel: ACPI: Core revision 20230628 Sep 13 00:10:01.931855 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 13 00:10:01.931864 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 00:10:01.931873 kernel: x2apic enabled Sep 13 00:10:01.931883 kernel: APIC: Switched APIC routing to: physical x2apic Sep 13 00:10:01.931892 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 13 00:10:01.931902 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 13 00:10:01.931911 kernel: kvm-guest: setup PV IPIs Sep 13 00:10:01.931932 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 13 00:10:01.931942 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 13 00:10:01.931952 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 13 00:10:01.931962 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 13 00:10:01.931974 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 13 00:10:01.931984 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 13 00:10:01.931994 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 00:10:01.932004 kernel: Spectre V2 : Mitigation: Retpolines Sep 13 00:10:01.932015 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 13 00:10:01.932027 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 13 00:10:01.932038 kernel: active return thunk: retbleed_return_thunk Sep 13 00:10:01.932048 kernel: RETBleed: Mitigation: untrained return thunk Sep 13 00:10:01.932058 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 13 00:10:01.932068 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 13 00:10:01.932078 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 13 00:10:01.932089 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 13 00:10:01.932099 kernel: active return thunk: srso_return_thunk Sep 13 00:10:01.932112 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 13 00:10:01.932122 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 00:10:01.932132 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 00:10:01.932143 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 00:10:01.932153 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 00:10:01.932163 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 13 00:10:01.932172 kernel: Freeing SMP alternatives memory: 32K Sep 13 00:10:01.932182 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:10:01.932192 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 13 00:10:01.932215 kernel: landlock: Up and running. Sep 13 00:10:01.932225 kernel: SELinux: Initializing. Sep 13 00:10:01.932235 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:10:01.932245 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:10:01.932255 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 13 00:10:01.932265 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 13 00:10:01.932275 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 13 00:10:01.932286 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 13 00:10:01.932296 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 13 00:10:01.932309 kernel: ... version: 0 Sep 13 00:10:01.932319 kernel: ... bit width: 48 Sep 13 00:10:01.932329 kernel: ... generic registers: 6 Sep 13 00:10:01.932339 kernel: ... value mask: 0000ffffffffffff Sep 13 00:10:01.932349 kernel: ... max period: 00007fffffffffff Sep 13 00:10:01.932359 kernel: ... fixed-purpose events: 0 Sep 13 00:10:01.932369 kernel: ... 
event mask: 000000000000003f Sep 13 00:10:01.932379 kernel: signal: max sigframe size: 1776 Sep 13 00:10:01.932389 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:10:01.932402 kernel: rcu: Max phase no-delay instances is 400. Sep 13 00:10:01.932412 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:10:01.932422 kernel: smpboot: x86: Booting SMP configuration: Sep 13 00:10:01.932431 kernel: .... node #0, CPUs: #1 #2 #3 Sep 13 00:10:01.932441 kernel: smp: Brought up 1 node, 4 CPUs Sep 13 00:10:01.932452 kernel: smpboot: Max logical packages: 1 Sep 13 00:10:01.932462 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 13 00:10:01.932472 kernel: devtmpfs: initialized Sep 13 00:10:01.932482 kernel: x86/mm: Memory block size: 128MB Sep 13 00:10:01.932492 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:10:01.932505 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 13 00:10:01.932515 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:10:01.932525 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:10:01.932535 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:10:01.932545 kernel: audit: type=2000 audit(1757722201.013:1): state=initialized audit_enabled=0 res=1 Sep 13 00:10:01.932555 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:10:01.932565 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 00:10:01.932575 kernel: cpuidle: using governor menu Sep 13 00:10:01.932588 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:10:01.932598 kernel: dca service started, version 1.12.1 Sep 13 00:10:01.932654 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 13 00:10:01.932665 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 13 00:10:01.932675 kernel: PCI: Using configuration type 1 for base access Sep 13 00:10:01.932685 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 13 00:10:01.932695 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:10:01.932705 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 13 00:10:01.932715 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:10:01.932729 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 13 00:10:01.932739 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:10:01.932748 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:10:01.932759 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:10:01.932769 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:10:01.932829 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 13 00:10:01.932841 kernel: ACPI: Interpreter enabled Sep 13 00:10:01.932851 kernel: ACPI: PM: (supports S0 S3 S5) Sep 13 00:10:01.932861 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 00:10:01.932875 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 00:10:01.932885 kernel: PCI: Using E820 reservations for host bridge windows Sep 13 00:10:01.932895 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 13 00:10:01.932905 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:10:01.933143 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:10:01.933317 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 13 00:10:01.933468 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 13 00:10:01.933482 kernel: PCI host bridge to bus 0000:00 Sep 13 00:10:01.933635 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 00:10:01.933775 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 13 00:10:01.933929 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 00:10:01.934065 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 13 00:10:01.934212 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 13 00:10:01.934364 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 13 00:10:01.934575 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:10:01.934754 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 13 00:10:01.934940 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 13 00:10:01.935093 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Sep 13 00:10:01.935256 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Sep 13 00:10:01.935405 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Sep 13 00:10:01.935552 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 00:10:01.935718 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 13 00:10:01.935889 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Sep 13 00:10:01.936147 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Sep 13 00:10:01.936318 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Sep 13 00:10:01.936524 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 13 00:10:01.936682 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Sep 13 00:10:01.936860 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Sep 13 00:10:01.937019 kernel: pci 0000:00:03.0: reg 0x20: [mem 
0xfe004000-0xfe007fff 64bit pref] Sep 13 00:10:01.937189 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 13 00:10:01.937358 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Sep 13 00:10:01.937507 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Sep 13 00:10:01.937657 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Sep 13 00:10:01.937828 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Sep 13 00:10:01.937990 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 13 00:10:01.938153 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 13 00:10:01.938392 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 13 00:10:01.938563 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Sep 13 00:10:01.938750 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Sep 13 00:10:01.938942 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 13 00:10:01.939097 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 13 00:10:01.939118 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 13 00:10:01.939129 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 13 00:10:01.939140 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 00:10:01.939151 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 13 00:10:01.939161 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 13 00:10:01.939172 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 13 00:10:01.939182 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 13 00:10:01.939193 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 13 00:10:01.939215 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 13 00:10:01.939230 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 13 00:10:01.939240 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 13 00:10:01.939250 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 13 00:10:01.939261 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 13 00:10:01.939271 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 13 00:10:01.939282 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 13 00:10:01.939293 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 13 00:10:01.939303 kernel: iommu: Default domain type: Translated Sep 13 00:10:01.939314 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:10:01.939327 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:10:01.939338 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:10:01.939348 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 13 00:10:01.939359 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Sep 13 00:10:01.939558 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 13 00:10:01.939707 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 13 00:10:01.939884 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:10:01.939900 kernel: vgaarb: loaded Sep 13 00:10:01.939915 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 13 00:10:01.939926 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 13 00:10:01.939937 kernel: clocksource: Switched to clocksource kvm-clock Sep 13 00:10:01.939948 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:10:01.939964 kernel: VFS: Dquot-cache hash table 
entries: 512 (order 0, 4096 bytes) Sep 13 00:10:01.939975 kernel: pnp: PnP ACPI init Sep 13 00:10:01.940157 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 13 00:10:01.940175 kernel: pnp: PnP ACPI: found 6 devices Sep 13 00:10:01.940186 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:10:01.940246 kernel: NET: Registered PF_INET protocol family Sep 13 00:10:01.940257 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:10:01.940268 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 13 00:10:01.940280 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:10:01.940291 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:10:01.940303 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 13 00:10:01.940314 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 13 00:10:01.940325 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:10:01.940340 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:10:01.940351 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:10:01.940362 kernel: NET: Registered PF_XDP protocol family Sep 13 00:10:01.940510 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 00:10:01.940647 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 00:10:01.940988 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 00:10:01.941130 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 13 00:10:01.941283 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 13 00:10:01.941418 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 13 00:10:01.941437 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:10:01.941449 kernel: Initialise system trusted keyrings Sep 13 00:10:01.941460 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 13 00:10:01.941471 kernel: Key type asymmetric registered Sep 13 00:10:01.941482 kernel: Asymmetric key parser 'x509' registered Sep 13 00:10:01.941493 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 13 00:10:01.941505 kernel: io scheduler mq-deadline registered Sep 13 00:10:01.941516 kernel: io scheduler kyber registered Sep 13 00:10:01.941527 kernel: io scheduler bfq registered Sep 13 00:10:01.941541 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:10:01.941553 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 13 00:10:01.941565 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 13 00:10:01.941576 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 13 00:10:01.941587 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:10:01.941598 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:10:01.941609 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 13 00:10:01.941620 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 00:10:01.941631 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 00:10:01.941797 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 13 00:10:01.941940 kernel: rtc_cmos 00:04: registered as rtc0 Sep 13 00:10:01.942083 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:10:01 UTC (1757722201) Sep 13 00:10:01.942236 
kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 13 00:10:01.942250 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 13 00:10:01.942262 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 00:10:01.942273 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:10:01.942284 kernel: Segment Routing with IPv6 Sep 13 00:10:01.942300 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:10:01.942311 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:10:01.942322 kernel: Key type dns_resolver registered Sep 13 00:10:01.942332 kernel: IPI shorthand broadcast: enabled Sep 13 00:10:01.942343 kernel: sched_clock: Marking stable (821002538, 123873429)->(968139938, -23263971) Sep 13 00:10:01.942354 kernel: registered taskstats version 1 Sep 13 00:10:01.942365 kernel: Loading compiled-in X.509 certificates Sep 13 00:10:01.942377 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6' Sep 13 00:10:01.942388 kernel: Key type .fscrypt registered Sep 13 00:10:01.942402 kernel: Key type fscrypt-provisioning registered Sep 13 00:10:01.942413 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 00:10:01.942424 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:10:01.942435 kernel: ima: No architecture policies found Sep 13 00:10:01.942446 kernel: clk: Disabling unused clocks Sep 13 00:10:01.942456 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 13 00:10:01.942467 kernel: Write protecting the kernel read-only data: 36864k Sep 13 00:10:01.942478 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 13 00:10:01.942492 kernel: Run /init as init process Sep 13 00:10:01.942503 kernel: with arguments: Sep 13 00:10:01.942514 kernel: /init Sep 13 00:10:01.942524 kernel: with environment: Sep 13 00:10:01.942535 kernel: HOME=/ Sep 13 00:10:01.942545 kernel: TERM=linux Sep 13 00:10:01.942556 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:10:01.942570 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:10:01.942584 systemd[1]: Detected virtualization kvm. Sep 13 00:10:01.942599 systemd[1]: Detected architecture x86-64. Sep 13 00:10:01.942610 systemd[1]: Running in initrd. Sep 13 00:10:01.942621 systemd[1]: No hostname configured, using default hostname. Sep 13 00:10:01.942633 systemd[1]: Hostname set to <localhost>. Sep 13 00:10:01.942644 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:10:01.942656 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:10:01.942667 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:10:01.942681 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:10:01.942694 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:10:01.942720 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:10:01.942735 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:10:01.942747 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:10:01.942764 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:10:01.942872 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:10:01.942886 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:10:01.942898 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:10:01.942910 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:10:01.942922 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:10:01.942934 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:10:01.942946 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:10:01.942967 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:10:01.943060 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:10:01.943075 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:10:01.943087 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:10:01.943102 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:10:01.943114 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:10:01.943127 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:10:01.943143 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:10:01.943157 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:10:01.943173 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:10:01.943185 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:10:01.943222 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:10:01.943255 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:10:01.943284 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:10:01.943299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:10:01.943312 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:10:01.943326 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:10:01.943338 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:10:01.943378 systemd-journald[192]: Collecting audit messages is disabled. Sep 13 00:10:01.943415 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:10:01.943431 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:10:01.943444 systemd-journald[192]: Journal started Sep 13 00:10:01.943471 systemd-journald[192]: Runtime Journal (/run/log/journal/2b9966dd86fb4f88b3f4c949a69f6f35) is 6.0M, max 48.4M, 42.3M free. Sep 13 00:10:01.936659 systemd-modules-load[194]: Inserted module 'overlay' Sep 13 00:10:01.983067 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Sep 13 00:10:01.983103 kernel: Bridge firewalling registered Sep 13 00:10:01.973012 systemd-modules-load[194]: Inserted module 'br_netfilter' Sep 13 00:10:01.984889 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:10:01.985093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:10:02.001375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:10:02.003654 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:10:02.007401 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:10:02.009087 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:10:02.014640 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:10:02.018761 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:10:02.021807 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:10:02.029403 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:10:02.044174 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:10:02.048045 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:10:02.053101 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:10:02.074492 dracut-cmdline[231]: dracut-dracut-053 Sep 13 00:10:02.077753 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:10:02.086830 systemd-resolved[221]: Positive Trust Anchors: Sep 13 00:10:02.086851 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:10:02.086888 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:10:02.090157 systemd-resolved[221]: Defaulting to hostname 'linux'. Sep 13 00:10:02.091833 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:10:02.099929 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:10:02.199863 kernel: SCSI subsystem initialized Sep 13 00:10:02.213829 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:10:02.235729 kernel: iscsi: registered transport (tcp) Sep 13 00:10:02.269453 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:10:02.269549 kernel: QLogic iSCSI HBA Driver Sep 13 00:10:02.344411 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Sep 13 00:10:02.358092 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:10:02.399198 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:10:02.399294 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:10:02.399311 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:10:02.457842 kernel: raid6: avx2x4 gen() 27637 MB/s Sep 13 00:10:02.474853 kernel: raid6: avx2x2 gen() 25661 MB/s Sep 13 00:10:02.492093 kernel: raid6: avx2x1 gen() 17868 MB/s Sep 13 00:10:02.492204 kernel: raid6: using algorithm avx2x4 gen() 27637 MB/s Sep 13 00:10:02.509985 kernel: raid6: .... xor() 5932 MB/s, rmw enabled Sep 13 00:10:02.510067 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:10:02.534012 kernel: xor: automatically using best checksumming function avx Sep 13 00:10:02.743828 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:10:02.759810 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:10:02.771165 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:10:02.787736 systemd-udevd[414]: Using default interface naming scheme 'v255'. Sep 13 00:10:02.794014 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:10:02.805269 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:10:02.824940 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Sep 13 00:10:02.867998 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:10:02.875153 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:10:02.963647 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:10:02.976023 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:10:02.992063 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:10:02.992933 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:10:02.995453 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:10:02.996260 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:10:03.007005 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:10:03.025505 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:10:03.026755 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 13 00:10:03.028949 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:10:03.044119 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 00:10:03.048132 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:10:03.048160 kernel: GPT:9289727 != 19775487 Sep 13 00:10:03.048190 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:10:03.050683 kernel: GPT:9289727 != 19775487 Sep 13 00:10:03.050751 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:10:03.050766 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:10:03.057004 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:10:03.057069 kernel: AES CTR mode by8 optimization enabled Sep 13 00:10:03.058841 kernel: libata version 3.00 loaded. Sep 13 00:10:03.059324 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 13 00:10:03.059771 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:10:03.064315 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:10:03.065997 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:10:03.074337 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 00:10:03.074623 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 00:10:03.066282 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:10:03.077867 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 13 00:10:03.078048 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 00:10:03.072308 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:10:03.088382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:10:03.097811 kernel: scsi host0: ahci Sep 13 00:10:03.098059 kernel: scsi host1: ahci Sep 13 00:10:03.098264 kernel: scsi host2: ahci Sep 13 00:10:03.098465 kernel: scsi host3: ahci Sep 13 00:10:03.098746 kernel: scsi host4: ahci Sep 13 00:10:03.102973 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (459) Sep 13 00:10:03.104766 kernel: scsi host5: ahci Sep 13 00:10:03.108065 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 13 00:10:03.108085 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 13 00:10:03.108099 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 13 00:10:03.108112 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 13 00:10:03.109693 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 13 00:10:03.109722 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 13 00:10:03.117886 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (465) Sep 13 00:10:03.122774 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 13 00:10:03.133632 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 13 00:10:03.140144 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 00:10:03.179196 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 13 00:10:03.225636 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 13 00:10:03.227897 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:10:03.245263 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:10:03.248214 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:10:03.273344 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 13 00:10:03.420593 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:10:03.420692 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 00:10:03.420707 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:10:03.421817 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 00:10:03.422897 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 00:10:03.423822 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:10:03.424833 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 00:10:03.426121 kernel: ata3.00: applying bridge limits Sep 13 00:10:03.426233 kernel: ata3.00: configured for UDMA/100 Sep 13 00:10:03.426933 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:10:03.472867 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 00:10:03.473326 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:10:03.488880 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:10:03.596858 disk-uuid[554]: Primary Header is updated. Sep 13 00:10:03.596858 disk-uuid[554]: Secondary Entries is updated. Sep 13 00:10:03.596858 disk-uuid[554]: Secondary Header is updated. Sep 13 00:10:03.602812 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:10:03.607812 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:10:03.613831 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:10:04.609808 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:10:04.610650 disk-uuid[579]: The operation has completed successfully. Sep 13 00:10:04.646165 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:10:04.646348 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:10:04.675669 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:10:04.684348 sh[594]: Success Sep 13 00:10:04.702931 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 13 00:10:04.758544 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:10:04.774220 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:10:04.778209 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 00:10:04.798181 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa Sep 13 00:10:04.798250 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:10:04.798266 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:10:04.799409 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:10:04.800277 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:10:04.819631 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:10:04.821090 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:10:04.831100 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:10:04.832534 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 13 00:10:04.852658 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:10:04.852729 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:10:04.852744 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:10:04.861048 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 00:10:04.871405 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:10:04.872926 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:10:04.986049 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:10:04.999142 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:10:05.027341 systemd-networkd[772]: lo: Link UP Sep 13 00:10:05.027353 systemd-networkd[772]: lo: Gained carrier Sep 13 00:10:05.031888 systemd-networkd[772]: Enumeration completed Sep 13 00:10:05.033256 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:10:05.033925 systemd[1]: Reached target network.target - Network. Sep 13 00:10:05.035997 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:10:05.038736 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:10:05.038741 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:10:05.043952 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:10:05.048842 systemd-networkd[772]: eth0: Link UP Sep 13 00:10:05.048856 systemd-networkd[772]: eth0: Gained carrier Sep 13 00:10:05.048873 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:10:05.075028 systemd-networkd[772]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:10:05.116410 systemd-resolved[221]: Detected conflict on linux IN A 10.0.0.108 Sep 13 00:10:05.116434 systemd-resolved[221]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. 
Sep 13 00:10:05.332796 ignition[776]: Ignition 2.19.0 Sep 13 00:10:05.332815 ignition[776]: Stage: fetch-offline Sep 13 00:10:05.332892 ignition[776]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:10:05.332914 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:10:05.333077 ignition[776]: parsed url from cmdline: "" Sep 13 00:10:05.333082 ignition[776]: no config URL provided Sep 13 00:10:05.333089 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:10:05.333114 ignition[776]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:10:05.333147 ignition[776]: op(1): [started] loading QEMU firmware config module Sep 13 00:10:05.333154 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:10:05.343669 ignition[776]: op(1): [finished] loading QEMU firmware config module Sep 13 00:10:05.385721 ignition[776]: parsing config with SHA512: e6704c27595c82f0fde8ddc405cba45d342693aca9a3476f3ed3585b954c2b8332f0694f6224c181c03b139b0f286d1a646b1fba1cc4b340366f3b8e7f7f490a Sep 13 00:10:05.390656 unknown[776]: fetched base config from "system" Sep 13 00:10:05.390672 unknown[776]: fetched user config from "qemu" Sep 13 00:10:05.391139 ignition[776]: fetch-offline: fetch-offline passed Sep 13 00:10:05.391228 ignition[776]: Ignition finished successfully Sep 13 00:10:05.394902 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:10:05.398059 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:10:05.407128 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 00:10:05.427309 ignition[786]: Ignition 2.19.0 Sep 13 00:10:05.427323 ignition[786]: Stage: kargs Sep 13 00:10:05.427553 ignition[786]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:10:05.427569 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:10:05.428914 ignition[786]: kargs: kargs passed Sep 13 00:10:05.428975 ignition[786]: Ignition finished successfully Sep 13 00:10:05.436077 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:10:05.448021 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 00:10:05.499708 ignition[794]: Ignition 2.19.0 Sep 13 00:10:05.499720 ignition[794]: Stage: disks Sep 13 00:10:05.499982 ignition[794]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:10:05.499997 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:10:05.501148 ignition[794]: disks: disks passed Sep 13 00:10:05.503649 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:10:05.501203 ignition[794]: Ignition finished successfully Sep 13 00:10:05.505492 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:10:05.507491 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:10:05.508798 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:10:05.510636 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:10:05.511775 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:10:05.524111 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 13 00:10:05.541579 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 13 00:10:05.854091 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:10:05.861017 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:10:05.997881 kernel: EXT4-fs (vda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none. Sep 13 00:10:05.999394 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:10:06.001934 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:10:06.014340 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:10:06.016143 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:10:06.017549 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 13 00:10:06.017600 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:10:06.017628 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:10:06.027515 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 00:10:06.036938 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812) Sep 13 00:10:06.037021 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:10:06.037037 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:10:06.038800 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:10:06.039462 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 13 00:10:06.043854 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 00:10:06.045898 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:10:06.099414 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:10:06.105553 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:10:06.112905 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:10:06.121881 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:10:06.265174 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:10:06.278051 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:10:06.282033 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:10:06.291332 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 00:10:06.294704 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:10:06.333218 ignition[927]: INFO : Ignition 2.19.0 Sep 13 00:10:06.333218 ignition[927]: INFO : Stage: mount Sep 13 00:10:06.335560 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:10:06.335560 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:10:06.335560 ignition[927]: INFO : mount: mount passed Sep 13 00:10:06.335560 ignition[927]: INFO : Ignition finished successfully Sep 13 00:10:06.337460 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:10:06.349020 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:10:06.349812 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 13 00:10:06.853345 systemd-networkd[772]: eth0: Gained IPv6LL
Sep 13 00:10:07.009110 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:10:07.017832 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Sep 13 00:10:07.020173 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:10:07.020250 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:10:07.020278 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:10:07.024820 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:10:07.026802 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:10:07.059294 ignition[957]: INFO : Ignition 2.19.0
Sep 13 00:10:07.059294 ignition[957]: INFO : Stage: files
Sep 13 00:10:07.061330 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:07.061330 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:10:07.061330 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:10:07.074277 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:10:07.074277 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:10:07.074277 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:10:07.074277 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:10:07.074277 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:10:07.065563 unknown[957]: wrote ssh authorized keys file for user: core
Sep 13 00:10:07.083204 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:10:07.083204 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:10:07.137496 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:10:07.557652 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:10:07.576631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:10:07.576631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:10:07.576631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:10:07.576631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:10:07.576631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:10:07.576631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:10:07.576631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:10:07.576631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:10:07.576631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:10:07.576631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:10:07.576631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:10:07.576631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:10:07.576631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:10:07.576631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:10:07.988031 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:10:08.667064 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:10:08.667064 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 00:10:09.396058 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:10:09.398340 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:10:09.398340 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 00:10:09.398340 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 13 00:10:09.398340 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:10:09.398340 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:10:09.398340 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 13 00:10:09.398340 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:10:09.449456 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:10:09.457376 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:10:09.459153 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:10:09.459153 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:10:09.459153 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:10:09.459153 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:10:09.459153 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:10:09.459153 ignition[957]: INFO : files: files passed
Sep 13 00:10:09.459153 ignition[957]: INFO : Ignition finished successfully
Sep 13 00:10:09.460372 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:10:09.513987 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:10:09.515764 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:10:09.523953 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:10:09.524116 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:10:09.529188 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 00:10:09.533425 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:10:09.533425 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:10:09.570423 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:10:09.573626 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:10:09.574864 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:10:09.593187 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:10:09.649651 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:10:09.649850 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:10:09.650448 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:10:09.653760 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:10:09.656215 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:10:09.657314 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:10:09.690603 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:10:09.730232 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:10:09.741208 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:10:09.743836 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:10:09.746229 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:10:09.748052 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:10:09.749098 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:10:09.751641 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:10:09.753636 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:10:09.755430 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:10:09.799251 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:10:09.801942 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:10:09.804445 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
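The files stage above is driven by an Ignition config supplied by the platform; the config itself is not reproduced in this log. For orientation only, a minimal sketch of a config that would request operations like these (spec version, key material, unit bodies, and inline file contents below are placeholders/assumptions, not recovered from the log):

{
  "ignition": { "version": "3.4.0" },
  "passwd": {
    "users": [{
      "name": "core",
      "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder; the actual key is not logged)"]
    }]
  },
  "storage": {
    "files": [
      { "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" } },
      { "path": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw",
        "contents": { "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw" } },
      { "path": "/etc/flatcar/update.conf" }
    ],
    "links": [
      { "path": "/etc/extensions/kubernetes.raw",
        "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" }
    ]
  },
  "systemd": {
    "units": [
      { "name": "prepare-helm.service", "enabled": true,
        "contents": "[Unit]\n(placeholder unit body)" },
      { "name": "coreos-metadata.service", "enabled": false }
    ]
  }
}

The /home/core/install.sh and *.yaml writes seen in ops (4)-(7) would simply be further storage.files entries, omitted here for brevity; the enabled flags map to the op(f)/op(11) "setting preset" lines above.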
Sep 13 00:10:09.806495 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:10:09.809122 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:10:09.811199 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:10:09.813158 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:10:09.814710 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:10:09.815697 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:10:09.817913 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:10:09.820033 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:10:09.822302 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:10:09.823245 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:10:09.825839 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:10:09.826944 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:10:09.829220 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:10:09.830463 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:10:09.833348 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:10:09.835475 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:10:09.836686 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:10:09.839667 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:10:09.841523 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:10:09.843385 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:10:09.844310 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:10:09.954089 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:10:09.954185 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:10:09.957106 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:10:09.958271 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:10:09.961078 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:10:09.962280 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:10:09.984143 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:10:10.071996 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:10:10.073324 ignition[1012]: INFO : Ignition 2.19.0
Sep 13 00:10:10.073324 ignition[1012]: INFO : Stage: umount
Sep 13 00:10:10.073324 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:10.073324 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:10:10.073324 ignition[1012]: INFO : umount: umount passed
Sep 13 00:10:10.073324 ignition[1012]: INFO : Ignition finished successfully
Sep 13 00:10:10.073359 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:10:10.170134 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:10:10.172299 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:10:10.173597 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:10:10.176374 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:10:10.177754 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:10:10.182841 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:10:10.184146 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:10:10.188613 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:10:10.189844 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:10:10.235321 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:10:10.237643 systemd[1]: Stopped target network.target - Network.
Sep 13 00:10:10.239736 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:10:10.240544 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:10:10.243025 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:10:10.243112 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:10:10.243504 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:10:10.243548 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:10:10.243995 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:10:10.244040 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:10:10.244556 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:10:10.249772 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:10:10.250514 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:10:10.250621 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:10:10.255279 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:10:10.255368 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:10:10.259029 systemd-networkd[772]: eth0: DHCPv6 lease lost
Sep 13 00:10:10.262714 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:10:10.262883 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:10:10.263686 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:10:10.263733 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:10:10.276914 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:10:10.277246 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:10:10.277322 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:10:10.347597 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:10:10.352497 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:10:10.352657 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:10:10.358228 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:10:10.358317 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:10:10.359134 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:10:10.359192 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:10:10.359426 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:10:10.359481 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:10:10.417197 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:10:10.417365 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:10:10.434775 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:10:10.434994 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:10:10.435867 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:10:10.435932 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:10:10.438806 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:10:10.438860 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:10:10.441400 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:10:10.441466 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:10:10.442349 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:10:10.442400 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:10:10.447230 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:10:10.447291 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:10:10.463074 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:10:10.530828 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:10:10.530970 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:10:10.534062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:10:10.534117 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:10:10.549331 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:10:10.549503 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:10:10.550601 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:10:10.566229 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:10:10.574672 systemd[1]: Switching root.
Sep 13 00:10:10.610884 systemd-journald[192]: Journal stopped
Sep 13 00:10:13.450831 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:10:13.450935 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:10:13.450958 kernel: SELinux: policy capability open_perms=1
Sep 13 00:10:13.450996 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:10:13.451019 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:10:13.451040 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:10:13.451056 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:10:13.451071 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:10:13.451087 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:10:13.451106 kernel: audit: type=1403 audit(1757722212.251:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:10:13.451127 systemd[1]: Successfully loaded SELinux policy in 89.145ms.
Sep 13 00:10:13.451154 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.246ms.
Sep 13 00:10:13.451185 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:10:13.451202 systemd[1]: Detected virtualization kvm.
Sep 13 00:10:13.451219 systemd[1]: Detected architecture x86-64.
Sep 13 00:10:13.451236 systemd[1]: Detected first boot.
Sep 13 00:10:13.451253 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:10:13.451269 zram_generator::config[1056]: No configuration found.
Sep 13 00:10:13.451288 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:10:13.451305 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:10:13.451331 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:10:13.451348 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:10:13.451373 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:10:13.451392 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:10:13.451412 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:10:13.451428 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:10:13.451445 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:10:13.451461 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:10:13.451479 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:10:13.451505 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:10:13.451522 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:10:13.451539 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:10:13.451556 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:10:13.451575 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:10:13.451591 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:10:13.451608 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:10:13.451625 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:10:13.451642 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:10:13.451669 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:10:13.451684 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:10:13.451702 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:10:13.451719 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:10:13.451735 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:10:13.451752 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:10:13.451768 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:10:13.451816 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:10:13.451846 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:10:13.451872 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:10:13.451890 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:10:13.451912 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:10:13.451929 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:10:13.451950 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:10:13.451967 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:10:13.451984 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:10:13.452006 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:10:13.452034 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:10:13.452051 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:10:13.452068 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:10:13.452084 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:10:13.452101 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:10:13.452118 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:10:13.452134 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:10:13.452160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:10:13.452184 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:10:13.452201 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:10:13.452224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:10:13.452240 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:10:13.452255 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:10:13.452271 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:10:13.452287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:10:13.452306 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:10:13.452332 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:10:13.452349 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:10:13.452366 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:10:13.452382 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:10:13.452400 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:10:13.452441 systemd-journald[1119]: Collecting audit messages is disabled.
Sep 13 00:10:13.452474 kernel: loop: module loaded
Sep 13 00:10:13.452491 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:10:13.452516 kernel: fuse: init (API version 7.39)
Sep 13 00:10:13.452535 systemd-journald[1119]: Journal started
Sep 13 00:10:13.452566 systemd-journald[1119]: Runtime Journal (/run/log/journal/2b9966dd86fb4f88b3f4c949a69f6f35) is 6.0M, max 48.4M, 42.3M free.
Sep 13 00:10:13.013512 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:10:13.032600 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 00:10:13.033126 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:10:13.033530 systemd[1]: systemd-journald.service: Consumed 1.227s CPU time.
Sep 13 00:10:13.460807 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:10:13.464225 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:10:13.469802 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:10:13.469848 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:10:13.470835 systemd[1]: Stopped verity-setup.service.
Sep 13 00:10:13.491993 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:10:13.494807 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:10:13.495752 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:10:13.496999 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:10:13.498305 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:10:13.499428 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:10:13.500621 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:10:13.501877 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:10:13.503148 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:10:13.505048 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:10:13.505225 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:10:13.506813 kernel: ACPI: bus type drm_connector registered
Sep 13 00:10:13.507469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:10:13.507668 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:10:13.509298 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:10:13.509471 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:10:13.510818 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:10:13.510995 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:10:13.512501 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:10:13.512691 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:10:13.514046 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:10:13.514214 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:10:13.515556 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:10:13.516919 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:10:13.535301 systemd[1]: Reached target network-pre.target - Preparation for Network.
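The Runtime Journal line above appears to be self-consistent up to journald's 0.1M display rounding, with free space reported as the cap minus current usage:

$$48.4\ \text{M} - 6.0\ \text{M} = 42.4\ \text{M} \approx 42.3\ \text{M free (as printed)}.$$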
Sep 13 00:10:13.545954 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:10:13.548610 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:10:13.549893 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:10:13.551068 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:10:13.554081 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:10:13.557689 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:10:13.583968 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:10:13.586240 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:10:13.586279 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:10:13.588763 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:10:13.591305 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:10:13.623920 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:10:13.633874 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:10:13.644134 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:10:13.648380 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:10:13.652389 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:10:13.662195 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:10:13.667999 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:10:13.674127 systemd-journald[1119]: Time spent on flushing to /var/log/journal/2b9966dd86fb4f88b3f4c949a69f6f35 is 25.249ms for 954 entries.
Sep 13 00:10:13.674127 systemd-journald[1119]: System Journal (/var/log/journal/2b9966dd86fb4f88b3f4c949a69f6f35) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:10:13.724038 systemd-journald[1119]: Received client request to flush runtime journal.
Sep 13 00:10:13.687745 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:10:13.690034 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:10:13.692737 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:10:13.696959 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:10:13.710073 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:10:13.760579 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:10:13.762682 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:10:13.764820 kernel: loop0: detected capacity change from 0 to 140768
Sep 13 00:10:13.767805 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:10:13.793382 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:10:13.861817 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:10:13.981879 kernel: loop1: detected capacity change from 0 to 142488
Sep 13 00:10:14.001656 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:10:14.010985 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:10:14.057857 kernel: loop2: detected capacity change from 0 to 221472
Sep 13 00:10:14.308398 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:10:14.337819 kernel: loop3: detected capacity change from 0 to 140768
Sep 13 00:10:14.338957 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:10:14.444988 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Sep 13 00:10:14.445008 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Sep 13 00:10:14.452299 kernel: loop4: detected capacity change from 0 to 142488
Sep 13 00:10:14.453497 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:10:14.464866 kernel: loop5: detected capacity change from 0 to 221472
Sep 13 00:10:14.508776 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 13 00:10:14.509627 (sd-merge)[1191]: Merged extensions into '/usr'.
Sep 13 00:10:14.511734 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:10:14.512631 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:10:14.519245 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:10:14.519265 systemd[1]: Reloading...
Sep 13 00:10:14.648084 zram_generator::config[1220]: No configuration found.
Sep 13 00:10:14.819833 ldconfig[1153]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:10:15.041687 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:10:15.092627 systemd[1]: Reloading finished in 572 ms.
Sep 13 00:10:15.132517 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:10:15.134323 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:10:15.152986 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:10:15.204337 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:10:15.258397 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:10:15.258923 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:10:15.260267 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:10:15.260694 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Sep 13 00:10:15.260827 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Sep 13 00:10:15.265413 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:10:15.265432 systemd-tmpfiles[1258]: Skipping /boot
Sep 13 00:10:15.265462 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:10:15.265476 systemd[1]: Reloading...
Sep 13 00:10:15.277842 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:10:15.277856 systemd-tmpfiles[1258]: Skipping /boot
Sep 13 00:10:15.335044 zram_generator::config[1288]: No configuration found.
Sep 13 00:10:15.435480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:10:15.485537 systemd[1]: Reloading finished in 219 ms.
Sep 13 00:10:15.508553 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:10:15.543891 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:10:15.591083 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:10:15.597229 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:10:15.603058 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:10:15.607975 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:10:15.617351 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:10:15.645428 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:10:15.677369 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:10:15.677682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:10:15.687933 augenrules[1346]: No rules
Sep 13 00:10:15.689265 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:10:15.693040 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:10:15.698060 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:10:15.700408 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:10:15.703048 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:10:15.716347 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:10:15.718492 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:10:15.720588 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:10:15.723051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:10:15.723264 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:10:15.725543 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:10:15.725719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:10:15.728980 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:10:15.729202 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:10:15.733359 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:10:15.744154 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
Sep 13 00:10:15.758135 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:10:15.762960 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:10:15.777545 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:10:15.777811 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:10:15.785072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:10:15.804043 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:10:15.837890 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:10:15.839449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:10:15.842802 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:10:15.844431 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:10:15.844563 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:10:15.845696 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:10:15.851005 systemd-resolved[1331]: Positive Trust Anchors:
Sep 13 00:10:15.851021 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:10:15.851060 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:10:15.871559 systemd-resolved[1331]: Defaulting to hostname 'linux'.
Sep 13 00:10:15.873896 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1378)
Sep 13 00:10:15.874519 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:10:15.908225 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:10:15.908431 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:10:15.910520 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:10:15.910897 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:10:15.913506 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:10:15.913741 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:10:15.928939 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:10:15.966856 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 00:10:15.998825 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:10:16.006749 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:10:16.011819 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 13 00:10:16.027294 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 13 00:10:16.027489 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 13 00:10:16.033343 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 00:10:16.036843 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 13 00:10:16.060501 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:10:16.069072 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:10:16.070469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:10:16.070615 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:10:16.122838 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:10:16.128906 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:10:16.133940 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:10:16.142813 kernel: kvm_amd: TSC scaling supported
Sep 13 00:10:16.142868 kernel: kvm_amd: Nested Virtualization enabled
Sep 13 00:10:16.142885 kernel: kvm_amd: Nested Paging enabled
Sep 13 00:10:16.142925 kernel: kvm_amd: LBR virtualization supported
Sep 13 00:10:16.142952 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 13 00:10:16.142968 kernel: kvm_amd: Virtual GIF supported
Sep 13 00:10:16.148057 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:10:16.152457 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:10:16.153745 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:10:16.158021 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:10:16.164445 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:10:16.169958 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:10:16.173032 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:10:16.174256 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:10:16.174291 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:10:16.177404 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:10:16.177602 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:10:16.199984 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:10:16.200197 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:10:16.200819 kernel: EDAC MC: Ver: 3.0.0
Sep 13 00:10:16.204381 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:10:16.204660 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:10:16.206506 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:10:16.206748 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:10:16.208224 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:10:16.218644 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:10:16.218731 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:10:16.228406 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:10:16.252684 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:10:16.270807 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:10:16.282951 systemd-networkd[1409]: lo: Link UP
Sep 13 00:10:16.282967 systemd-networkd[1409]: lo: Gained carrier
Sep 13 00:10:16.285031 systemd-networkd[1409]: Enumeration completed
Sep 13 00:10:16.285160 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:10:16.286306 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:10:16.286318 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:10:16.299593 systemd-networkd[1409]: eth0: Link UP
Sep 13 00:10:16.299608 systemd-networkd[1409]: eth0: Gained carrier
Sep 13 00:10:16.299625 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:10:16.300527 systemd[1]: Reached target network.target - Network.
Sep 13 00:10:16.319871 systemd-networkd[1409]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:10:16.320675 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection.
Sep 13 00:10:16.321916 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 00:10:16.321986 systemd-timesyncd[1410]: Initial clock synchronization to Sat 2025-09-13 00:10:16.661621 UTC.
Sep 13 00:10:16.365077 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:10:16.390286 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:10:16.391923 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:10:16.393324 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:10:16.395960 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:10:16.397053 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:10:16.398288 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:10:16.399620 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:10:16.400969 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:10:16.402177 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:10:16.402209 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:10:16.403101 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:10:16.405032 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:10:16.406192 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:10:16.407438 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:10:16.409561 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:10:16.412427 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:10:16.435277 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:10:16.437866 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:10:16.439576 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:10:16.440796 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:10:16.441720 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:10:16.446501 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:10:16.446526 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:10:16.447668 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:10:16.449725 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:10:16.452817 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:10:16.453929 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:10:16.457905 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:10:16.462362 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:10:16.463725 jq[1435]: false
Sep 13 00:10:16.464307 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:10:16.471911 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:10:16.480679 extend-filesystems[1436]: Found loop3
Sep 13 00:10:16.489886 extend-filesystems[1436]: Found loop4
Sep 13 00:10:16.489886 extend-filesystems[1436]: Found loop5
Sep 13 00:10:16.489886 extend-filesystems[1436]: Found sr0
Sep 13 00:10:16.489886 extend-filesystems[1436]: Found vda
Sep 13 00:10:16.489886 extend-filesystems[1436]: Found vda1
Sep 13 00:10:16.489886 extend-filesystems[1436]: Found vda2
Sep 13 00:10:16.489886 extend-filesystems[1436]: Found vda3
Sep 13 00:10:16.489886 extend-filesystems[1436]: Found usr
Sep 13 00:10:16.489886 extend-filesystems[1436]: Found vda4
Sep 13 00:10:16.489886 extend-filesystems[1436]: Found vda6
Sep 13 00:10:16.489886 extend-filesystems[1436]: Found vda7
Sep 13 00:10:16.489886 extend-filesystems[1436]: Found vda9
Sep 13 00:10:16.489886 extend-filesystems[1436]: Checking size of /dev/vda9
Sep 13 00:10:16.483065 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:10:16.504415 extend-filesystems[1436]: Resized partition /dev/vda9
Sep 13 00:10:16.496068 dbus-daemon[1434]: [system] SELinux support is enabled
Sep 13 00:10:16.489119 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:10:16.499245 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:10:16.507191 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:10:16.507874 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:10:16.510004 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:10:16.516459 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:10:16.522468 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1368)
Sep 13 00:10:16.522533 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024)
Sep 13 00:10:16.518905 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:10:16.523067 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:10:16.528678 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:10:16.529200 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:10:16.529680 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:10:16.530056 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:10:16.533253 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:10:16.533515 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:10:16.535965 jq[1454]: true
Sep 13 00:10:16.539905 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 00:10:16.546368 jq[1459]: true
Sep 13 00:10:16.557941 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:10:16.563546 update_engine[1452]: I20250913 00:10:16.563107 1452 main.cc:92] Flatcar Update Engine starting
Sep 13 00:10:16.566829 update_engine[1452]: I20250913 00:10:16.566043 1452 update_check_scheduler.cc:74] Next update check in 11m46s
Sep 13 00:10:16.591487 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 00:10:16.620750 tar[1458]: linux-amd64/helm
Sep 13 00:10:16.622189 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:10:16.622225 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:10:16.623727 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:10:16.623748 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 00:10:16.639203 systemd[1]: Started locksmithd.service - Cluster reboot manager.
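The EXT4-fs kernel line above gives the resize geometry directly; with the 4 KiB block size that resize2fs reports just below, the online resize grows ROOT from roughly 2.11 GiB to

$$1864699 \times 4096\ \text{B} = 7{,}637{,}807{,}104\ \text{B} \approx 7.11\ \text{GiB},$$

which matches what extend-filesystems.service confirms once resize2fs finishes a moment later in the log.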
Sep 13 00:10:16.647716 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:10:16.647751 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:10:16.648995 systemd-logind[1450]: New seat seat0.
Sep 13 00:10:16.650569 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 00:10:16.688102 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 00:10:16.798947 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:10:17.004682 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 13 00:10:17.571409 extend-filesystems[1453]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:10:17.571409 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:10:17.571409 extend-filesystems[1453]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 00:10:17.581731 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:10:17.572917 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:10:17.582256 extend-filesystems[1436]: Resized filesystem in /dev/vda9
Sep 13 00:10:17.573167 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:10:17.616130 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 00:10:17.648567 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 00:10:17.653279 systemd[1]: Started sshd@0-10.0.0.108:22-10.0.0.1:42540.service - OpenSSH per-connection server daemon (10.0.0.1:42540).
Sep 13 00:10:17.660331 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:10:17.660614 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 00:10:17.677483 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 00:10:17.766292 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 00:10:17.769166 containerd[1465]: time="2025-09-13T00:10:17.769013538Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 13 00:10:17.776533 bash[1488]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:10:17.786307 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 00:10:17.794311 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 13 00:10:17.795363 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 00:10:17.796446 sshd[1510]: Accepted publickey for core from 10.0.0.1 port 42540 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc
Sep 13 00:10:17.799336 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 00:10:17.803716 sshd[1510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:10:17.805712 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 13 00:10:17.816398 containerd[1465]: time="2025-09-13T00:10:17.816285092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:10:17.816589 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 13 00:10:17.820636 containerd[1465]: time="2025-09-13T00:10:17.820581463Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:10:17.820636 containerd[1465]: time="2025-09-13T00:10:17.820622990Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:10:17.820743 containerd[1465]: time="2025-09-13T00:10:17.820644611Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:10:17.820946 containerd[1465]: time="2025-09-13T00:10:17.820921526Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 13 00:10:17.820983 containerd[1465]: time="2025-09-13T00:10:17.820954457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 13 00:10:17.821065 containerd[1465]: time="2025-09-13T00:10:17.821042432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:10:17.821065 containerd[1465]: time="2025-09-13T00:10:17.821060198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:10:17.821343 containerd[1465]: time="2025-09-13T00:10:17.821305916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:10:17.821343 containerd[1465]: time="2025-09-13T00:10:17.821332329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:10:17.821412 containerd[1465]: time="2025-09-13T00:10:17.821351621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:10:17.821412 containerd[1465]: time="2025-09-13T00:10:17.821367726Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:10:17.821539 containerd[1465]: time="2025-09-13T00:10:17.821514399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:10:17.822080 containerd[1465]: time="2025-09-13T00:10:17.822038159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:10:17.822250 containerd[1465]: time="2025-09-13T00:10:17.822210525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:10:17.822250 containerd[1465]: time="2025-09-13T00:10:17.822246099Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:10:17.822378 containerd[1465]: time="2025-09-13T00:10:17.822360174Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:10:17.822438 containerd[1465]: time="2025-09-13T00:10:17.822420585Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:10:17.848260 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 13 00:10:17.852927 systemd-logind[1450]: New session 1 of user core.
Sep 13 00:10:17.868648 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 13 00:10:17.902256 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 13 00:10:17.936549 (systemd)[1526]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:10:18.112393 containerd[1465]: time="2025-09-13T00:10:18.112215149Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:10:18.112570 containerd[1465]: time="2025-09-13T00:10:18.112522160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:10:18.112607 containerd[1465]: time="2025-09-13T00:10:18.112584945Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 13 00:10:18.112637 containerd[1465]: time="2025-09-13T00:10:18.112625648Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 13 00:10:18.112687 containerd[1465]: time="2025-09-13T00:10:18.112654962Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:10:18.115835 containerd[1465]: time="2025-09-13T00:10:18.112972561Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:10:18.115835 containerd[1465]: time="2025-09-13T00:10:18.113370143Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:10:18.115835 containerd[1465]: time="2025-09-13T00:10:18.113541599Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 13 00:10:18.115835 containerd[1465]: time="2025-09-13T00:10:18.113561893Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 13 00:10:18.115835 containerd[1465]: time="2025-09-13T00:10:18.113582332Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 13 00:10:18.115835 containerd[1465]: time="2025-09-13T00:10:18.113599707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:10:18.115835 containerd[1465]: time="2025-09-13T00:10:18.113616791Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:10:18.115835 containerd[1465]: time="2025-09-13T00:10:18.113639953Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:10:18.115835 containerd[1465]: time="2025-09-13T00:10:18.113664538Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:10:18.115835 containerd[1465]: time="2025-09-13T00:10:18.113688022Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"...
type=io.containerd.service.v1 Sep 13 00:10:18.115835 containerd[1465]: time="2025-09-13T00:10:18.113707652Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:10:18.115835 containerd[1465]: time="2025-09-13T00:10:18.113724091Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:10:18.115835 containerd[1465]: time="2025-09-13T00:10:18.113746214Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:10:18.115835 containerd[1465]: time="2025-09-13T00:10:18.113789421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.113811524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.113848339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.113865932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.113888284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.113911498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.113929059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.113946392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.113963319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.113982782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.113997881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.114017698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.114034583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.114061808Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.114091755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116180 containerd[1465]: time="2025-09-13T00:10:18.114109078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 13 00:10:18.116492 containerd[1465]: time="2025-09-13T00:10:18.114126733Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:10:18.116492 containerd[1465]: time="2025-09-13T00:10:18.114201758Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:10:18.116492 containerd[1465]: time="2025-09-13T00:10:18.114231425Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:10:18.116492 containerd[1465]: time="2025-09-13T00:10:18.114248571Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:10:18.116492 containerd[1465]: time="2025-09-13T00:10:18.114265499Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:10:18.116492 containerd[1465]: time="2025-09-13T00:10:18.114279173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116492 containerd[1465]: time="2025-09-13T00:10:18.114295904Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:10:18.116492 containerd[1465]: time="2025-09-13T00:10:18.114316011Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:10:18.116492 containerd[1465]: time="2025-09-13T00:10:18.114341978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:10:18.116781 containerd[1465]: time="2025-09-13T00:10:18.114766598Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:10:18.116781 containerd[1465]: time="2025-09-13T00:10:18.115080332Z" level=info msg="Connect containerd service" Sep 13 00:10:18.116781 containerd[1465]: time="2025-09-13T00:10:18.115160272Z" level=info msg="using legacy CRI server" Sep 13 00:10:18.116781 containerd[1465]: time="2025-09-13T00:10:18.115173843Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:10:18.116781 containerd[1465]: time="2025-09-13T00:10:18.115449992Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:10:18.117129 containerd[1465]: time="2025-09-13T00:10:18.116930939Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:10:18.117198 containerd[1465]: time="2025-09-13T00:10:18.117109067Z" level=info msg="Start subscribing containerd event" Sep 13 00:10:18.117375 containerd[1465]: time="2025-09-13T00:10:18.117246346Z" level=info msg="Start recovering state" Sep 13 00:10:18.117375 containerd[1465]: time="2025-09-13T00:10:18.117332687Z" level=info msg="Start event monitor" Sep 13 00:10:18.117375 containerd[1465]: time="2025-09-13T00:10:18.117365930Z" level=info msg="Start snapshots syncer" Sep 13 00:10:18.117453 containerd[1465]: time="2025-09-13T00:10:18.117381382Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:10:18.117453 containerd[1465]: time="2025-09-13T00:10:18.117413033Z" level=info msg="Start streaming server" Sep 13 00:10:18.120008 containerd[1465]: time="2025-09-13T00:10:18.119968222Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:10:18.120331 containerd[1465]: time="2025-09-13T00:10:18.120087171Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:10:18.120457 containerd[1465]: time="2025-09-13T00:10:18.120422030Z" level=info msg="containerd successfully booted in 0.352871s" Sep 13 00:10:18.120600 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:10:18.153091 tar[1458]: linux-amd64/LICENSE Sep 13 00:10:18.153091 tar[1458]: linux-amd64/README.md Sep 13 00:10:18.155097 systemd[1526]: Queued start job for default target default.target. Sep 13 00:10:18.169396 systemd[1526]: Created slice app.slice - User Application Slice. Sep 13 00:10:18.169438 systemd[1526]: Reached target paths.target - Paths. Sep 13 00:10:18.169459 systemd[1526]: Reached target timers.target - Timers. Sep 13 00:10:18.171793 systemd[1526]: Starting dbus.socket - D-Bus User Message Bus Socket... 
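The CRI config dump above registers the runc runtime with Options:map[SystemdCgroup:true], i.e. containerd delegates cgroup management to systemd, consistent with the CgroupDriver:systemd / CgroupVersion:2 node config the kubelet prints later. For reference, a minimal sketch of the corresponding stanza in /etc/containerd/config.toml (config version 2); the TOML itself never appears in the log, so the exact file layout is an assumption:

    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      # Delegate container cgroups to systemd (the usual choice on cgroup v2 hosts)
      SystemdCgroup = true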
Sep 13 00:10:18.176645 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:10:18.192882 systemd[1526]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:10:18.193077 systemd[1526]: Reached target sockets.target - Sockets. Sep 13 00:10:18.193109 systemd[1526]: Reached target basic.target - Basic System. Sep 13 00:10:18.193173 systemd[1526]: Reached target default.target - Main User Target. Sep 13 00:10:18.193229 systemd[1526]: Startup finished in 206ms. Sep 13 00:10:18.194050 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:10:18.207439 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:10:18.277568 systemd[1]: Started sshd@1-10.0.0.108:22-10.0.0.1:42556.service - OpenSSH per-connection server daemon (10.0.0.1:42556). Sep 13 00:10:18.309015 systemd-networkd[1409]: eth0: Gained IPv6LL Sep 13 00:10:18.313717 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:10:18.316190 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:10:18.329408 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 00:10:18.361838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:18.366637 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:10:18.387704 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 42556 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:10:18.390907 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:18.400282 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:10:18.402570 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:10:18.402955 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 00:10:18.407683 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:10:18.411312 systemd-logind[1450]: New session 2 of user core. Sep 13 00:10:18.419232 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:10:18.517710 sshd[1540]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:18.532066 systemd[1]: sshd@1-10.0.0.108:22-10.0.0.1:42556.service: Deactivated successfully. Sep 13 00:10:18.534737 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:10:18.537368 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:10:18.563443 systemd[1]: Started sshd@2-10.0.0.108:22-10.0.0.1:42568.service - OpenSSH per-connection server daemon (10.0.0.1:42568). Sep 13 00:10:18.566820 systemd-logind[1450]: Removed session 2. Sep 13 00:10:18.648437 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 42568 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:10:18.651681 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:18.656034 systemd-logind[1450]: New session 3 of user core. Sep 13 00:10:18.663984 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:10:18.730366 sshd[1564]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:18.735420 systemd[1]: sshd@2-10.0.0.108:22-10.0.0.1:42568.service: Deactivated successfully. Sep 13 00:10:18.738349 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:10:18.739170 systemd-logind[1450]: Session 3 logged out. 
Waiting for processes to exit. Sep 13 00:10:18.740279 systemd-logind[1450]: Removed session 3. Sep 13 00:10:20.066465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:20.068287 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:10:20.071894 systemd[1]: Startup finished in 970ms (kernel) + 10.484s (initrd) + 7.908s (userspace) = 19.363s. Sep 13 00:10:20.094220 (kubelet)[1575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:10:21.049559 kubelet[1575]: E0913 00:10:21.049481 1575 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:10:21.053686 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:10:21.053947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:10:21.054283 systemd[1]: kubelet.service: Consumed 2.153s CPU time. Sep 13 00:10:28.938550 systemd[1]: Started sshd@3-10.0.0.108:22-10.0.0.1:39182.service - OpenSSH per-connection server daemon (10.0.0.1:39182). Sep 13 00:10:28.972084 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 39182 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:10:28.973708 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:28.977947 systemd-logind[1450]: New session 4 of user core. Sep 13 00:10:28.987933 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:10:29.045079 sshd[1588]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:29.060146 systemd[1]: sshd@3-10.0.0.108:22-10.0.0.1:39182.service: Deactivated successfully. Sep 13 00:10:29.061990 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:10:29.063258 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:10:29.064546 systemd[1]: Started sshd@4-10.0.0.108:22-10.0.0.1:39198.service - OpenSSH per-connection server daemon (10.0.0.1:39198). Sep 13 00:10:29.065283 systemd-logind[1450]: Removed session 4. Sep 13 00:10:29.099298 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 39198 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:10:29.101132 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:29.105563 systemd-logind[1450]: New session 5 of user core. Sep 13 00:10:29.116019 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:10:29.167248 sshd[1595]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:29.182101 systemd[1]: sshd@4-10.0.0.108:22-10.0.0.1:39198.service: Deactivated successfully. Sep 13 00:10:29.184017 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:10:29.185580 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:10:29.186918 systemd[1]: Started sshd@5-10.0.0.108:22-10.0.0.1:39200.service - OpenSSH per-connection server daemon (10.0.0.1:39200). Sep 13 00:10:29.187757 systemd-logind[1450]: Removed session 5. 
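The kubelet crash above is the expected pre-bootstrap state: the unit starts, finds no /var/lib/kubelet/config.yaml, and exits with status 1; systemd will keep restarting it until something (normally kubeadm init/join) writes that file. A minimal, purely illustrative sketch of such a KubeletConfiguration; the real file is generated, not hand-written:

    # Normally produced by 'kubeadm init' or 'kubeadm join'
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches containerd's SystemdCgroup=true
    staticPodPath: /etc/kubernetes/manifests   # the static pod path the kubelet adds later
    EOF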
Sep 13 00:10:29.220864 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 39200 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:10:29.222784 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:29.227726 systemd-logind[1450]: New session 6 of user core. Sep 13 00:10:29.236975 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:10:29.292848 sshd[1602]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:29.301191 systemd[1]: sshd@5-10.0.0.108:22-10.0.0.1:39200.service: Deactivated successfully. Sep 13 00:10:29.303319 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:10:29.304684 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:10:29.305964 systemd[1]: Started sshd@6-10.0.0.108:22-10.0.0.1:39212.service - OpenSSH per-connection server daemon (10.0.0.1:39212). Sep 13 00:10:29.306644 systemd-logind[1450]: Removed session 6. Sep 13 00:10:29.340033 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 39212 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:10:29.342075 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:29.346650 systemd-logind[1450]: New session 7 of user core. Sep 13 00:10:29.355968 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:10:29.420132 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:10:29.420492 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:10:29.442264 sudo[1612]: pam_unix(sudo:session): session closed for user root Sep 13 00:10:29.444857 sshd[1609]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:29.458596 systemd[1]: sshd@6-10.0.0.108:22-10.0.0.1:39212.service: Deactivated successfully. Sep 13 00:10:29.460815 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:10:29.462868 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:10:29.472350 systemd[1]: Started sshd@7-10.0.0.108:22-10.0.0.1:39220.service - OpenSSH per-connection server daemon (10.0.0.1:39220). Sep 13 00:10:29.474081 systemd-logind[1450]: Removed session 7. Sep 13 00:10:29.504762 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 39220 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:10:29.506952 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:29.513035 systemd-logind[1450]: New session 8 of user core. Sep 13 00:10:29.523084 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:10:29.580269 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:10:29.580765 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:10:29.586549 sudo[1621]: pam_unix(sudo:session): session closed for user root Sep 13 00:10:29.594279 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:10:29.594723 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:10:29.614290 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:10:29.616785 auditctl[1624]: No rules Sep 13 00:10:29.618482 systemd[1]: audit-rules.service: Deactivated successfully. 
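The audit sequence above (sudo rm of the SELinux and default rule files, auditctl reporting "No rules", audit-rules.service bounced) is the provisioner flushing the default audit policy; the restart completes just below with augenrules also finding no rules. For context, a minimal sketch of how rules under /etc/audit/rules.d/ are normally compiled and loaded; the watch rule here is a hypothetical example:

    # Hypothetical rule file: watch /etc/ssh for writes and attribute changes
    echo '-w /etc/ssh -p wa -k ssh_config' > /etc/audit/rules.d/90-ssh.rules

    # Concatenate /etc/audit/rules.d/*.rules and load the result into the kernel
    augenrules --load

    # List what the kernel is currently enforcing
    auditctl -l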
Sep 13 00:10:29.618858 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:10:29.621267 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:10:29.661767 augenrules[1642]: No rules Sep 13 00:10:29.664061 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:10:29.665684 sudo[1620]: pam_unix(sudo:session): session closed for user root Sep 13 00:10:29.668599 sshd[1617]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:29.677341 systemd[1]: sshd@7-10.0.0.108:22-10.0.0.1:39220.service: Deactivated successfully. Sep 13 00:10:29.679328 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:10:29.681103 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:10:29.687373 systemd[1]: Started sshd@8-10.0.0.108:22-10.0.0.1:39222.service - OpenSSH per-connection server daemon (10.0.0.1:39222). Sep 13 00:10:29.688589 systemd-logind[1450]: Removed session 8. Sep 13 00:10:29.719311 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 39222 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:10:29.721062 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:29.725843 systemd-logind[1450]: New session 9 of user core. Sep 13 00:10:29.741107 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:10:29.797493 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:10:29.797861 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:10:30.508195 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:10:30.508347 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:10:31.050574 dockerd[1671]: time="2025-09-13T00:10:31.050481015Z" level=info msg="Starting up" Sep 13 00:10:31.149016 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:10:31.167214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:31.534534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:31.539272 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:10:31.648422 kubelet[1700]: E0913 00:10:31.648325 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:10:31.655492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:10:31.655818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:10:31.718543 dockerd[1671]: time="2025-09-13T00:10:31.718465121Z" level=info msg="Loading containers: start." Sep 13 00:10:31.905832 kernel: Initializing XFRM netlink socket Sep 13 00:10:32.001901 systemd-networkd[1409]: docker0: Link UP Sep 13 00:10:32.038260 dockerd[1671]: time="2025-09-13T00:10:32.038204058Z" level=info msg="Loading containers: done." 
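dockerd is now initializing alongside containerd (Docker here is a separate engine; the kubelet talks to containerd directly). The overlay2 diff warning that follows is informational. A quick sketch for confirming which storage and cgroup drivers the daemon settled on, using standard docker CLI Go-template fields:

    # Storage driver and cgroup driver as reported by the running daemon
    docker info --format '{{.Driver}} / {{.CgroupDriver}}'

    # The API socket matches the "API listen on /run/docker.sock" line below
    docker -H unix:///run/docker.sock version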
Sep 13 00:10:32.058019 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1871595729-merged.mount: Deactivated successfully. Sep 13 00:10:32.061445 dockerd[1671]: time="2025-09-13T00:10:32.061377258Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:10:32.061957 dockerd[1671]: time="2025-09-13T00:10:32.061530185Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:10:32.061957 dockerd[1671]: time="2025-09-13T00:10:32.061699436Z" level=info msg="Daemon has completed initialization" Sep 13 00:10:32.117200 dockerd[1671]: time="2025-09-13T00:10:32.117056952Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:10:32.117449 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:10:33.147663 containerd[1465]: time="2025-09-13T00:10:33.147612055Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:10:34.386971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1024980659.mount: Deactivated successfully. Sep 13 00:10:35.745513 containerd[1465]: time="2025-09-13T00:10:35.745419426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:35.746048 containerd[1465]: time="2025-09-13T00:10:35.745973038Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 13 00:10:35.806468 containerd[1465]: time="2025-09-13T00:10:35.806418392Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:35.810285 containerd[1465]: time="2025-09-13T00:10:35.810247839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:35.811341 containerd[1465]: time="2025-09-13T00:10:35.811274808Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.663605531s" Sep 13 00:10:35.811341 containerd[1465]: time="2025-09-13T00:10:35.811336418Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:10:35.812261 containerd[1465]: time="2025-09-13T00:10:35.812217546Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:10:37.734484 containerd[1465]: time="2025-09-13T00:10:37.734368072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:37.735502 containerd[1465]: time="2025-09-13T00:10:37.735461975Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 13 00:10:37.736627 containerd[1465]: 
time="2025-09-13T00:10:37.736582155Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:37.740301 containerd[1465]: time="2025-09-13T00:10:37.740212488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:37.741401 containerd[1465]: time="2025-09-13T00:10:37.741347699Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.929085616s" Sep 13 00:10:37.741401 containerd[1465]: time="2025-09-13T00:10:37.741395931Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:10:37.742528 containerd[1465]: time="2025-09-13T00:10:37.742501129Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:10:40.320935 containerd[1465]: time="2025-09-13T00:10:40.320858019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:40.322900 containerd[1465]: time="2025-09-13T00:10:40.322811335Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 13 00:10:40.330075 containerd[1465]: time="2025-09-13T00:10:40.330030121Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:40.348863 containerd[1465]: time="2025-09-13T00:10:40.348752938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:40.350217 containerd[1465]: time="2025-09-13T00:10:40.350153855Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 2.607614536s" Sep 13 00:10:40.350217 containerd[1465]: time="2025-09-13T00:10:40.350206506Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:10:40.350881 containerd[1465]: time="2025-09-13T00:10:40.350851161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:10:41.899313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:10:41.913977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:42.123048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:10:42.123626 (kubelet)[1910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:10:42.185682 kubelet[1910]: E0913 00:10:42.185482 1910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:10:42.189249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:10:42.189500 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:10:43.362038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120650269.mount: Deactivated successfully. Sep 13 00:10:44.662154 containerd[1465]: time="2025-09-13T00:10:44.662062514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:44.667263 containerd[1465]: time="2025-09-13T00:10:44.667122278Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 13 00:10:44.669763 containerd[1465]: time="2025-09-13T00:10:44.669712748Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:44.677899 containerd[1465]: time="2025-09-13T00:10:44.677833849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:44.678762 containerd[1465]: time="2025-09-13T00:10:44.678684985Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 4.327729818s" Sep 13 00:10:44.678762 containerd[1465]: time="2025-09-13T00:10:44.678755830Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:10:44.679570 containerd[1465]: time="2025-09-13T00:10:44.679536663Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:10:45.220804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216672493.mount: Deactivated successfully. 
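Each completed pull is followed by containerd cleaning up a temporary unpack mount, as above. To verify what has actually landed in containerd's k8s.io image namespace, either CLI below works; the endpoint is spelled out since the log never mentions an /etc/crictl.yaml:

    # Through the CRI API
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images

    # Or directly against containerd's k8s.io namespace
    ctr -n k8s.io images ls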
Sep 13 00:10:48.458240 containerd[1465]: time="2025-09-13T00:10:48.458177378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:48.461438 containerd[1465]: time="2025-09-13T00:10:48.461398556Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 00:10:48.462650 containerd[1465]: time="2025-09-13T00:10:48.462603460Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:48.465616 containerd[1465]: time="2025-09-13T00:10:48.465573853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:48.466988 containerd[1465]: time="2025-09-13T00:10:48.466941789Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.787370316s" Sep 13 00:10:48.467054 containerd[1465]: time="2025-09-13T00:10:48.466985766Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:10:48.467570 containerd[1465]: time="2025-09-13T00:10:48.467547765Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:10:49.588123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1460444675.mount: Deactivated successfully. 
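One detail worth flagging: the CRI config dumped earlier advertises SandboxImage:registry.k8s.io/pause:3.8, while pause:3.10 is what gets pulled just below for this Kubernetes release. Aligning the two keeps containerd from pinning a second pause image; a minimal sketch of the override in /etc/containerd/config.toml, with the same caveat that the actual file is not shown in the log:

    [plugins."io.containerd.grpc.v1.cri"]
      # Match the pause image pulled for Kubernetes v1.31
      sandbox_image = "registry.k8s.io/pause:3.10"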
Sep 13 00:10:49.601957 containerd[1465]: time="2025-09-13T00:10:49.601880661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:49.603017 containerd[1465]: time="2025-09-13T00:10:49.602959797Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:10:49.609462 containerd[1465]: time="2025-09-13T00:10:49.609402641Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:49.615119 containerd[1465]: time="2025-09-13T00:10:49.615067828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:49.615892 containerd[1465]: time="2025-09-13T00:10:49.615835590Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.148250675s" Sep 13 00:10:49.615892 containerd[1465]: time="2025-09-13T00:10:49.615884435Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:10:49.616453 containerd[1465]: time="2025-09-13T00:10:49.616404038Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:10:50.716032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount380364236.mount: Deactivated successfully. Sep 13 00:10:52.399085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:10:52.414009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:52.850452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:52.854928 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:10:52.932093 kubelet[2042]: E0913 00:10:52.932032 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:10:52.936507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:10:52.936722 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
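The restart counter climbing (1, 2, now 3) at roughly ten-second intervals is a systemd Restart= loop around the still-unconfigured kubelet. A sketch of the kind of unit settings that produce this cadence, modeled on what kubeadm-style kubelet packaging usually ships; the drop-in path and values are assumptions, not read from this host:

    # /etc/systemd/system/kubelet.service.d/10-restart.conf (hypothetical)
    [Service]
    Restart=always
    RestartSec=10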
Sep 13 00:10:53.370725 containerd[1465]: time="2025-09-13T00:10:53.370647835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:53.372075 containerd[1465]: time="2025-09-13T00:10:53.372025027Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 13 00:10:53.373469 containerd[1465]: time="2025-09-13T00:10:53.373416527Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:53.376440 containerd[1465]: time="2025-09-13T00:10:53.376401499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:53.377906 containerd[1465]: time="2025-09-13T00:10:53.377878240Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.761437954s" Sep 13 00:10:53.377962 containerd[1465]: time="2025-09-13T00:10:53.377908859Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:10:56.280101 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:56.294005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:56.321033 systemd[1]: Reloading requested from client PID 2085 ('systemctl') (unit session-9.scope)... Sep 13 00:10:56.321046 systemd[1]: Reloading... Sep 13 00:10:56.404832 zram_generator::config[2125]: No configuration found. Sep 13 00:10:56.708245 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:10:56.806104 systemd[1]: Reloading finished in 484 ms. Sep 13 00:10:56.857029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:56.860496 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:56.862467 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:10:56.862772 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:56.872113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:57.053556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:57.060374 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:10:57.107857 kubelet[2174]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:10:57.107857 kubelet[2174]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
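The replacement kubelet (PID 2174, started after the systemctl-driven daemon reload and unit restart) gets far enough to print its flag deprecation warnings: --container-runtime-endpoint above and --volume-plugin-dir just below both belong in the config file, per the messages themselves. A sketch of the KubeletConfiguration equivalents, extending the minimal file sketched earlier; field names follow kubelet.config.k8s.io/v1beta1, and the volume plugin path mirrors the Flexvolume directory the kubelet recreates further down:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    # replaces --volume-plugin-dir
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/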
Sep 13 00:10:57.107857 kubelet[2174]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:10:57.108405 kubelet[2174]: I0913 00:10:57.107925 2174 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:10:57.524704 kubelet[2174]: I0913 00:10:57.524653 2174 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:10:57.524704 kubelet[2174]: I0913 00:10:57.524689 2174 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:10:57.525042 kubelet[2174]: I0913 00:10:57.525020 2174 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:10:57.549647 kubelet[2174]: E0913 00:10:57.549587 2174 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:57.549813 kubelet[2174]: I0913 00:10:57.549693 2174 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:10:57.563584 kubelet[2174]: E0913 00:10:57.563527 2174 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:10:57.563584 kubelet[2174]: I0913 00:10:57.563567 2174 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:10:57.571415 kubelet[2174]: I0913 00:10:57.571379 2174 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:10:57.571547 kubelet[2174]: I0913 00:10:57.571513 2174 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:10:57.582728 kubelet[2174]: I0913 00:10:57.571688 2174 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:10:57.583016 kubelet[2174]: I0913 00:10:57.582731 2174 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:10:57.583171 kubelet[2174]: I0913 00:10:57.583033 2174 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:10:57.583171 kubelet[2174]: I0913 00:10:57.583047 2174 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:10:57.583240 kubelet[2174]: I0913 00:10:57.583227 2174 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:57.589703 kubelet[2174]: W0913 00:10:57.589600 2174 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 13 00:10:57.589889 kubelet[2174]: E0913 00:10:57.589704 2174 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:57.590998 kubelet[2174]: I0913 00:10:57.590943 2174 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:10:57.591055 kubelet[2174]: I0913 00:10:57.591037 2174 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:10:57.591142 kubelet[2174]: I0913 00:10:57.591121 2174 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:10:57.591200 kubelet[2174]: I0913 
00:10:57.591184 2174 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:10:57.593616 kubelet[2174]: W0913 00:10:57.592735 2174 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 13 00:10:57.593616 kubelet[2174]: E0913 00:10:57.592828 2174 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:57.594263 kubelet[2174]: I0913 00:10:57.594225 2174 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:10:57.594672 kubelet[2174]: I0913 00:10:57.594648 2174 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:10:57.594742 kubelet[2174]: W0913 00:10:57.594727 2174 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:10:57.597041 kubelet[2174]: I0913 00:10:57.597019 2174 server.go:1274] "Started kubelet" Sep 13 00:10:57.598178 kubelet[2174]: I0913 00:10:57.597212 2174 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:10:57.598178 kubelet[2174]: I0913 00:10:57.597415 2174 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:10:57.598178 kubelet[2174]: I0913 00:10:57.597746 2174 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:10:57.598632 kubelet[2174]: I0913 00:10:57.598594 2174 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:10:57.599821 kubelet[2174]: I0913 00:10:57.599057 2174 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:10:57.599821 kubelet[2174]: I0913 00:10:57.599299 2174 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:10:57.602428 kubelet[2174]: E0913 00:10:57.602011 2174 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:57.602428 kubelet[2174]: I0913 00:10:57.602061 2174 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:10:57.602428 kubelet[2174]: I0913 00:10:57.602270 2174 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:10:57.602428 kubelet[2174]: I0913 00:10:57.602352 2174 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:10:57.607944 kubelet[2174]: W0913 00:10:57.607363 2174 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 13 00:10:57.607944 kubelet[2174]: E0913 00:10:57.607429 2174 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:57.607944 kubelet[2174]: E0913 00:10:57.607709 2174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="200ms" Sep 13 00:10:57.608413 kubelet[2174]: I0913 00:10:57.608393 2174 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:10:57.608662 kubelet[2174]: I0913 00:10:57.608636 2174 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:10:57.610159 kubelet[2174]: E0913 00:10:57.610138 2174 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:10:57.615019 kubelet[2174]: E0913 00:10:57.610540 2174 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864af1499ddd12c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:10:57.596993836 +0000 UTC m=+0.532531054,LastTimestamp:2025-09-13 00:10:57.596993836 +0000 UTC m=+0.532531054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:10:57.617810 kubelet[2174]: I0913 00:10:57.617750 2174 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:10:57.634561 kubelet[2174]: I0913 00:10:57.634471 2174 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:10:57.637904 kubelet[2174]: I0913 00:10:57.637680 2174 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:10:57.637904 kubelet[2174]: I0913 00:10:57.637695 2174 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:10:57.637904 kubelet[2174]: I0913 00:10:57.637712 2174 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:57.638045 kubelet[2174]: I0913 00:10:57.638029 2174 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:10:57.638105 kubelet[2174]: I0913 00:10:57.638097 2174 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:10:57.638169 kubelet[2174]: I0913 00:10:57.638161 2174 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:10:57.638637 kubelet[2174]: E0913 00:10:57.638607 2174 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:10:57.638694 kubelet[2174]: W0913 00:10:57.638556 2174 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 13 00:10:57.638765 kubelet[2174]: E0913 00:10:57.638752 2174 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:57.702701 kubelet[2174]: E0913 00:10:57.702675 2174 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:57.739079 kubelet[2174]: E0913 00:10:57.739049 2174 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:10:57.803731 kubelet[2174]: E0913 00:10:57.803636 2174 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:57.809254 kubelet[2174]: E0913 00:10:57.809213 2174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="400ms" Sep 13 00:10:57.904551 kubelet[2174]: E0913 00:10:57.904499 2174 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:57.939756 kubelet[2174]: E0913 00:10:57.939712 2174 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:10:58.005262 kubelet[2174]: E0913 00:10:58.005210 2174 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:58.105503 kubelet[2174]: E0913 00:10:58.105343 2174 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:58.205956 kubelet[2174]: E0913 00:10:58.205888 2174 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:58.210606 kubelet[2174]: E0913 00:10:58.210561 2174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="800ms" Sep 13 00:10:58.306826 kubelet[2174]: E0913 00:10:58.306761 2174 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:58.339997 kubelet[2174]: E0913 00:10:58.339950 2174 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check 
may not have completed yet" Sep 13 00:10:58.407435 kubelet[2174]: E0913 00:10:58.407389 2174 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:58.507948 kubelet[2174]: E0913 00:10:58.507892 2174 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:58.607286 kubelet[2174]: E0913 00:10:58.607151 2174 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864af1499ddd12c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:10:57.596993836 +0000 UTC m=+0.532531054,LastTimestamp:2025-09-13 00:10:57.596993836 +0000 UTC m=+0.532531054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:10:58.608198 kubelet[2174]: E0913 00:10:58.608151 2174 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:58.657283 kubelet[2174]: I0913 00:10:58.657246 2174 policy_none.go:49] "None policy: Start" Sep 13 00:10:58.658094 kubelet[2174]: I0913 00:10:58.658012 2174 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:10:58.658094 kubelet[2174]: I0913 00:10:58.658041 2174 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:10:58.674809 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:10:58.694229 kubelet[2174]: W0913 00:10:58.694196 2174 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 13 00:10:58.694349 kubelet[2174]: E0913 00:10:58.694239 2174 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:58.695059 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:10:58.698480 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
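The lease-creation failures above retry on a doubling interval (200ms, then 400ms, then 800ms; 1.6s follows shortly after), the standard exponential-backoff pattern while the API server is still refusing connections. A minimal Go sketch of that pattern, assuming a 7s cap and using a plain TCP probe in place of the real lease client; the helper name is illustrative, not the kubelet's actual implementation:

    // backoff_sketch.go — illustrative only, not the kubelet's lease controller.
    // Mirrors the doubling retry interval seen in the log above; the 7s cap is
    // an assumption.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func ensureLease(addr string) error {
        interval := 200 * time.Millisecond
        const maxInterval = 7 * time.Second // assumed cap
        for attempt := 0; attempt < 8; attempt++ {
            conn, err := net.DialTimeout("tcp", addr, time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("API server reachable; lease would be created/renewed")
                return nil
            }
            fmt.Printf("ensure lease failed (%v); retrying in %s\n", err, interval)
            time.Sleep(interval)
            if interval*2 <= maxInterval {
                interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s -> ...
            }
        }
        return fmt.Errorf("lease endpoint %s still unreachable", addr)
    }

    func main() {
        if err := ensureLease("10.0.0.108:6443"); err != nil {
            fmt.Println(err)
        }
    }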
Sep 13 00:10:58.708566 kubelet[2174]: E0913 00:10:58.708531 2174 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:10:58.709776 kubelet[2174]: I0913 00:10:58.709750 2174 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:10:58.710034 kubelet[2174]: I0913 00:10:58.710004 2174 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:10:58.710093 kubelet[2174]: I0913 00:10:58.710020 2174 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:10:58.710290 kubelet[2174]: I0913 00:10:58.710249 2174 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:10:58.711241 kubelet[2174]: E0913 00:10:58.711215 2174 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:10:58.811986 kubelet[2174]: I0913 00:10:58.811933 2174 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:10:58.812329 kubelet[2174]: E0913 00:10:58.812307 2174 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 13 00:10:58.912435 kubelet[2174]: W0913 00:10:58.912263 2174 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 13 00:10:58.912435 kubelet[2174]: E0913 00:10:58.912335 2174 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:59.011588 kubelet[2174]: E0913 00:10:59.011522 2174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="1.6s" Sep 13 00:10:59.013622 kubelet[2174]: I0913 00:10:59.013586 2174 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:10:59.013868 kubelet[2174]: E0913 00:10:59.013830 2174 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 13 00:10:59.017152 kubelet[2174]: W0913 00:10:59.017084 2174 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 13 00:10:59.017211 kubelet[2174]: E0913 00:10:59.017151 2174 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:59.121063 
kubelet[2174]: W0913 00:10:59.121006 2174 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 13 00:10:59.121063 kubelet[2174]: E0913 00:10:59.121059 2174 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:59.148589 systemd[1]: Created slice kubepods-burstable-pod489579fb697f1808ddc128450ea82d68.slice - libcontainer container kubepods-burstable-pod489579fb697f1808ddc128450ea82d68.slice. Sep 13 00:10:59.162196 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 13 00:10:59.166215 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 13 00:10:59.211620 kubelet[2174]: I0913 00:10:59.211546 2174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:59.211620 kubelet[2174]: I0913 00:10:59.211600 2174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:59.211620 kubelet[2174]: I0913 00:10:59.211633 2174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/489579fb697f1808ddc128450ea82d68-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"489579fb697f1808ddc128450ea82d68\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:10:59.212154 kubelet[2174]: I0913 00:10:59.211652 2174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/489579fb697f1808ddc128450ea82d68-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"489579fb697f1808ddc128450ea82d68\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:10:59.212154 kubelet[2174]: I0913 00:10:59.211671 2174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/489579fb697f1808ddc128450ea82d68-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"489579fb697f1808ddc128450ea82d68\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:10:59.212154 kubelet[2174]: I0913 00:10:59.211693 2174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:10:59.212154 kubelet[2174]: I0913 00:10:59.211727 2174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:59.212154 kubelet[2174]: I0913 00:10:59.211762 2174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:59.212284 kubelet[2174]: I0913 00:10:59.211809 2174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:10:59.416605 kubelet[2174]: I0913 00:10:59.416446 2174 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:10:59.416939 kubelet[2174]: E0913 00:10:59.416908 2174 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 13 00:10:59.460379 kubelet[2174]: E0913 00:10:59.460315 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:59.461362 containerd[1465]: time="2025-09-13T00:10:59.461295762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:489579fb697f1808ddc128450ea82d68,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:59.465455 kubelet[2174]: E0913 00:10:59.465415 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:59.466028 containerd[1465]: time="2025-09-13T00:10:59.465989876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:59.469190 kubelet[2174]: E0913 00:10:59.469156 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:10:59.469596 containerd[1465]: time="2025-09-13T00:10:59.469562054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:59.743100 kubelet[2174]: E0913 00:10:59.742927 2174 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" 
logger="UnhandledError" Sep 13 00:11:00.002274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2792588330.mount: Deactivated successfully. Sep 13 00:11:00.011676 containerd[1465]: time="2025-09-13T00:11:00.011610854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:11:00.012821 containerd[1465]: time="2025-09-13T00:11:00.012755074Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:11:00.013705 containerd[1465]: time="2025-09-13T00:11:00.013663357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:11:00.014631 containerd[1465]: time="2025-09-13T00:11:00.014602541Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:11:00.015584 containerd[1465]: time="2025-09-13T00:11:00.015538840Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:11:00.016655 containerd[1465]: time="2025-09-13T00:11:00.016627369Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:11:00.017512 containerd[1465]: time="2025-09-13T00:11:00.017460654Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:11:00.020707 containerd[1465]: time="2025-09-13T00:11:00.020677084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:11:00.023028 containerd[1465]: time="2025-09-13T00:11:00.022997601Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 556.917651ms" Sep 13 00:11:00.023801 containerd[1465]: time="2025-09-13T00:11:00.023757101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 554.13338ms" Sep 13 00:11:00.024505 containerd[1465]: time="2025-09-13T00:11:00.024467143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 563.070972ms" Sep 13 00:11:00.219515 kubelet[2174]: I0913 00:11:00.219237 2174 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:11:00.220009 kubelet[2174]: E0913 
00:11:00.219791 2174 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 13 00:11:00.223009 containerd[1465]: time="2025-09-13T00:11:00.222738916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:00.223009 containerd[1465]: time="2025-09-13T00:11:00.222851914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:00.223009 containerd[1465]: time="2025-09-13T00:11:00.222867029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:00.223009 containerd[1465]: time="2025-09-13T00:11:00.222955729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:00.223313 containerd[1465]: time="2025-09-13T00:11:00.223045483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:00.223313 containerd[1465]: time="2025-09-13T00:11:00.223114586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:00.223313 containerd[1465]: time="2025-09-13T00:11:00.223128680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:00.223700 containerd[1465]: time="2025-09-13T00:11:00.223550760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:00.224945 containerd[1465]: time="2025-09-13T00:11:00.224412511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:00.224945 containerd[1465]: time="2025-09-13T00:11:00.224454751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:00.224945 containerd[1465]: time="2025-09-13T00:11:00.224464715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:00.224945 containerd[1465]: time="2025-09-13T00:11:00.224539262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:00.266948 systemd[1]: Started cri-containerd-c39e11bce6876e125aab85121c2ff2721581d05bd87ba559f4b0aa041021d5e8.scope - libcontainer container c39e11bce6876e125aab85121c2ff2721581d05bd87ba559f4b0aa041021d5e8. Sep 13 00:11:00.273662 systemd[1]: Started cri-containerd-bf89b76b989a539ca14353e679e21561cee579db4dd1a30b11b906946ce9c2ea.scope - libcontainer container bf89b76b989a539ca14353e679e21561cee579db4dd1a30b11b906946ce9c2ea. Sep 13 00:11:00.276356 systemd[1]: Started cri-containerd-cf6a452f7ba7f0bf8d21b0a81e84d2dc74b94a01c55daac050796f73000dfb7f.scope - libcontainer container cf6a452f7ba7f0bf8d21b0a81e84d2dc74b94a01c55daac050796f73000dfb7f. 
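The three transient cri-containerd-*.scope units above hold the pause sandboxes for the static control-plane pods; the entries that follow show the standard CRI sequence against containerd: RunPodSandbox returns a sandbox id, CreateContainer is issued inside it, then StartContainer. A minimal sketch of that sequence using the published CRI client types; the socket path and the scheduler image tag are assumptions, and error handling is reduced to log.Fatal:

    // cri_flow_sketch.go — a minimal sketch of the CRI call sequence shown in
    // the following log entries (RunPodSandbox -> CreateContainer ->
    // StartContainer). Socket path and image tag are assumptions.
    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-scheduler-localhost",
                Namespace: "kube-system",
                Uid:       "fe5e332fba00ba0b5b33a25fe2e8fd7b",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.31.8"}, // assumed tag
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            log.Fatal(err)
        }
    }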
Sep 13 00:11:00.340809 containerd[1465]: time="2025-09-13T00:11:00.339175381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c39e11bce6876e125aab85121c2ff2721581d05bd87ba559f4b0aa041021d5e8\"" Sep 13 00:11:00.340952 kubelet[2174]: E0913 00:11:00.340438 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:00.343197 containerd[1465]: time="2025-09-13T00:11:00.343154058Z" level=info msg="CreateContainer within sandbox \"c39e11bce6876e125aab85121c2ff2721581d05bd87ba559f4b0aa041021d5e8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:11:00.350258 containerd[1465]: time="2025-09-13T00:11:00.350210557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:489579fb697f1808ddc128450ea82d68,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf89b76b989a539ca14353e679e21561cee579db4dd1a30b11b906946ce9c2ea\"" Sep 13 00:11:00.350970 kubelet[2174]: E0913 00:11:00.350949 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:00.355728 containerd[1465]: time="2025-09-13T00:11:00.355694450Z" level=info msg="CreateContainer within sandbox \"bf89b76b989a539ca14353e679e21561cee579db4dd1a30b11b906946ce9c2ea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:11:00.360171 containerd[1465]: time="2025-09-13T00:11:00.360147319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf6a452f7ba7f0bf8d21b0a81e84d2dc74b94a01c55daac050796f73000dfb7f\"" Sep 13 00:11:00.360969 kubelet[2174]: E0913 00:11:00.360827 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:00.362575 containerd[1465]: time="2025-09-13T00:11:00.362550042Z" level=info msg="CreateContainer within sandbox \"cf6a452f7ba7f0bf8d21b0a81e84d2dc74b94a01c55daac050796f73000dfb7f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:11:00.372446 containerd[1465]: time="2025-09-13T00:11:00.372320690Z" level=info msg="CreateContainer within sandbox \"c39e11bce6876e125aab85121c2ff2721581d05bd87ba559f4b0aa041021d5e8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ffde1324e7d14db5ad9e57b2c1e13689b5b32cde34b341ac8bbdae4eb5f3f9d1\"" Sep 13 00:11:00.373012 containerd[1465]: time="2025-09-13T00:11:00.372971332Z" level=info msg="StartContainer for \"ffde1324e7d14db5ad9e57b2c1e13689b5b32cde34b341ac8bbdae4eb5f3f9d1\"" Sep 13 00:11:00.393519 containerd[1465]: time="2025-09-13T00:11:00.393276731Z" level=info msg="CreateContainer within sandbox \"bf89b76b989a539ca14353e679e21561cee579db4dd1a30b11b906946ce9c2ea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"13cdf8464d65cbe0cdc8d5d8aa548174056d3f0baaf271d522a28d8d487eb488\"" Sep 13 00:11:00.394822 containerd[1465]: time="2025-09-13T00:11:00.394722496Z" level=info msg="StartContainer for \"13cdf8464d65cbe0cdc8d5d8aa548174056d3f0baaf271d522a28d8d487eb488\"" Sep 13 00:11:00.397930 
systemd[1]: Started cri-containerd-ffde1324e7d14db5ad9e57b2c1e13689b5b32cde34b341ac8bbdae4eb5f3f9d1.scope - libcontainer container ffde1324e7d14db5ad9e57b2c1e13689b5b32cde34b341ac8bbdae4eb5f3f9d1. Sep 13 00:11:00.398945 containerd[1465]: time="2025-09-13T00:11:00.398909647Z" level=info msg="CreateContainer within sandbox \"cf6a452f7ba7f0bf8d21b0a81e84d2dc74b94a01c55daac050796f73000dfb7f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"93fdd1b68c0b2ce5514be269294550397edf27ce93c7a1b5d32e893d79e7767a\"" Sep 13 00:11:00.399409 containerd[1465]: time="2025-09-13T00:11:00.399372112Z" level=info msg="StartContainer for \"93fdd1b68c0b2ce5514be269294550397edf27ce93c7a1b5d32e893d79e7767a\"" Sep 13 00:11:00.469101 systemd[1]: Started cri-containerd-13cdf8464d65cbe0cdc8d5d8aa548174056d3f0baaf271d522a28d8d487eb488.scope - libcontainer container 13cdf8464d65cbe0cdc8d5d8aa548174056d3f0baaf271d522a28d8d487eb488. Sep 13 00:11:00.472958 systemd[1]: Started cri-containerd-93fdd1b68c0b2ce5514be269294550397edf27ce93c7a1b5d32e893d79e7767a.scope - libcontainer container 93fdd1b68c0b2ce5514be269294550397edf27ce93c7a1b5d32e893d79e7767a. Sep 13 00:11:00.490869 containerd[1465]: time="2025-09-13T00:11:00.490690618Z" level=info msg="StartContainer for \"ffde1324e7d14db5ad9e57b2c1e13689b5b32cde34b341ac8bbdae4eb5f3f9d1\" returns successfully" Sep 13 00:11:00.533249 containerd[1465]: time="2025-09-13T00:11:00.533109077Z" level=info msg="StartContainer for \"93fdd1b68c0b2ce5514be269294550397edf27ce93c7a1b5d32e893d79e7767a\" returns successfully" Sep 13 00:11:00.533249 containerd[1465]: time="2025-09-13T00:11:00.533200654Z" level=info msg="StartContainer for \"13cdf8464d65cbe0cdc8d5d8aa548174056d3f0baaf271d522a28d8d487eb488\" returns successfully" Sep 13 00:11:00.648629 kubelet[2174]: E0913 00:11:00.648539 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:00.662790 kubelet[2174]: E0913 00:11:00.662749 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:00.664096 kubelet[2174]: E0913 00:11:00.664071 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:01.686518 kubelet[2174]: E0913 00:11:01.686483 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:01.710768 update_engine[1452]: I20250913 00:11:01.709837 1452 update_attempter.cc:509] Updating boot flags... 
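The recurring "Nameserver limits exceeded" errors stem from the glibc resolver honoring at most three nameservers, so the kubelet trims the host's resolv.conf before handing it to pod sandboxes and logs the applied line (here 1.1.1.1 1.0.0.1 8.8.8.8). A small Go re-implementation of that trimming; the fourth nameserver in the sample input is assumed, since the omitted entry is never logged:

    // resolvconf_sketch.go — why the kubelet warns above: only the first
    // three nameservers (glibc MAXNS) are applied to pod sandboxes.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS

    func trimNameservers(resolvConf string) []string {
        var ns []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) == 2 && fields[0] == "nameserver" {
                ns = append(ns, fields[1])
            }
        }
        if len(ns) > maxNameservers {
            fmt.Printf("limit exceeded: omitting %v\n", ns[maxNameservers:])
            ns = ns[:maxNameservers]
        }
        return ns
    }

    func main() {
        // 8.8.4.4 is an assumed fourth entry for illustration.
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
        fmt.Println("applied:", trimNameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }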
Sep 13 00:11:01.795813 kubelet[2174]: E0913 00:11:01.794646 2174 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:11:01.802810 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2464) Sep 13 00:11:01.822036 kubelet[2174]: I0913 00:11:01.821987 2174 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:11:01.847887 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2468) Sep 13 00:11:01.962879 kubelet[2174]: I0913 00:11:01.962709 2174 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:11:02.595214 kubelet[2174]: I0913 00:11:02.595147 2174 apiserver.go:52] "Watching apiserver" Sep 13 00:11:02.603014 kubelet[2174]: I0913 00:11:02.602982 2174 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:11:02.890026 kubelet[2174]: E0913 00:11:02.889872 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:03.800448 kubelet[2174]: E0913 00:11:03.800411 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:03.851874 systemd[1]: Reloading requested from client PID 2472 ('systemctl') (unit session-9.scope)... Sep 13 00:11:03.851892 systemd[1]: Reloading... Sep 13 00:11:03.933820 zram_generator::config[2515]: No configuration found. Sep 13 00:11:04.041732 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:11:04.140822 systemd[1]: Reloading finished in 288 ms. Sep 13 00:11:04.191697 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:11:04.215574 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:11:04.215891 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:11:04.224993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:11:04.393561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:11:04.405306 (kubelet)[2556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:11:04.445000 kubelet[2556]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:11:04.445000 kubelet[2556]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:11:04.445000 kubelet[2556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
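The deprecation warnings above point at flags that now belong in the kubelet's config file. A sketch that emits the config-file equivalents of two of them via the published KubeletConfiguration types; the containerd socket value is an assumption, while the volume plugin dir matches the Flexvolume path logged earlier in this boot:

    // kubelet_config_sketch.go — config-file equivalents of the deprecated
    // --container-runtime-endpoint and --volume-plugin-dir flags.
    package main

    import (
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        cfg := kubeletv1beta1.KubeletConfiguration{
            TypeMeta: metav1.TypeMeta{
                APIVersion: "kubelet.config.k8s.io/v1beta1",
                Kind:       "KubeletConfiguration",
            },
            ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock", // assumed
            VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
        }
        out, err := yaml.Marshal(cfg)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }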
Sep 13 00:11:04.445521 kubelet[2556]: I0913 00:11:04.445065 2556 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:11:04.452930 kubelet[2556]: I0913 00:11:04.452873 2556 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:11:04.452930 kubelet[2556]: I0913 00:11:04.452917 2556 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:11:04.453254 kubelet[2556]: I0913 00:11:04.453231 2556 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:11:04.454567 kubelet[2556]: I0913 00:11:04.454549 2556 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:11:04.457040 kubelet[2556]: I0913 00:11:04.456860 2556 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:11:04.460375 kubelet[2556]: E0913 00:11:04.460348 2556 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:11:04.460375 kubelet[2556]: I0913 00:11:04.460373 2556 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:11:04.466534 kubelet[2556]: I0913 00:11:04.466495 2556 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:11:04.466663 kubelet[2556]: I0913 00:11:04.466642 2556 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:11:04.466860 kubelet[2556]: I0913 00:11:04.466772 2556 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:11:04.467072 kubelet[2556]: I0913 00:11:04.466834 2556 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:11:04.467072 kubelet[2556]: I0913 00:11:04.467029 2556 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:11:04.467072 kubelet[2556]: I0913 00:11:04.467038 2556 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:11:04.467072 kubelet[2556]: I0913 00:11:04.467068 2556 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:11:04.467279 kubelet[2556]: I0913 00:11:04.467202 2556 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:11:04.467279 kubelet[2556]: I0913 00:11:04.467217 2556 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:11:04.467279 kubelet[2556]: I0913 00:11:04.467260 2556 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:11:04.467279 kubelet[2556]: I0913 00:11:04.467272 2556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:11:04.469129 kubelet[2556]: I0913 00:11:04.469098 2556 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:11:04.474660 kubelet[2556]: I0913 00:11:04.473773 2556 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:11:04.474660 kubelet[2556]: I0913 00:11:04.474647 2556 server.go:1274] "Started kubelet" Sep 13 00:11:04.476608 kubelet[2556]: I0913 00:11:04.476577 2556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:11:04.477350 kubelet[2556]: I0913 00:11:04.477303 2556 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:11:04.477725 kubelet[2556]: I0913 00:11:04.477678 2556 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:11:04.478441 kubelet[2556]: I0913 00:11:04.478417 2556 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:11:04.480337 kubelet[2556]: I0913 00:11:04.480157 2556 server.go:449] "Adding 
debug handlers to kubelet server" Sep 13 00:11:04.481648 kubelet[2556]: I0913 00:11:04.480165 2556 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:11:04.481722 kubelet[2556]: I0913 00:11:04.481656 2556 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:11:04.481859 kubelet[2556]: I0913 00:11:04.481811 2556 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:11:04.485314 kubelet[2556]: I0913 00:11:04.484697 2556 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:11:04.490755 kubelet[2556]: I0913 00:11:04.490723 2556 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:11:04.492646 kubelet[2556]: I0913 00:11:04.492624 2556 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:11:04.492987 kubelet[2556]: I0913 00:11:04.492963 2556 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:11:04.493444 kubelet[2556]: E0913 00:11:04.493421 2556 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:11:04.497147 kubelet[2556]: I0913 00:11:04.497079 2556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:11:04.499257 kubelet[2556]: I0913 00:11:04.499203 2556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:11:04.499257 kubelet[2556]: I0913 00:11:04.499231 2556 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:11:04.499257 kubelet[2556]: I0913 00:11:04.499249 2556 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:11:04.499556 kubelet[2556]: E0913 00:11:04.499293 2556 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:11:04.530587 kubelet[2556]: I0913 00:11:04.530541 2556 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:11:04.530587 kubelet[2556]: I0913 00:11:04.530565 2556 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:11:04.530587 kubelet[2556]: I0913 00:11:04.530593 2556 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:11:04.530810 kubelet[2556]: I0913 00:11:04.530793 2556 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:11:04.530843 kubelet[2556]: I0913 00:11:04.530810 2556 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:11:04.530843 kubelet[2556]: I0913 00:11:04.530833 2556 policy_none.go:49] "None policy: Start" Sep 13 00:11:04.532390 kubelet[2556]: I0913 00:11:04.531398 2556 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:11:04.532390 kubelet[2556]: I0913 00:11:04.531427 2556 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:11:04.532390 kubelet[2556]: I0913 00:11:04.531599 2556 state_mem.go:75] "Updated machine memory state" Sep 13 00:11:04.536115 kubelet[2556]: I0913 00:11:04.536081 2556 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:11:04.536619 kubelet[2556]: I0913 00:11:04.536311 2556 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 
00:11:04.536619 kubelet[2556]: I0913 00:11:04.536330 2556 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:11:04.536619 kubelet[2556]: I0913 00:11:04.536583 2556 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:11:04.607407 kubelet[2556]: E0913 00:11:04.607347 2556 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:11:04.643966 kubelet[2556]: I0913 00:11:04.643831 2556 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:11:04.651462 kubelet[2556]: I0913 00:11:04.651429 2556 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 13 00:11:04.651619 kubelet[2556]: I0913 00:11:04.651506 2556 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:11:04.693961 kubelet[2556]: I0913 00:11:04.693902 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/489579fb697f1808ddc128450ea82d68-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"489579fb697f1808ddc128450ea82d68\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:11:04.693961 kubelet[2556]: I0913 00:11:04.693940 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:11:04.693961 kubelet[2556]: I0913 00:11:04.693965 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:11:04.694214 kubelet[2556]: I0913 00:11:04.694001 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:11:04.694214 kubelet[2556]: I0913 00:11:04.694022 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/489579fb697f1808ddc128450ea82d68-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"489579fb697f1808ddc128450ea82d68\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:11:04.694214 kubelet[2556]: I0913 00:11:04.694076 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/489579fb697f1808ddc128450ea82d68-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"489579fb697f1808ddc128450ea82d68\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:11:04.694214 kubelet[2556]: I0913 00:11:04.694098 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:11:04.694214 kubelet[2556]: I0913 00:11:04.694113 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:11:04.694330 kubelet[2556]: I0913 00:11:04.694128 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:11:04.907128 kubelet[2556]: E0913 00:11:04.907060 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:04.907971 kubelet[2556]: E0913 00:11:04.907914 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:04.908077 kubelet[2556]: E0913 00:11:04.908044 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:05.484068 kubelet[2556]: I0913 00:11:05.483807 2556 apiserver.go:52] "Watching apiserver" Sep 13 00:11:05.493585 kubelet[2556]: I0913 00:11:05.493070 2556 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:11:05.514497 kubelet[2556]: E0913 00:11:05.514455 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:05.514962 kubelet[2556]: E0913 00:11:05.514931 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:05.515219 kubelet[2556]: E0913 00:11:05.515199 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:05.533283 kubelet[2556]: I0913 00:11:05.533205 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.533183071 podStartE2EDuration="3.533183071s" podCreationTimestamp="2025-09-13 00:11:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:05.533148974 +0000 UTC m=+1.122969516" watchObservedRunningTime="2025-09-13 00:11:05.533183071 +0000 UTC m=+1.123003593" Sep 13 00:11:05.879006 kubelet[2556]: I0913 00:11:05.878843 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.878818852 podStartE2EDuration="1.878818852s" podCreationTimestamp="2025-09-13 00:11:04 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:05.843451367 +0000 UTC m=+1.433271899" watchObservedRunningTime="2025-09-13 00:11:05.878818852 +0000 UTC m=+1.468639374" Sep 13 00:11:05.879006 kubelet[2556]: I0913 00:11:05.878984 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.878978552 podStartE2EDuration="1.878978552s" podCreationTimestamp="2025-09-13 00:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:05.878914708 +0000 UTC m=+1.468735230" watchObservedRunningTime="2025-09-13 00:11:05.878978552 +0000 UTC m=+1.468799074" Sep 13 00:11:06.515392 kubelet[2556]: E0913 00:11:06.515355 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:09.892924 kubelet[2556]: I0913 00:11:09.892886 2556 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:11:09.893359 containerd[1465]: time="2025-09-13T00:11:09.893259143Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:11:09.893670 kubelet[2556]: I0913 00:11:09.893426 2556 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:11:10.818279 systemd[1]: Created slice kubepods-besteffort-pod6810fca4_3a9b_4ef3_b83e_7ceaf8e01489.slice - libcontainer container kubepods-besteffort-pod6810fca4_3a9b_4ef3_b83e_7ceaf8e01489.slice. 
Sep 13 00:11:10.830034 kubelet[2556]: I0913 00:11:10.829975 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6810fca4-3a9b-4ef3-b83e-7ceaf8e01489-kube-proxy\") pod \"kube-proxy-np6d6\" (UID: \"6810fca4-3a9b-4ef3-b83e-7ceaf8e01489\") " pod="kube-system/kube-proxy-np6d6" Sep 13 00:11:10.830034 kubelet[2556]: I0913 00:11:10.830018 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-672pm\" (UniqueName: \"kubernetes.io/projected/6810fca4-3a9b-4ef3-b83e-7ceaf8e01489-kube-api-access-672pm\") pod \"kube-proxy-np6d6\" (UID: \"6810fca4-3a9b-4ef3-b83e-7ceaf8e01489\") " pod="kube-system/kube-proxy-np6d6" Sep 13 00:11:10.830034 kubelet[2556]: I0913 00:11:10.830039 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6810fca4-3a9b-4ef3-b83e-7ceaf8e01489-xtables-lock\") pod \"kube-proxy-np6d6\" (UID: \"6810fca4-3a9b-4ef3-b83e-7ceaf8e01489\") " pod="kube-system/kube-proxy-np6d6" Sep 13 00:11:10.830244 kubelet[2556]: I0913 00:11:10.830056 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6810fca4-3a9b-4ef3-b83e-7ceaf8e01489-lib-modules\") pod \"kube-proxy-np6d6\" (UID: \"6810fca4-3a9b-4ef3-b83e-7ceaf8e01489\") " pod="kube-system/kube-proxy-np6d6" Sep 13 00:11:10.973930 kubelet[2556]: E0913 00:11:10.973877 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:11.034328 systemd[1]: Created slice kubepods-besteffort-podaaa43830_68b5_4485_a7cf_04d79778d971.slice - libcontainer container kubepods-besteffort-podaaa43830_68b5_4485_a7cf_04d79778d971.slice. Sep 13 00:11:11.126326 kubelet[2556]: E0913 00:11:11.126196 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:11.126950 containerd[1465]: time="2025-09-13T00:11:11.126911555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-np6d6,Uid:6810fca4-3a9b-4ef3-b83e-7ceaf8e01489,Namespace:kube-system,Attempt:0,}" Sep 13 00:11:11.131301 kubelet[2556]: I0913 00:11:11.131268 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jfqv\" (UniqueName: \"kubernetes.io/projected/aaa43830-68b5-4485-a7cf-04d79778d971-kube-api-access-5jfqv\") pod \"tigera-operator-58fc44c59b-fzcxn\" (UID: \"aaa43830-68b5-4485-a7cf-04d79778d971\") " pod="tigera-operator/tigera-operator-58fc44c59b-fzcxn" Sep 13 00:11:11.131369 kubelet[2556]: I0913 00:11:11.131311 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aaa43830-68b5-4485-a7cf-04d79778d971-var-lib-calico\") pod \"tigera-operator-58fc44c59b-fzcxn\" (UID: \"aaa43830-68b5-4485-a7cf-04d79778d971\") " pod="tigera-operator/tigera-operator-58fc44c59b-fzcxn" Sep 13 00:11:11.155471 containerd[1465]: time="2025-09-13T00:11:11.155262082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:11.155471 containerd[1465]: time="2025-09-13T00:11:11.155330139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:11.155471 containerd[1465]: time="2025-09-13T00:11:11.155341955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:11.155471 containerd[1465]: time="2025-09-13T00:11:11.155427591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:11.188956 systemd[1]: Started cri-containerd-27ada8e4e2658e94a7f515f98be35d39bbabd967ef71db4e5aafb3dc2f59ff17.scope - libcontainer container 27ada8e4e2658e94a7f515f98be35d39bbabd967ef71db4e5aafb3dc2f59ff17. Sep 13 00:11:11.190538 kubelet[2556]: E0913 00:11:11.190512 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:11.281827 containerd[1465]: time="2025-09-13T00:11:11.281313163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-np6d6,Uid:6810fca4-3a9b-4ef3-b83e-7ceaf8e01489,Namespace:kube-system,Attempt:0,} returns sandbox id \"27ada8e4e2658e94a7f515f98be35d39bbabd967ef71db4e5aafb3dc2f59ff17\"" Sep 13 00:11:11.282229 kubelet[2556]: E0913 00:11:11.282199 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:11.285367 containerd[1465]: time="2025-09-13T00:11:11.285335117Z" level=info msg="CreateContainer within sandbox \"27ada8e4e2658e94a7f515f98be35d39bbabd967ef71db4e5aafb3dc2f59ff17\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:11:11.304017 containerd[1465]: time="2025-09-13T00:11:11.303962543Z" level=info msg="CreateContainer within sandbox \"27ada8e4e2658e94a7f515f98be35d39bbabd967ef71db4e5aafb3dc2f59ff17\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9a0ee1eede47eb34bf1035595052b83bd50837361e1aa5707e20c40087f3ad88\"" Sep 13 00:11:11.304597 containerd[1465]: time="2025-09-13T00:11:11.304569389Z" level=info msg="StartContainer for \"9a0ee1eede47eb34bf1035595052b83bd50837361e1aa5707e20c40087f3ad88\"" Sep 13 00:11:11.333918 systemd[1]: Started cri-containerd-9a0ee1eede47eb34bf1035595052b83bd50837361e1aa5707e20c40087f3ad88.scope - libcontainer container 9a0ee1eede47eb34bf1035595052b83bd50837361e1aa5707e20c40087f3ad88. Sep 13 00:11:11.337131 containerd[1465]: time="2025-09-13T00:11:11.337093839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-fzcxn,Uid:aaa43830-68b5-4485-a7cf-04d79778d971,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:11:11.368468 containerd[1465]: time="2025-09-13T00:11:11.368317127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:11.368468 containerd[1465]: time="2025-09-13T00:11:11.368378429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:11.368468 containerd[1465]: time="2025-09-13T00:11:11.368408004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:11.368772 containerd[1465]: time="2025-09-13T00:11:11.368506107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:11.374482 containerd[1465]: time="2025-09-13T00:11:11.373208868Z" level=info msg="StartContainer for \"9a0ee1eede47eb34bf1035595052b83bd50837361e1aa5707e20c40087f3ad88\" returns successfully" Sep 13 00:11:11.394973 systemd[1]: Started cri-containerd-5923ddcf7b944403c89e85b33380af2ac17b2dbcb720cd510c8f69f4f82856b7.scope - libcontainer container 5923ddcf7b944403c89e85b33380af2ac17b2dbcb720cd510c8f69f4f82856b7. Sep 13 00:11:11.439391 containerd[1465]: time="2025-09-13T00:11:11.439351119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-fzcxn,Uid:aaa43830-68b5-4485-a7cf-04d79778d971,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5923ddcf7b944403c89e85b33380af2ac17b2dbcb720cd510c8f69f4f82856b7\"" Sep 13 00:11:11.441429 containerd[1465]: time="2025-09-13T00:11:11.441392608Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:11:11.524717 kubelet[2556]: E0913 00:11:11.524648 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:11.525304 kubelet[2556]: E0913 00:11:11.525267 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:11.525444 kubelet[2556]: E0913 00:11:11.525422 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:11.635847 kubelet[2556]: I0913 00:11:11.635799 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-np6d6" podStartSLOduration=1.63576184 podStartE2EDuration="1.63576184s" podCreationTimestamp="2025-09-13 00:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:11.635636037 +0000 UTC m=+7.225456569" watchObservedRunningTime="2025-09-13 00:11:11.63576184 +0000 UTC m=+7.225582352" Sep 13 00:11:12.527396 kubelet[2556]: E0913 00:11:12.527352 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:12.841198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3638011550.mount: Deactivated successfully. 
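
The kube-proxy-np6d6 records above walk the standard CRI sequence end to end: RunPodSandbox returns a sandbox id (27ada8e4...), CreateContainer registers the kube-proxy container inside that sandbox (9a0ee1ee...), systemd starts a matching cri-containerd-<id>.scope transient unit, and StartContainer reports success. A compressed sketch of the same sequence against the public CRI API (k8s.io/cri-api); the configs and error handling are elided, and the socket path is containerd's conventional default rather than anything read from this log:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // RunPodSandbox -> CreateContainer -> StartContainer, as in the log.
        sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{ /* name, namespace, uid, ... */ },
        })
        ctr, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config:       &runtimeapi.ContainerConfig{ /* image, command, mounts, ... */ },
        })
        rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
    }
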
Sep 13 00:11:13.178952 containerd[1465]: time="2025-09-13T00:11:13.178901035Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:13.179888 containerd[1465]: time="2025-09-13T00:11:13.179846433Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:11:13.181027 containerd[1465]: time="2025-09-13T00:11:13.180981378Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:13.184709 containerd[1465]: time="2025-09-13T00:11:13.184668420Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:13.185662 containerd[1465]: time="2025-09-13T00:11:13.185632518Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.744204504s" Sep 13 00:11:13.185696 containerd[1465]: time="2025-09-13T00:11:13.185665870Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:11:13.187559 containerd[1465]: time="2025-09-13T00:11:13.187528807Z" level=info msg="CreateContainer within sandbox \"5923ddcf7b944403c89e85b33380af2ac17b2dbcb720cd510c8f69f4f82856b7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:11:13.202991 containerd[1465]: time="2025-09-13T00:11:13.202947852Z" level=info msg="CreateContainer within sandbox \"5923ddcf7b944403c89e85b33380af2ac17b2dbcb720cd510c8f69f4f82856b7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a58f5cd2d2fd95f96141cf4a251cde26af457298270be005b5f1a8080ff27733\"" Sep 13 00:11:13.203545 containerd[1465]: time="2025-09-13T00:11:13.203515340Z" level=info msg="StartContainer for \"a58f5cd2d2fd95f96141cf4a251cde26af457298270be005b5f1a8080ff27733\"" Sep 13 00:11:13.246908 systemd[1]: Started cri-containerd-a58f5cd2d2fd95f96141cf4a251cde26af457298270be005b5f1a8080ff27733.scope - libcontainer container a58f5cd2d2fd95f96141cf4a251cde26af457298270be005b5f1a8080ff27733. 
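
The pull timing above is internally consistent: PullImage for quay.io/tigera/operator:v1.38.6 is logged at 00:11:11.441392608, the pull reports "in 1.744204504s", and adding the two lands within a few dozen microseconds of the "Pulled image" record at 00:11:13.185632518. A quick check:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the log records above.
        start, _ := time.Parse(time.RFC3339Nano, "2025-09-13T00:11:11.441392608Z")
        d := 1744204504 * time.Nanosecond // "in 1.744204504s"
        fmt.Println(start.Add(d).Format(time.RFC3339Nano))
        // 2025-09-13T00:11:13.185597112Z, just ahead of the "Pulled image" record
    }
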
Sep 13 00:11:13.383162 containerd[1465]: time="2025-09-13T00:11:13.383113712Z" level=info msg="StartContainer for \"a58f5cd2d2fd95f96141cf4a251cde26af457298270be005b5f1a8080ff27733\" returns successfully" Sep 13 00:11:13.681876 kubelet[2556]: I0913 00:11:13.681806 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-fzcxn" podStartSLOduration=1.935986124 podStartE2EDuration="3.681776049s" podCreationTimestamp="2025-09-13 00:11:10 +0000 UTC" firstStartedPulling="2025-09-13 00:11:11.440604378 +0000 UTC m=+7.030424900" lastFinishedPulling="2025-09-13 00:11:13.186394303 +0000 UTC m=+8.776214825" observedRunningTime="2025-09-13 00:11:13.681670372 +0000 UTC m=+9.271490894" watchObservedRunningTime="2025-09-13 00:11:13.681776049 +0000 UTC m=+9.271596561" Sep 13 00:11:14.394633 kubelet[2556]: E0913 00:11:14.394582 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:14.532701 kubelet[2556]: E0913 00:11:14.532652 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:19.605583 sudo[1653]: pam_unix(sudo:session): session closed for user root Sep 13 00:11:19.823420 sshd[1650]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:19.827351 systemd[1]: sshd@8-10.0.0.108:22-10.0.0.1:39222.service: Deactivated successfully. Sep 13 00:11:19.830853 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:11:19.831317 systemd[1]: session-9.scope: Consumed 5.723s CPU time, 155.4M memory peak, 0B memory swap peak. Sep 13 00:11:19.832054 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:11:19.834903 systemd-logind[1450]: Removed session 9. Sep 13 00:11:22.516210 systemd[1]: Created slice kubepods-besteffort-pod09e76429_c569_4597_8a56_c159082bb53d.slice - libcontainer container kubepods-besteffort-pod09e76429_c569_4597_8a56_c159082bb53d.slice. 
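
The tigera-operator startup record above makes the SLO arithmetic explicit: this pod did pull an image, and podStartSLOduration is exactly podStartE2EDuration minus the pull window (lastFinishedPulling minus firstStartedPulling). Checked against the logged timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST"
        first, _ := time.Parse(layout, "2025-09-13 00:11:11.440604378 +0000 UTC")
        last, _ := time.Parse(layout, "2025-09-13 00:11:13.186394303 +0000 UTC")
        e2e := 3681776049 * time.Nanosecond // podStartE2EDuration="3.681776049s"
        fmt.Println(e2e - last.Sub(first))  // 1.935986124s, exactly the logged podStartSLOduration
    }
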
Sep 13 00:11:22.521310 kubelet[2556]: I0913 00:11:22.521258 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/09e76429-c569-4597-8a56-c159082bb53d-typha-certs\") pod \"calico-typha-84c4ccf776-bfpb4\" (UID: \"09e76429-c569-4597-8a56-c159082bb53d\") " pod="calico-system/calico-typha-84c4ccf776-bfpb4" Sep 13 00:11:22.521859 kubelet[2556]: I0913 00:11:22.521344 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09e76429-c569-4597-8a56-c159082bb53d-tigera-ca-bundle\") pod \"calico-typha-84c4ccf776-bfpb4\" (UID: \"09e76429-c569-4597-8a56-c159082bb53d\") " pod="calico-system/calico-typha-84c4ccf776-bfpb4" Sep 13 00:11:22.521859 kubelet[2556]: I0913 00:11:22.521375 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rdsf\" (UniqueName: \"kubernetes.io/projected/09e76429-c569-4597-8a56-c159082bb53d-kube-api-access-5rdsf\") pod \"calico-typha-84c4ccf776-bfpb4\" (UID: \"09e76429-c569-4597-8a56-c159082bb53d\") " pod="calico-system/calico-typha-84c4ccf776-bfpb4" Sep 13 00:11:22.597276 systemd[1]: Created slice kubepods-besteffort-pod424c836b_c9d5_452a_9d1a_d8fd765df63d.slice - libcontainer container kubepods-besteffort-pod424c836b_c9d5_452a_9d1a_d8fd765df63d.slice. Sep 13 00:11:22.622081 kubelet[2556]: I0913 00:11:22.622022 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/424c836b-c9d5-452a-9d1a-d8fd765df63d-cni-bin-dir\") pod \"calico-node-sp2d7\" (UID: \"424c836b-c9d5-452a-9d1a-d8fd765df63d\") " pod="calico-system/calico-node-sp2d7" Sep 13 00:11:22.622289 kubelet[2556]: I0913 00:11:22.622102 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/424c836b-c9d5-452a-9d1a-d8fd765df63d-flexvol-driver-host\") pod \"calico-node-sp2d7\" (UID: \"424c836b-c9d5-452a-9d1a-d8fd765df63d\") " pod="calico-system/calico-node-sp2d7" Sep 13 00:11:22.622380 kubelet[2556]: I0913 00:11:22.622350 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/424c836b-c9d5-452a-9d1a-d8fd765df63d-var-lib-calico\") pod \"calico-node-sp2d7\" (UID: \"424c836b-c9d5-452a-9d1a-d8fd765df63d\") " pod="calico-system/calico-node-sp2d7" Sep 13 00:11:22.622438 kubelet[2556]: I0913 00:11:22.622398 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/424c836b-c9d5-452a-9d1a-d8fd765df63d-cni-net-dir\") pod \"calico-node-sp2d7\" (UID: \"424c836b-c9d5-452a-9d1a-d8fd765df63d\") " pod="calico-system/calico-node-sp2d7" Sep 13 00:11:22.622438 kubelet[2556]: I0913 00:11:22.622431 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8xqz\" (UniqueName: \"kubernetes.io/projected/424c836b-c9d5-452a-9d1a-d8fd765df63d-kube-api-access-c8xqz\") pod \"calico-node-sp2d7\" (UID: \"424c836b-c9d5-452a-9d1a-d8fd765df63d\") " pod="calico-system/calico-node-sp2d7" Sep 13 00:11:22.622504 kubelet[2556]: I0913 00:11:22.622453 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/424c836b-c9d5-452a-9d1a-d8fd765df63d-lib-modules\") pod \"calico-node-sp2d7\" (UID: \"424c836b-c9d5-452a-9d1a-d8fd765df63d\") " pod="calico-system/calico-node-sp2d7" Sep 13 00:11:22.622504 kubelet[2556]: I0913 00:11:22.622476 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/424c836b-c9d5-452a-9d1a-d8fd765df63d-tigera-ca-bundle\") pod \"calico-node-sp2d7\" (UID: \"424c836b-c9d5-452a-9d1a-d8fd765df63d\") " pod="calico-system/calico-node-sp2d7" Sep 13 00:11:22.622504 kubelet[2556]: I0913 00:11:22.622496 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/424c836b-c9d5-452a-9d1a-d8fd765df63d-xtables-lock\") pod \"calico-node-sp2d7\" (UID: \"424c836b-c9d5-452a-9d1a-d8fd765df63d\") " pod="calico-system/calico-node-sp2d7" Sep 13 00:11:22.622609 kubelet[2556]: I0913 00:11:22.622519 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/424c836b-c9d5-452a-9d1a-d8fd765df63d-policysync\") pod \"calico-node-sp2d7\" (UID: \"424c836b-c9d5-452a-9d1a-d8fd765df63d\") " pod="calico-system/calico-node-sp2d7" Sep 13 00:11:22.622609 kubelet[2556]: I0913 00:11:22.622558 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/424c836b-c9d5-452a-9d1a-d8fd765df63d-var-run-calico\") pod \"calico-node-sp2d7\" (UID: \"424c836b-c9d5-452a-9d1a-d8fd765df63d\") " pod="calico-system/calico-node-sp2d7" Sep 13 00:11:22.622609 kubelet[2556]: I0913 00:11:22.622602 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/424c836b-c9d5-452a-9d1a-d8fd765df63d-node-certs\") pod \"calico-node-sp2d7\" (UID: \"424c836b-c9d5-452a-9d1a-d8fd765df63d\") " pod="calico-system/calico-node-sp2d7" Sep 13 00:11:22.622708 kubelet[2556]: I0913 00:11:22.622636 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/424c836b-c9d5-452a-9d1a-d8fd765df63d-cni-log-dir\") pod \"calico-node-sp2d7\" (UID: \"424c836b-c9d5-452a-9d1a-d8fd765df63d\") " pod="calico-system/calico-node-sp2d7" Sep 13 00:11:22.704502 kubelet[2556]: E0913 00:11:22.704442 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6249r" podUID="093a73a0-183b-402a-9ff9-9d907062e092" Sep 13 00:11:22.723420 kubelet[2556]: I0913 00:11:22.723350 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/093a73a0-183b-402a-9ff9-9d907062e092-kubelet-dir\") pod \"csi-node-driver-6249r\" (UID: \"093a73a0-183b-402a-9ff9-9d907062e092\") " pod="calico-system/csi-node-driver-6249r" Sep 13 00:11:22.723420 kubelet[2556]: I0913 00:11:22.723417 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/093a73a0-183b-402a-9ff9-9d907062e092-socket-dir\") pod \"csi-node-driver-6249r\" (UID: \"093a73a0-183b-402a-9ff9-9d907062e092\") " pod="calico-system/csi-node-driver-6249r" Sep 13 00:11:22.723684 kubelet[2556]: I0913 00:11:22.723463 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/093a73a0-183b-402a-9ff9-9d907062e092-varrun\") pod \"csi-node-driver-6249r\" (UID: \"093a73a0-183b-402a-9ff9-9d907062e092\") " pod="calico-system/csi-node-driver-6249r" Sep 13 00:11:22.723684 kubelet[2556]: I0913 00:11:22.723506 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/093a73a0-183b-402a-9ff9-9d907062e092-registration-dir\") pod \"csi-node-driver-6249r\" (UID: \"093a73a0-183b-402a-9ff9-9d907062e092\") " pod="calico-system/csi-node-driver-6249r" Sep 13 00:11:22.723684 kubelet[2556]: I0913 00:11:22.723521 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm9g8\" (UniqueName: \"kubernetes.io/projected/093a73a0-183b-402a-9ff9-9d907062e092-kube-api-access-wm9g8\") pod \"csi-node-driver-6249r\" (UID: \"093a73a0-183b-402a-9ff9-9d907062e092\") " pod="calico-system/csi-node-driver-6249r" Sep 13 00:11:22.728040 kubelet[2556]: E0913 00:11:22.727997 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.728040 kubelet[2556]: W0913 00:11:22.728019 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.728040 kubelet[2556]: E0913 00:11:22.728050 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.731886 kubelet[2556]: E0913 00:11:22.731763 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.731886 kubelet[2556]: W0913 00:11:22.731880 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.732038 kubelet[2556]: E0913 00:11:22.731911 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.824795 kubelet[2556]: E0913 00:11:22.824655 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.824795 kubelet[2556]: W0913 00:11:22.824687 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.824795 kubelet[2556]: E0913 00:11:22.824715 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:22.825117 kubelet[2556]: E0913 00:11:22.825087 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.825158 kubelet[2556]: W0913 00:11:22.825112 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.825158 kubelet[2556]: E0913 00:11:22.825150 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.825701 kubelet[2556]: E0913 00:11:22.825561 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.825701 kubelet[2556]: W0913 00:11:22.825575 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.825701 kubelet[2556]: E0913 00:11:22.825590 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.825877 kubelet[2556]: E0913 00:11:22.825862 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.825877 kubelet[2556]: W0913 00:11:22.825874 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.825929 kubelet[2556]: E0913 00:11:22.825889 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.826164 kubelet[2556]: E0913 00:11:22.826138 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.826164 kubelet[2556]: W0913 00:11:22.826148 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.826218 kubelet[2556]: E0913 00:11:22.826183 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.826384 kubelet[2556]: E0913 00:11:22.826373 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.826384 kubelet[2556]: W0913 00:11:22.826383 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.826436 kubelet[2556]: E0913 00:11:22.826403 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:22.826587 kubelet[2556]: E0913 00:11:22.826575 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.826587 kubelet[2556]: W0913 00:11:22.826583 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.826639 kubelet[2556]: E0913 00:11:22.826626 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.826797 kubelet[2556]: E0913 00:11:22.826770 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.826797 kubelet[2556]: W0913 00:11:22.826790 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.826848 kubelet[2556]: E0913 00:11:22.826802 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.827007 kubelet[2556]: E0913 00:11:22.826996 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.827007 kubelet[2556]: W0913 00:11:22.827005 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.827078 kubelet[2556]: E0913 00:11:22.827002 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:22.827209 kubelet[2556]: E0913 00:11:22.827191 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.827209 kubelet[2556]: W0913 00:11:22.827205 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.827274 kubelet[2556]: E0913 00:11:22.827219 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.828421 kubelet[2556]: E0913 00:11:22.827013 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:22.828421 kubelet[2556]: E0913 00:11:22.827442 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.828421 kubelet[2556]: W0913 00:11:22.827479 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.828421 kubelet[2556]: E0913 00:11:22.827491 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.828421 kubelet[2556]: E0913 00:11:22.827726 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.828421 kubelet[2556]: W0913 00:11:22.827735 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.828421 kubelet[2556]: E0913 00:11:22.827816 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.828421 kubelet[2556]: E0913 00:11:22.827976 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.828421 kubelet[2556]: W0913 00:11:22.827985 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.828421 kubelet[2556]: E0913 00:11:22.827996 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.828738 containerd[1465]: time="2025-09-13T00:11:22.827681844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84c4ccf776-bfpb4,Uid:09e76429-c569-4597-8a56-c159082bb53d,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:22.829131 kubelet[2556]: E0913 00:11:22.828367 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.829131 kubelet[2556]: W0913 00:11:22.828382 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.829131 kubelet[2556]: E0913 00:11:22.828423 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:22.829131 kubelet[2556]: E0913 00:11:22.828658 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.829131 kubelet[2556]: W0913 00:11:22.828669 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.829131 kubelet[2556]: E0913 00:11:22.828824 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.829131 kubelet[2556]: E0913 00:11:22.828947 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.829131 kubelet[2556]: W0913 00:11:22.828959 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.829131 kubelet[2556]: E0913 00:11:22.828990 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.829477 kubelet[2556]: E0913 00:11:22.829256 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.829477 kubelet[2556]: W0913 00:11:22.829266 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.829477 kubelet[2556]: E0913 00:11:22.829292 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.829606 kubelet[2556]: E0913 00:11:22.829586 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.829606 kubelet[2556]: W0913 00:11:22.829601 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.829655 kubelet[2556]: E0913 00:11:22.829637 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.829904 kubelet[2556]: E0913 00:11:22.829889 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.829904 kubelet[2556]: W0913 00:11:22.829902 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.830009 kubelet[2556]: E0913 00:11:22.829919 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:22.830159 kubelet[2556]: E0913 00:11:22.830147 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.830187 kubelet[2556]: W0913 00:11:22.830159 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.830187 kubelet[2556]: E0913 00:11:22.830172 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.830391 kubelet[2556]: E0913 00:11:22.830376 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.830391 kubelet[2556]: W0913 00:11:22.830386 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.830488 kubelet[2556]: E0913 00:11:22.830420 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.830565 kubelet[2556]: E0913 00:11:22.830554 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.830565 kubelet[2556]: W0913 00:11:22.830562 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.830634 kubelet[2556]: E0913 00:11:22.830587 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.830843 kubelet[2556]: E0913 00:11:22.830739 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.830843 kubelet[2556]: W0913 00:11:22.830752 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.830843 kubelet[2556]: E0913 00:11:22.830772 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.831130 kubelet[2556]: E0913 00:11:22.831115 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.831130 kubelet[2556]: W0913 00:11:22.831127 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.831208 kubelet[2556]: E0913 00:11:22.831170 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:22.831438 kubelet[2556]: E0913 00:11:22.831397 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.831438 kubelet[2556]: W0913 00:11:22.831422 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.831438 kubelet[2556]: E0913 00:11:22.831434 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.901067 containerd[1465]: time="2025-09-13T00:11:22.901005015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sp2d7,Uid:424c836b-c9d5-452a-9d1a-d8fd765df63d,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:22.926943 kubelet[2556]: E0913 00:11:22.926883 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.926943 kubelet[2556]: W0913 00:11:22.926911 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.927180 kubelet[2556]: E0913 00:11:22.926980 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.965115 kubelet[2556]: E0913 00:11:22.964904 2556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:22.965115 kubelet[2556]: W0913 00:11:22.964986 2556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:22.965115 kubelet[2556]: E0913 00:11:22.965011 2556 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:22.998283 containerd[1465]: time="2025-09-13T00:11:22.998020117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:22.998615 containerd[1465]: time="2025-09-13T00:11:22.998454636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:22.998894 containerd[1465]: time="2025-09-13T00:11:22.998680764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:22.998982 containerd[1465]: time="2025-09-13T00:11:22.998874706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:23.023013 containerd[1465]: time="2025-09-13T00:11:23.022541905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:23.023746 containerd[1465]: time="2025-09-13T00:11:23.023597234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:23.023746 containerd[1465]: time="2025-09-13T00:11:23.023651185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:23.023868 containerd[1465]: time="2025-09-13T00:11:23.023769580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:23.026036 systemd[1]: Started cri-containerd-cf7c61e57fb7c5f8e047c9f121557db32d3f69eab9f7d91b33efeb4724776d69.scope - libcontainer container cf7c61e57fb7c5f8e047c9f121557db32d3f69eab9f7d91b33efeb4724776d69. Sep 13 00:11:23.048087 systemd[1]: Started cri-containerd-239337a09257ddebb0a0bce81218537a663bc505d953889e29ed6132c1b86bb4.scope - libcontainer container 239337a09257ddebb0a0bce81218537a663bc505d953889e29ed6132c1b86bb4. Sep 13 00:11:23.081601 containerd[1465]: time="2025-09-13T00:11:23.081289104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sp2d7,Uid:424c836b-c9d5-452a-9d1a-d8fd765df63d,Namespace:calico-system,Attempt:0,} returns sandbox id \"239337a09257ddebb0a0bce81218537a663bc505d953889e29ed6132c1b86bb4\"" Sep 13 00:11:23.085854 containerd[1465]: time="2025-09-13T00:11:23.085745416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:11:23.090164 containerd[1465]: time="2025-09-13T00:11:23.090121421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84c4ccf776-bfpb4,Uid:09e76429-c569-4597-8a56-c159082bb53d,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf7c61e57fb7c5f8e047c9f121557db32d3f69eab9f7d91b33efeb4724776d69\"" Sep 13 00:11:23.091220 kubelet[2556]: E0913 00:11:23.091196 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:24.500749 kubelet[2556]: E0913 00:11:24.500677 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6249r" podUID="093a73a0-183b-402a-9ff9-9d907062e092" Sep 13 00:11:24.943459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2155837485.mount: Deactivated successfully. 
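
The burst of "FlexVolume: driver call failed" triplets above (driver-call.go, then plugins.go) is the kubelet's volume-plugin prober repeatedly exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument "init" before that binary has been installed. The FlexVolume contract expects the driver to answer on stdout with a JSON status, so an executable that is missing or prints nothing produces exactly the "unexpected end of JSON input" error seen here. A minimal illustration of the handshake (not Calico's actual uds driver):

    package main

    import (
        "fmt"
        "os"
    )

    // A FlexVolume driver must reply to "init" with a JSON status on stdout.
    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            fmt.Println(`{"status":"Success","capabilities":{"attach":false}}`)
            return
        }
        fmt.Println(`{"status":"Not supported"}`)
        os.Exit(1)
    }

The pod2daemon-flexvol image being pulled in the surrounding lines is what eventually supplies the real binary.
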
Sep 13 00:11:25.011958 containerd[1465]: time="2025-09-13T00:11:25.011891316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:25.013064 containerd[1465]: time="2025-09-13T00:11:25.012989292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 13 00:11:25.014150 containerd[1465]: time="2025-09-13T00:11:25.014084391Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:25.017984 containerd[1465]: time="2025-09-13T00:11:25.017925580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:25.018668 containerd[1465]: time="2025-09-13T00:11:25.018630027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.932801369s" Sep 13 00:11:25.018668 containerd[1465]: time="2025-09-13T00:11:25.018665490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:11:25.020221 containerd[1465]: time="2025-09-13T00:11:25.020188438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:11:25.021726 containerd[1465]: time="2025-09-13T00:11:25.021681976Z" level=info msg="CreateContainer within sandbox \"239337a09257ddebb0a0bce81218537a663bc505d953889e29ed6132c1b86bb4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:11:25.236954 containerd[1465]: time="2025-09-13T00:11:25.236480173Z" level=info msg="CreateContainer within sandbox \"239337a09257ddebb0a0bce81218537a663bc505d953889e29ed6132c1b86bb4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3e045a791d4cd8c4ffe18c6ba1b40b2940f6c1b3108057f06cb06b7f0a5c3b24\"" Sep 13 00:11:25.237447 containerd[1465]: time="2025-09-13T00:11:25.237399882Z" level=info msg="StartContainer for \"3e045a791d4cd8c4ffe18c6ba1b40b2940f6c1b3108057f06cb06b7f0a5c3b24\"" Sep 13 00:11:25.274062 systemd[1]: Started cri-containerd-3e045a791d4cd8c4ffe18c6ba1b40b2940f6c1b3108057f06cb06b7f0a5c3b24.scope - libcontainer container 3e045a791d4cd8c4ffe18c6ba1b40b2940f6c1b3108057f06cb06b7f0a5c3b24. Sep 13 00:11:25.311568 containerd[1465]: time="2025-09-13T00:11:25.311519049Z" level=info msg="StartContainer for \"3e045a791d4cd8c4ffe18c6ba1b40b2940f6c1b3108057f06cb06b7f0a5c3b24\" returns successfully" Sep 13 00:11:25.324357 systemd[1]: cri-containerd-3e045a791d4cd8c4ffe18c6ba1b40b2940f6c1b3108057f06cb06b7f0a5c3b24.scope: Deactivated successfully. 
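
The flexvol-driver container reports StartContainer success at 00:11:25.311 and its scope is deactivated at 00:11:25.324, thirteen milliseconds later, which fits its role as Calico's init step: install the uds binary into the FlexVolume plugin directory and exit (the driver-call failures do not recur later in this log). Roughly the effect, as a sketch; the destination path is the one visible in this log, while the source path inside the image is hypothetical:

    package main

    import (
        "io"
        "os"
        "path/filepath"
    )

    func main() {
        dst := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
        src := "/usr/local/bin/flexvol" // hypothetical path inside the image
        _ = os.MkdirAll(filepath.Dir(dst), 0o755)
        in, _ := os.Open(src)
        defer in.Close()
        out, _ := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
        defer out.Close()
        _, _ = io.Copy(out, in)
    }
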
Sep 13 00:11:25.419586 containerd[1465]: time="2025-09-13T00:11:25.419497704Z" level=info msg="shim disconnected" id=3e045a791d4cd8c4ffe18c6ba1b40b2940f6c1b3108057f06cb06b7f0a5c3b24 namespace=k8s.io Sep 13 00:11:25.419586 containerd[1465]: time="2025-09-13T00:11:25.419576465Z" level=warning msg="cleaning up after shim disconnected" id=3e045a791d4cd8c4ffe18c6ba1b40b2940f6c1b3108057f06cb06b7f0a5c3b24 namespace=k8s.io Sep 13 00:11:25.419586 containerd[1465]: time="2025-09-13T00:11:25.419587688Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:11:25.919566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e045a791d4cd8c4ffe18c6ba1b40b2940f6c1b3108057f06cb06b7f0a5c3b24-rootfs.mount: Deactivated successfully. Sep 13 00:11:26.500058 kubelet[2556]: E0913 00:11:26.499980 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6249r" podUID="093a73a0-183b-402a-9ff9-9d907062e092" Sep 13 00:11:28.500679 kubelet[2556]: E0913 00:11:28.500595 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6249r" podUID="093a73a0-183b-402a-9ff9-9d907062e092" Sep 13 00:11:28.903128 containerd[1465]: time="2025-09-13T00:11:28.903058999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:28.903731 containerd[1465]: time="2025-09-13T00:11:28.903688343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548" Sep 13 00:11:28.905520 containerd[1465]: time="2025-09-13T00:11:28.905484568Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:28.907672 containerd[1465]: time="2025-09-13T00:11:28.907633571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:28.908306 containerd[1465]: time="2025-09-13T00:11:28.908251242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.888030658s" Sep 13 00:11:28.908306 containerd[1465]: time="2025-09-13T00:11:28.908289230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:11:28.916742 containerd[1465]: time="2025-09-13T00:11:28.916703105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:11:28.936653 containerd[1465]: time="2025-09-13T00:11:28.936596450Z" level=info msg="CreateContainer within sandbox \"cf7c61e57fb7c5f8e047c9f121557db32d3f69eab9f7d91b33efeb4724776d69\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" 
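
Note that containerd records each pulled image under both its repo tag (ghcr.io/flatcar/calico/typha:v3.30.3) and its repo digest, so subsequent pulls can be verified by content address rather than by a mutable tag. Splitting such a reference programmatically can be done with the github.com/distribution/reference library; the API below is quoted from memory, so treat it as an assumption:

    package main

    import (
        "fmt"

        "github.com/distribution/reference"
    )

    func main() {
        ref, _ := reference.ParseNormalizedNamed(
            "ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa")
        if c, ok := ref.(reference.Canonical); ok {
            fmt.Println(reference.Domain(ref), reference.Path(ref), c.Digest())
            // ghcr.io flatcar/calico/typha sha256:f4a3...
        }
    }
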
Sep 13 00:11:28.951591 containerd[1465]: time="2025-09-13T00:11:28.951552243Z" level=info msg="CreateContainer within sandbox \"cf7c61e57fb7c5f8e047c9f121557db32d3f69eab9f7d91b33efeb4724776d69\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a8f5a54c38b71b1503e8f04c54129265e8e4a729746017758e8ecf8a08f29c53\"" Sep 13 00:11:28.955355 containerd[1465]: time="2025-09-13T00:11:28.955311191Z" level=info msg="StartContainer for \"a8f5a54c38b71b1503e8f04c54129265e8e4a729746017758e8ecf8a08f29c53\"" Sep 13 00:11:28.990043 systemd[1]: Started cri-containerd-a8f5a54c38b71b1503e8f04c54129265e8e4a729746017758e8ecf8a08f29c53.scope - libcontainer container a8f5a54c38b71b1503e8f04c54129265e8e4a729746017758e8ecf8a08f29c53. Sep 13 00:11:29.036632 containerd[1465]: time="2025-09-13T00:11:29.036573254Z" level=info msg="StartContainer for \"a8f5a54c38b71b1503e8f04c54129265e8e4a729746017758e8ecf8a08f29c53\" returns successfully" Sep 13 00:11:29.573392 kubelet[2556]: E0913 00:11:29.573347 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:29.598922 kubelet[2556]: I0913 00:11:29.598766 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84c4ccf776-bfpb4" podStartSLOduration=1.775213732 podStartE2EDuration="7.59874115s" podCreationTimestamp="2025-09-13 00:11:22 +0000 UTC" firstStartedPulling="2025-09-13 00:11:23.092360444 +0000 UTC m=+18.682180966" lastFinishedPulling="2025-09-13 00:11:28.915887862 +0000 UTC m=+24.505708384" observedRunningTime="2025-09-13 00:11:29.598273888 +0000 UTC m=+25.188094410" watchObservedRunningTime="2025-09-13 00:11:29.59874115 +0000 UTC m=+25.188561672" Sep 13 00:11:30.500843 kubelet[2556]: E0913 00:11:30.500795 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6249r" podUID="093a73a0-183b-402a-9ff9-9d907062e092" Sep 13 00:11:30.574392 kubelet[2556]: I0913 00:11:30.574354 2556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:11:30.574970 kubelet[2556]: E0913 00:11:30.574692 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:32.500502 kubelet[2556]: E0913 00:11:32.500429 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6249r" podUID="093a73a0-183b-402a-9ff9-9d907062e092" Sep 13 00:11:33.668164 containerd[1465]: time="2025-09-13T00:11:33.668104826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:33.668958 containerd[1465]: time="2025-09-13T00:11:33.668910565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:11:33.670221 containerd[1465]: time="2025-09-13T00:11:33.670188370Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:33.672848 containerd[1465]: time="2025-09-13T00:11:33.672769480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:33.673953 containerd[1465]: time="2025-09-13T00:11:33.673904264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.757153401s" Sep 13 00:11:33.674031 containerd[1465]: time="2025-09-13T00:11:33.673959927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:11:33.676434 containerd[1465]: time="2025-09-13T00:11:33.676385081Z" level=info msg="CreateContainer within sandbox \"239337a09257ddebb0a0bce81218537a663bc505d953889e29ed6132c1b86bb4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:11:33.697487 containerd[1465]: time="2025-09-13T00:11:33.697418698Z" level=info msg="CreateContainer within sandbox \"239337a09257ddebb0a0bce81218537a663bc505d953889e29ed6132c1b86bb4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4eaaca74a4f31517d2c134432b6d0ef272e2f5268c59acb9e969705ad7626f77\"" Sep 13 00:11:33.698192 containerd[1465]: time="2025-09-13T00:11:33.698150508Z" level=info msg="StartContainer for \"4eaaca74a4f31517d2c134432b6d0ef272e2f5268c59acb9e969705ad7626f77\"" Sep 13 00:11:33.733935 systemd[1]: Started cri-containerd-4eaaca74a4f31517d2c134432b6d0ef272e2f5268c59acb9e969705ad7626f77.scope - libcontainer container 4eaaca74a4f31517d2c134432b6d0ef272e2f5268c59acb9e969705ad7626f77. Sep 13 00:11:33.776074 containerd[1465]: time="2025-09-13T00:11:33.776010836Z" level=info msg="StartContainer for \"4eaaca74a4f31517d2c134432b6d0ef272e2f5268c59acb9e969705ad7626f77\" returns successfully" Sep 13 00:11:34.500300 kubelet[2556]: E0913 00:11:34.500218 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6249r" podUID="093a73a0-183b-402a-9ff9-9d907062e092" Sep 13 00:11:35.252980 systemd[1]: cri-containerd-4eaaca74a4f31517d2c134432b6d0ef272e2f5268c59acb9e969705ad7626f77.scope: Deactivated successfully. Sep 13 00:11:35.272018 kubelet[2556]: I0913 00:11:35.271979 2556 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:11:35.276842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4eaaca74a4f31517d2c134432b6d0ef272e2f5268c59acb9e969705ad7626f77-rootfs.mount: Deactivated successfully. 
Sep 13 00:11:35.311479 kubelet[2556]: I0913 00:11:35.309534 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/572e4883-43be-47f8-9d71-340af499cdf4-config-volume\") pod \"coredns-7c65d6cfc9-8ddht\" (UID: \"572e4883-43be-47f8-9d71-340af499cdf4\") " pod="kube-system/coredns-7c65d6cfc9-8ddht" Sep 13 00:11:35.311479 kubelet[2556]: I0913 00:11:35.309569 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4mq9\" (UniqueName: \"kubernetes.io/projected/efb62d5d-3a33-4337-b3ca-e67aed5932c5-kube-api-access-f4mq9\") pod \"coredns-7c65d6cfc9-t2khr\" (UID: \"efb62d5d-3a33-4337-b3ca-e67aed5932c5\") " pod="kube-system/coredns-7c65d6cfc9-t2khr" Sep 13 00:11:35.311479 kubelet[2556]: I0913 00:11:35.309590 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkskg\" (UniqueName: \"kubernetes.io/projected/572e4883-43be-47f8-9d71-340af499cdf4-kube-api-access-nkskg\") pod \"coredns-7c65d6cfc9-8ddht\" (UID: \"572e4883-43be-47f8-9d71-340af499cdf4\") " pod="kube-system/coredns-7c65d6cfc9-8ddht" Sep 13 00:11:35.311479 kubelet[2556]: I0913 00:11:35.309605 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gx5v\" (UniqueName: \"kubernetes.io/projected/7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc-kube-api-access-6gx5v\") pod \"goldmane-7988f88666-6x2wp\" (UID: \"7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc\") " pod="calico-system/goldmane-7988f88666-6x2wp" Sep 13 00:11:35.311479 kubelet[2556]: I0913 00:11:35.309621 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc-goldmane-ca-bundle\") pod \"goldmane-7988f88666-6x2wp\" (UID: \"7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc\") " pod="calico-system/goldmane-7988f88666-6x2wp" Sep 13 00:11:35.310879 systemd[1]: Created slice kubepods-burstable-pod572e4883_43be_47f8_9d71_340af499cdf4.slice - libcontainer container kubepods-burstable-pod572e4883_43be_47f8_9d71_340af499cdf4.slice. 
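Each "Created slice" line encodes the pod's QoS tier and UID directly in the systemd unit name, with the dashes in the UID escaped to underscores because systemd reserves "-" as the hierarchy separator inside slice names. A small sketch of that convention for the burstable and besteffort tiers seen here; the helper name is made up, and kubelet's real implementation lives in its cgroup manager:

package main

import (
	"fmt"
	"strings"
)

// sliceNameFor mirrors the naming visible in the log: QoS tier plus pod UID,
// with "-" escaped to "_". A sketch of the convention, not kubelet's code.
func sliceNameFor(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Reproduces the unit created for coredns-7c65d6cfc9-8ddht above.
	fmt.Println(sliceNameFor("burstable", "572e4883-43be-47f8-9d71-340af499cdf4"))
	// Output: kubepods-burstable-pod572e4883_43be_47f8_9d71_340af499cdf4.slice
}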
Sep 13 00:11:35.311887 kubelet[2556]: I0913 00:11:35.309635 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc-goldmane-key-pair\") pod \"goldmane-7988f88666-6x2wp\" (UID: \"7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc\") " pod="calico-system/goldmane-7988f88666-6x2wp" Sep 13 00:11:35.311887 kubelet[2556]: I0913 00:11:35.309651 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5s4x\" (UniqueName: \"kubernetes.io/projected/093eb2fa-5825-44f6-93c6-61d3114099e0-kube-api-access-r5s4x\") pod \"calico-apiserver-58f9bc44bc-q854x\" (UID: \"093eb2fa-5825-44f6-93c6-61d3114099e0\") " pod="calico-apiserver/calico-apiserver-58f9bc44bc-q854x" Sep 13 00:11:35.311887 kubelet[2556]: I0913 00:11:35.309666 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc-config\") pod \"goldmane-7988f88666-6x2wp\" (UID: \"7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc\") " pod="calico-system/goldmane-7988f88666-6x2wp" Sep 13 00:11:35.311887 kubelet[2556]: I0913 00:11:35.309682 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/efb62d5d-3a33-4337-b3ca-e67aed5932c5-config-volume\") pod \"coredns-7c65d6cfc9-t2khr\" (UID: \"efb62d5d-3a33-4337-b3ca-e67aed5932c5\") " pod="kube-system/coredns-7c65d6cfc9-t2khr" Sep 13 00:11:35.311887 kubelet[2556]: I0913 00:11:35.309696 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/093eb2fa-5825-44f6-93c6-61d3114099e0-calico-apiserver-certs\") pod \"calico-apiserver-58f9bc44bc-q854x\" (UID: \"093eb2fa-5825-44f6-93c6-61d3114099e0\") " pod="calico-apiserver/calico-apiserver-58f9bc44bc-q854x" Sep 13 00:11:35.317894 systemd[1]: Created slice kubepods-besteffort-pod18e54af7_8bec_40b9_9191_5e12c28dbbdd.slice - libcontainer container kubepods-besteffort-pod18e54af7_8bec_40b9_9191_5e12c28dbbdd.slice. Sep 13 00:11:35.322126 systemd[1]: Created slice kubepods-besteffort-pod7a177557_980a_4069_9ba1_1de68d33d2df.slice - libcontainer container kubepods-besteffort-pod7a177557_980a_4069_9ba1_1de68d33d2df.slice. Sep 13 00:11:35.326827 systemd[1]: Created slice kubepods-besteffort-pod093eb2fa_5825_44f6_93c6_61d3114099e0.slice - libcontainer container kubepods-besteffort-pod093eb2fa_5825_44f6_93c6_61d3114099e0.slice. Sep 13 00:11:35.330680 systemd[1]: Created slice kubepods-besteffort-pod7f30d77d_673c_4aff_b8fd_abd4bc5cd3dc.slice - libcontainer container kubepods-besteffort-pod7f30d77d_673c_4aff_b8fd_abd4bc5cd3dc.slice. Sep 13 00:11:35.334895 systemd[1]: Created slice kubepods-besteffort-pod9e768d90_8245_40d6_9d8b_1cd06fa1a338.slice - libcontainer container kubepods-besteffort-pod9e768d90_8245_40d6_9d8b_1cd06fa1a338.slice. Sep 13 00:11:35.339001 systemd[1]: Created slice kubepods-burstable-podefb62d5d_3a33_4337_b3ca_e67aed5932c5.slice - libcontainer container kubepods-burstable-podefb62d5d_3a33_4337_b3ca_e67aed5932c5.slice. 
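Whether a pod lands in kubepods-burstable-*.slice or kubepods-besteffort-*.slice follows from its resource spec. A much-simplified sketch of the Kubernetes QoS rules behind that split, assuming a pared-down resource type; the real logic, including request-defaulting and per-resource checks, lives in kubelet's qos package:

package main

import "fmt"

// Resources is a pared-down stand-in for a container's resource section.
type Resources struct {
	RequestsCPU, RequestsMem string
	LimitsCPU, LimitsMem     string
}

// qosClass sketches the rules: no requests or limits anywhere gives BestEffort;
// requests equal to set limits for every resource of every container gives
// Guaranteed; anything in between gives Burstable.
func qosClass(containers []Resources) string {
	anySet, allEqual := false, true
	for _, r := range containers {
		if r.RequestsCPU != "" || r.RequestsMem != "" || r.LimitsCPU != "" || r.LimitsMem != "" {
			anySet = true
		}
		if r.LimitsCPU == "" || r.LimitsMem == "" ||
			r.RequestsCPU != r.LimitsCPU || r.RequestsMem != r.LimitsMem {
			allEqual = false
		}
	}
	switch {
	case !anySet:
		return "besteffort"
	case allEqual:
		return "guaranteed"
	default:
		return "burstable"
	}
}

func main() {
	fmt.Println(qosClass([]Resources{{}}))                     // besteffort
	fmt.Println(qosClass([]Resources{{RequestsCPU: "100m"}})) // burstable
}

This matches the split above: the coredns pods (which ship with resource requests) get burstable slices, while the calico-apiserver, goldmane, and whisker pods get besteffort ones.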
Sep 13 00:11:35.411071 kubelet[2556]: I0913 00:11:35.410976 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a177557-980a-4069-9ba1-1de68d33d2df-tigera-ca-bundle\") pod \"calico-kube-controllers-6bb584c8c5-4nwz8\" (UID: \"7a177557-980a-4069-9ba1-1de68d33d2df\") " pod="calico-system/calico-kube-controllers-6bb584c8c5-4nwz8" Sep 13 00:11:35.411376 kubelet[2556]: I0913 00:11:35.411092 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtrcz\" (UniqueName: \"kubernetes.io/projected/9e768d90-8245-40d6-9d8b-1cd06fa1a338-kube-api-access-rtrcz\") pod \"whisker-8bcf56c4d-dgrfq\" (UID: \"9e768d90-8245-40d6-9d8b-1cd06fa1a338\") " pod="calico-system/whisker-8bcf56c4d-dgrfq" Sep 13 00:11:35.411376 kubelet[2556]: I0913 00:11:35.411158 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk8qq\" (UniqueName: \"kubernetes.io/projected/18e54af7-8bec-40b9-9191-5e12c28dbbdd-kube-api-access-kk8qq\") pod \"calico-apiserver-58f9bc44bc-gqfmq\" (UID: \"18e54af7-8bec-40b9-9191-5e12c28dbbdd\") " pod="calico-apiserver/calico-apiserver-58f9bc44bc-gqfmq" Sep 13 00:11:35.411523 kubelet[2556]: I0913 00:11:35.411401 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96fr9\" (UniqueName: \"kubernetes.io/projected/7a177557-980a-4069-9ba1-1de68d33d2df-kube-api-access-96fr9\") pod \"calico-kube-controllers-6bb584c8c5-4nwz8\" (UID: \"7a177557-980a-4069-9ba1-1de68d33d2df\") " pod="calico-system/calico-kube-controllers-6bb584c8c5-4nwz8" Sep 13 00:11:35.411523 kubelet[2556]: I0913 00:11:35.411484 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9e768d90-8245-40d6-9d8b-1cd06fa1a338-whisker-backend-key-pair\") pod \"whisker-8bcf56c4d-dgrfq\" (UID: \"9e768d90-8245-40d6-9d8b-1cd06fa1a338\") " pod="calico-system/whisker-8bcf56c4d-dgrfq" Sep 13 00:11:35.411523 kubelet[2556]: I0913 00:11:35.411503 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e768d90-8245-40d6-9d8b-1cd06fa1a338-whisker-ca-bundle\") pod \"whisker-8bcf56c4d-dgrfq\" (UID: \"9e768d90-8245-40d6-9d8b-1cd06fa1a338\") " pod="calico-system/whisker-8bcf56c4d-dgrfq" Sep 13 00:11:35.411603 kubelet[2556]: I0913 00:11:35.411553 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/18e54af7-8bec-40b9-9191-5e12c28dbbdd-calico-apiserver-certs\") pod \"calico-apiserver-58f9bc44bc-gqfmq\" (UID: \"18e54af7-8bec-40b9-9191-5e12c28dbbdd\") " pod="calico-apiserver/calico-apiserver-58f9bc44bc-gqfmq" Sep 13 00:11:35.453459 containerd[1465]: time="2025-09-13T00:11:35.453382542Z" level=info msg="shim disconnected" id=4eaaca74a4f31517d2c134432b6d0ef272e2f5268c59acb9e969705ad7626f77 namespace=k8s.io Sep 13 00:11:35.453459 containerd[1465]: time="2025-09-13T00:11:35.453451922Z" level=warning msg="cleaning up after shim disconnected" id=4eaaca74a4f31517d2c134432b6d0ef272e2f5268c59acb9e969705ad7626f77 namespace=k8s.io Sep 13 00:11:35.453459 containerd[1465]: time="2025-09-13T00:11:35.453463917Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 
00:11:35.616302 kubelet[2556]: E0913 00:11:35.616156 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:35.641405 kubelet[2556]: E0913 00:11:35.641339 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:35.663406 containerd[1465]: time="2025-09-13T00:11:35.663307248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8ddht,Uid:572e4883-43be-47f8-9d71-340af499cdf4,Namespace:kube-system,Attempt:0,}" Sep 13 00:11:35.663582 containerd[1465]: time="2025-09-13T00:11:35.663487962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6x2wp,Uid:7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:35.663813 containerd[1465]: time="2025-09-13T00:11:35.663739580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f9bc44bc-q854x,Uid:093eb2fa-5825-44f6-93c6-61d3114099e0,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:11:35.664037 containerd[1465]: time="2025-09-13T00:11:35.663977009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t2khr,Uid:efb62d5d-3a33-4337-b3ca-e67aed5932c5,Namespace:kube-system,Attempt:0,}" Sep 13 00:11:35.696329 containerd[1465]: time="2025-09-13T00:11:35.696283810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:11:35.883025 containerd[1465]: time="2025-09-13T00:11:35.882851723Z" level=error msg="Failed to destroy network for sandbox \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.885202 containerd[1465]: time="2025-09-13T00:11:35.885139697Z" level=error msg="encountered an error cleaning up failed sandbox \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.885325 containerd[1465]: time="2025-09-13T00:11:35.885223236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6x2wp,Uid:7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.887757 containerd[1465]: time="2025-09-13T00:11:35.887584137Z" level=error msg="Failed to destroy network for sandbox \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.888134 containerd[1465]: time="2025-09-13T00:11:35.888097682Z" level=error msg="encountered an error cleaning up failed sandbox 
\"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.888196 containerd[1465]: time="2025-09-13T00:11:35.888170930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8ddht,Uid:572e4883-43be-47f8-9d71-340af499cdf4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.889880 containerd[1465]: time="2025-09-13T00:11:35.889812781Z" level=error msg="Failed to destroy network for sandbox \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.890317 containerd[1465]: time="2025-09-13T00:11:35.890284162Z" level=error msg="encountered an error cleaning up failed sandbox \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.890374 containerd[1465]: time="2025-09-13T00:11:35.890346057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t2khr,Uid:efb62d5d-3a33-4337-b3ca-e67aed5932c5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.892323 containerd[1465]: time="2025-09-13T00:11:35.892269966Z" level=error msg="Failed to destroy network for sandbox \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.892734 containerd[1465]: time="2025-09-13T00:11:35.892696507Z" level=error msg="encountered an error cleaning up failed sandbox \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.892768 containerd[1465]: time="2025-09-13T00:11:35.892752881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f9bc44bc-q854x,Uid:093eb2fa-5825-44f6-93c6-61d3114099e0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.899050 kubelet[2556]: E0913 00:11:35.898949 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.899050 kubelet[2556]: E0913 00:11:35.898960 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.899161 kubelet[2556]: E0913 00:11:35.899056 2556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58f9bc44bc-q854x" Sep 13 00:11:35.899161 kubelet[2556]: E0913 00:11:35.899084 2556 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58f9bc44bc-q854x" Sep 13 00:11:35.899161 kubelet[2556]: E0913 00:11:35.898953 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.899238 kubelet[2556]: E0913 00:11:35.899154 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58f9bc44bc-q854x_calico-apiserver(093eb2fa-5825-44f6-93c6-61d3114099e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58f9bc44bc-q854x_calico-apiserver(093eb2fa-5825-44f6-93c6-61d3114099e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58f9bc44bc-q854x" podUID="093eb2fa-5825-44f6-93c6-61d3114099e0" Sep 13 00:11:35.899238 kubelet[2556]: E0913 00:11:35.899204 2556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-t2khr" Sep 13 00:11:35.899324 kubelet[2556]: E0913 00:11:35.899236 2556 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-t2khr" Sep 13 00:11:35.899324 kubelet[2556]: E0913 00:11:35.898949 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.899324 kubelet[2556]: E0913 00:11:35.899292 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-t2khr_kube-system(efb62d5d-3a33-4337-b3ca-e67aed5932c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-t2khr_kube-system(efb62d5d-3a33-4337-b3ca-e67aed5932c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-t2khr" podUID="efb62d5d-3a33-4337-b3ca-e67aed5932c5" Sep 13 00:11:35.899418 kubelet[2556]: E0913 00:11:35.899334 2556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-6x2wp" Sep 13 00:11:35.899418 kubelet[2556]: E0913 00:11:35.899364 2556 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-6x2wp" Sep 13 00:11:35.899481 kubelet[2556]: E0913 00:11:35.899414 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-6x2wp_calico-system(7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-6x2wp_calico-system(7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-6x2wp" podUID="7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc" Sep 13 00:11:35.899595 kubelet[2556]: E0913 00:11:35.899555 2556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8ddht" Sep 13 00:11:35.899595 kubelet[2556]: E0913 00:11:35.899587 2556 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8ddht" Sep 13 00:11:35.899921 kubelet[2556]: E0913 00:11:35.899640 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-8ddht_kube-system(572e4883-43be-47f8-9d71-340af499cdf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-8ddht_kube-system(572e4883-43be-47f8-9d71-340af499cdf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8ddht" podUID="572e4883-43be-47f8-9d71-340af499cdf4" Sep 13 00:11:35.920486 containerd[1465]: time="2025-09-13T00:11:35.920414965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f9bc44bc-gqfmq,Uid:18e54af7-8bec-40b9-9191-5e12c28dbbdd,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:11:35.925541 containerd[1465]: time="2025-09-13T00:11:35.925486893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb584c8c5-4nwz8,Uid:7a177557-980a-4069-9ba1-1de68d33d2df,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:35.937407 containerd[1465]: time="2025-09-13T00:11:35.937356581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8bcf56c4d-dgrfq,Uid:9e768d90-8245-40d6-9d8b-1cd06fa1a338,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:35.999091 containerd[1465]: time="2025-09-13T00:11:35.997625722Z" level=error msg="Failed to destroy network for sandbox \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.999091 containerd[1465]: time="2025-09-13T00:11:35.998017813Z" level=error msg="encountered an error cleaning up failed sandbox \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.999091 containerd[1465]: 
time="2025-09-13T00:11:35.998062002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f9bc44bc-gqfmq,Uid:18e54af7-8bec-40b9-9191-5e12c28dbbdd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.999391 kubelet[2556]: E0913 00:11:35.998310 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:35.999391 kubelet[2556]: E0913 00:11:35.998383 2556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58f9bc44bc-gqfmq" Sep 13 00:11:35.999391 kubelet[2556]: E0913 00:11:35.998403 2556 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58f9bc44bc-gqfmq" Sep 13 00:11:35.999520 kubelet[2556]: E0913 00:11:35.998449 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58f9bc44bc-gqfmq_calico-apiserver(18e54af7-8bec-40b9-9191-5e12c28dbbdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58f9bc44bc-gqfmq_calico-apiserver(18e54af7-8bec-40b9-9191-5e12c28dbbdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58f9bc44bc-gqfmq" podUID="18e54af7-8bec-40b9-9191-5e12c28dbbdd" Sep 13 00:11:36.017563 containerd[1465]: time="2025-09-13T00:11:36.017482323Z" level=error msg="Failed to destroy network for sandbox \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.018000 containerd[1465]: time="2025-09-13T00:11:36.017959505Z" level=error msg="encountered an error cleaning up failed sandbox \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.018163 containerd[1465]: time="2025-09-13T00:11:36.018033974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8bcf56c4d-dgrfq,Uid:9e768d90-8245-40d6-9d8b-1cd06fa1a338,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.018356 kubelet[2556]: E0913 00:11:36.018296 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.018452 kubelet[2556]: E0913 00:11:36.018392 2556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8bcf56c4d-dgrfq" Sep 13 00:11:36.018452 kubelet[2556]: E0913 00:11:36.018423 2556 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8bcf56c4d-dgrfq" Sep 13 00:11:36.018521 kubelet[2556]: E0913 00:11:36.018487 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8bcf56c4d-dgrfq_calico-system(9e768d90-8245-40d6-9d8b-1cd06fa1a338)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8bcf56c4d-dgrfq_calico-system(9e768d90-8245-40d6-9d8b-1cd06fa1a338)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8bcf56c4d-dgrfq" podUID="9e768d90-8245-40d6-9d8b-1cd06fa1a338" Sep 13 00:11:36.051286 containerd[1465]: time="2025-09-13T00:11:36.051216447Z" level=error msg="Failed to destroy network for sandbox \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.051623 containerd[1465]: time="2025-09-13T00:11:36.051595581Z" level=error msg="encountered an error cleaning up failed sandbox \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.051667 containerd[1465]: time="2025-09-13T00:11:36.051649379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb584c8c5-4nwz8,Uid:7a177557-980a-4069-9ba1-1de68d33d2df,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.051972 kubelet[2556]: E0913 00:11:36.051919 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.052053 kubelet[2556]: E0913 00:11:36.052001 2556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb584c8c5-4nwz8" Sep 13 00:11:36.052053 kubelet[2556]: E0913 00:11:36.052029 2556 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bb584c8c5-4nwz8" Sep 13 00:11:36.052123 kubelet[2556]: E0913 00:11:36.052089 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bb584c8c5-4nwz8_calico-system(7a177557-980a-4069-9ba1-1de68d33d2df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bb584c8c5-4nwz8_calico-system(7a177557-980a-4069-9ba1-1de68d33d2df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bb584c8c5-4nwz8" podUID="7a177557-980a-4069-9ba1-1de68d33d2df" Sep 13 00:11:36.506239 systemd[1]: Created slice kubepods-besteffort-pod093a73a0_183b_402a_9ff9_9d907062e092.slice - libcontainer container kubepods-besteffort-pod093a73a0_183b_402a_9ff9_9d907062e092.slice. 
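Every "failed (add)" and "failed (delete)" in the cascade above is the same single stat: calico/node writes the host's node name to /var/lib/calico/nodename once it is running, and the Calico CNI plugin refuses to wire or unwire any pod until that file exists, which it cannot before the install-cni container (started at 00:11:33) and calico/node have finished. A sketch of the plugin-side check under that assumption; this is illustrative, not projectcalico's actual source:

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path every failing sandbox operation above stats.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename reproduces the error text seen in the log when the file is
// missing; the CNI binary runs with the host's /var/lib/calico mounted in.
func readNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// err stringifies as "stat /var/lib/calico/nodename: no such file or
		// directory", matching the log lines verbatim.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	if name, err := readNodename(); err != nil {
		fmt.Println("add failed:", err) // the pre-calico/node state logged above
	} else {
		fmt.Println("node:", name)
	}
}

Kubelet keeps retrying these sandbox operations with backoff, so once calico/node writes the file the same RunPodSandbox calls should start succeeding without intervention.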
Sep 13 00:11:36.508824 containerd[1465]: time="2025-09-13T00:11:36.508768434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6249r,Uid:093a73a0-183b-402a-9ff9-9d907062e092,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:36.570584 containerd[1465]: time="2025-09-13T00:11:36.570528273Z" level=error msg="Failed to destroy network for sandbox \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.570960 containerd[1465]: time="2025-09-13T00:11:36.570927006Z" level=error msg="encountered an error cleaning up failed sandbox \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.570999 containerd[1465]: time="2025-09-13T00:11:36.570974252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6249r,Uid:093a73a0-183b-402a-9ff9-9d907062e092,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.571297 kubelet[2556]: E0913 00:11:36.571241 2556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.571417 kubelet[2556]: E0913 00:11:36.571395 2556 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6249r" Sep 13 00:11:36.571466 kubelet[2556]: E0913 00:11:36.571449 2556 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6249r" Sep 13 00:11:36.571543 kubelet[2556]: E0913 00:11:36.571517 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6249r_calico-system(093a73a0-183b-402a-9ff9-9d907062e092)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6249r_calico-system(093a73a0-183b-402a-9ff9-9d907062e092)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6249r" podUID="093a73a0-183b-402a-9ff9-9d907062e092" Sep 13 00:11:36.573989 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735-shm.mount: Deactivated successfully. Sep 13 00:11:36.697665 kubelet[2556]: I0913 00:11:36.697614 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Sep 13 00:11:36.698234 kubelet[2556]: I0913 00:11:36.698213 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Sep 13 00:11:36.700318 kubelet[2556]: I0913 00:11:36.700292 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Sep 13 00:11:36.701165 containerd[1465]: time="2025-09-13T00:11:36.701130937Z" level=info msg="StopPodSandbox for \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\"" Sep 13 00:11:36.701225 containerd[1465]: time="2025-09-13T00:11:36.701164625Z" level=info msg="StopPodSandbox for \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\"" Sep 13 00:11:36.701937 kubelet[2556]: I0913 00:11:36.701736 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Sep 13 00:11:36.702191 containerd[1465]: time="2025-09-13T00:11:36.702108226Z" level=info msg="StopPodSandbox for \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\"" Sep 13 00:11:36.702418 containerd[1465]: time="2025-09-13T00:11:36.702371817Z" level=info msg="StopPodSandbox for \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\"" Sep 13 00:11:36.706449 containerd[1465]: time="2025-09-13T00:11:36.706253334Z" level=info msg="Ensure that sandbox 0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314 in task-service has been cleanup successfully" Sep 13 00:11:36.706449 containerd[1465]: time="2025-09-13T00:11:36.706333004Z" level=info msg="Ensure that sandbox b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735 in task-service has been cleanup successfully" Sep 13 00:11:36.706850 containerd[1465]: time="2025-09-13T00:11:36.706252723Z" level=info msg="Ensure that sandbox 870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473 in task-service has been cleanup successfully" Sep 13 00:11:36.708822 containerd[1465]: time="2025-09-13T00:11:36.706255780Z" level=info msg="Ensure that sandbox dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38 in task-service has been cleanup successfully" Sep 13 00:11:36.711392 kubelet[2556]: I0913 00:11:36.711331 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Sep 13 00:11:36.713987 containerd[1465]: time="2025-09-13T00:11:36.713449788Z" level=info msg="StopPodSandbox for \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\"" Sep 13 00:11:36.713987 containerd[1465]: time="2025-09-13T00:11:36.713687106Z" level=info msg="Ensure that sandbox 
bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182 in task-service has been cleanup successfully" Sep 13 00:11:36.717679 kubelet[2556]: I0913 00:11:36.717646 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Sep 13 00:11:36.719247 containerd[1465]: time="2025-09-13T00:11:36.718734603Z" level=info msg="StopPodSandbox for \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\"" Sep 13 00:11:36.719247 containerd[1465]: time="2025-09-13T00:11:36.719011451Z" level=info msg="Ensure that sandbox 4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a in task-service has been cleanup successfully" Sep 13 00:11:36.721535 kubelet[2556]: I0913 00:11:36.721502 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Sep 13 00:11:36.722424 containerd[1465]: time="2025-09-13T00:11:36.722394184Z" level=info msg="StopPodSandbox for \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\"" Sep 13 00:11:36.723256 containerd[1465]: time="2025-09-13T00:11:36.723230798Z" level=info msg="Ensure that sandbox 63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c in task-service has been cleanup successfully" Sep 13 00:11:36.723630 kubelet[2556]: I0913 00:11:36.723609 2556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Sep 13 00:11:36.724748 containerd[1465]: time="2025-09-13T00:11:36.724723064Z" level=info msg="StopPodSandbox for \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\"" Sep 13 00:11:36.728141 containerd[1465]: time="2025-09-13T00:11:36.728101749Z" level=info msg="Ensure that sandbox 0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce in task-service has been cleanup successfully" Sep 13 00:11:36.772630 containerd[1465]: time="2025-09-13T00:11:36.772381301Z" level=error msg="StopPodSandbox for \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\" failed" error="failed to destroy network for sandbox \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.773838 kubelet[2556]: E0913 00:11:36.773621 2556 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Sep 13 00:11:36.773838 kubelet[2556]: E0913 00:11:36.773688 2556 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314"} Sep 13 00:11:36.773838 kubelet[2556]: E0913 00:11:36.773752 2556 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"18e54af7-8bec-40b9-9191-5e12c28dbbdd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:36.773838 kubelet[2556]: E0913 00:11:36.773793 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"18e54af7-8bec-40b9-9191-5e12c28dbbdd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58f9bc44bc-gqfmq" podUID="18e54af7-8bec-40b9-9191-5e12c28dbbdd" Sep 13 00:11:36.777930 containerd[1465]: time="2025-09-13T00:11:36.777810527Z" level=error msg="StopPodSandbox for \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\" failed" error="failed to destroy network for sandbox \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.778874 kubelet[2556]: E0913 00:11:36.778320 2556 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Sep 13 00:11:36.778874 kubelet[2556]: E0913 00:11:36.778386 2556 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c"} Sep 13 00:11:36.778874 kubelet[2556]: E0913 00:11:36.778424 2556 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"093eb2fa-5825-44f6-93c6-61d3114099e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:36.778874 kubelet[2556]: E0913 00:11:36.778453 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"093eb2fa-5825-44f6-93c6-61d3114099e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58f9bc44bc-q854x" podUID="093eb2fa-5825-44f6-93c6-61d3114099e0" Sep 13 00:11:36.783096 containerd[1465]: time="2025-09-13T00:11:36.783033075Z" level=error msg="StopPodSandbox for 
\"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\" failed" error="failed to destroy network for sandbox \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.784149 kubelet[2556]: E0913 00:11:36.783926 2556 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Sep 13 00:11:36.784149 kubelet[2556]: E0913 00:11:36.784007 2556 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473"} Sep 13 00:11:36.784149 kubelet[2556]: E0913 00:11:36.784065 2556 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a177557-980a-4069-9ba1-1de68d33d2df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:36.784149 kubelet[2556]: E0913 00:11:36.784094 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a177557-980a-4069-9ba1-1de68d33d2df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bb584c8c5-4nwz8" podUID="7a177557-980a-4069-9ba1-1de68d33d2df" Sep 13 00:11:36.790010 containerd[1465]: time="2025-09-13T00:11:36.789954164Z" level=error msg="StopPodSandbox for \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\" failed" error="failed to destroy network for sandbox \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.790010 containerd[1465]: time="2025-09-13T00:11:36.789998765Z" level=error msg="StopPodSandbox for \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\" failed" error="failed to destroy network for sandbox \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.790166 containerd[1465]: time="2025-09-13T00:11:36.789954204Z" level=error msg="StopPodSandbox for \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\" failed" error="failed to destroy network for 
sandbox \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.790351 kubelet[2556]: E0913 00:11:36.790308 2556 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Sep 13 00:11:36.790597 kubelet[2556]: E0913 00:11:36.790469 2556 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182"} Sep 13 00:11:36.790597 kubelet[2556]: E0913 00:11:36.790311 2556 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Sep 13 00:11:36.790597 kubelet[2556]: E0913 00:11:36.790538 2556 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735"} Sep 13 00:11:36.790997 kubelet[2556]: E0913 00:11:36.790705 2556 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"efb62d5d-3a33-4337-b3ca-e67aed5932c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:36.790997 kubelet[2556]: E0913 00:11:36.790728 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"efb62d5d-3a33-4337-b3ca-e67aed5932c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-t2khr" podUID="efb62d5d-3a33-4337-b3ca-e67aed5932c5" Sep 13 00:11:36.790997 kubelet[2556]: E0913 00:11:36.790354 2556 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Sep 13 00:11:36.790997 kubelet[2556]: E0913 
00:11:36.790791 2556 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38"} Sep 13 00:11:36.791139 kubelet[2556]: E0913 00:11:36.790810 2556 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e768d90-8245-40d6-9d8b-1cd06fa1a338\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:36.791139 kubelet[2556]: E0913 00:11:36.790839 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e768d90-8245-40d6-9d8b-1cd06fa1a338\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8bcf56c4d-dgrfq" podUID="9e768d90-8245-40d6-9d8b-1cd06fa1a338" Sep 13 00:11:36.791318 kubelet[2556]: E0913 00:11:36.790572 2556 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"093a73a0-183b-402a-9ff9-9d907062e092\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:36.791318 kubelet[2556]: E0913 00:11:36.791278 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"093a73a0-183b-402a-9ff9-9d907062e092\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6249r" podUID="093a73a0-183b-402a-9ff9-9d907062e092" Sep 13 00:11:36.798852 containerd[1465]: time="2025-09-13T00:11:36.798762976Z" level=error msg="StopPodSandbox for \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\" failed" error="failed to destroy network for sandbox \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.799151 kubelet[2556]: E0913 00:11:36.799098 2556 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Sep 13 00:11:36.799221 kubelet[2556]: E0913 00:11:36.799167 2556 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a"} Sep 13 00:11:36.799221 kubelet[2556]: E0913 00:11:36.799205 2556 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:36.799362 kubelet[2556]: E0913 00:11:36.799230 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-6x2wp" podUID="7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc" Sep 13 00:11:36.802199 containerd[1465]: time="2025-09-13T00:11:36.802138405Z" level=error msg="StopPodSandbox for \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\" failed" error="failed to destroy network for sandbox \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:36.802466 kubelet[2556]: E0913 00:11:36.802429 2556 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Sep 13 00:11:36.802466 kubelet[2556]: E0913 00:11:36.802457 2556 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce"} Sep 13 00:11:36.802554 kubelet[2556]: E0913 00:11:36.802479 2556 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"572e4883-43be-47f8-9d71-340af499cdf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:36.802554 kubelet[2556]: E0913 00:11:36.802498 2556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"572e4883-43be-47f8-9d71-340af499cdf4\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8ddht" podUID="572e4883-43be-47f8-9d71-340af499cdf4" Sep 13 00:11:42.547062 systemd[1]: Started sshd@9-10.0.0.108:22-10.0.0.1:59676.service - OpenSSH per-connection server daemon (10.0.0.1:59676). Sep 13 00:11:42.616947 sshd[3699]: Accepted publickey for core from 10.0.0.1 port 59676 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:11:42.618952 sshd[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:42.625459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount316109119.mount: Deactivated successfully. Sep 13 00:11:42.628543 systemd-logind[1450]: New session 10 of user core. Sep 13 00:11:42.636946 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:11:42.959520 kubelet[2556]: I0913 00:11:42.959468 2556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:11:42.960212 kubelet[2556]: E0913 00:11:42.959994 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:45.269297 kubelet[2556]: E0913 00:11:45.269181 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:45.384604 containerd[1465]: time="2025-09-13T00:11:45.383694893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:45.387734 containerd[1465]: time="2025-09-13T00:11:45.387684451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:11:45.389372 containerd[1465]: time="2025-09-13T00:11:45.389334476Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:45.394461 containerd[1465]: time="2025-09-13T00:11:45.394393539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:45.395403 containerd[1465]: time="2025-09-13T00:11:45.395363917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.699033193s" Sep 13 00:11:45.395479 containerd[1465]: time="2025-09-13T00:11:45.395407114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:11:45.419279 containerd[1465]: time="2025-09-13T00:11:45.419215371Z" level=info msg="CreateContainer within sandbox \"239337a09257ddebb0a0bce81218537a663bc505d953889e29ed6132c1b86bb4\" for 
container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:11:45.456925 sshd[3699]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:45.461862 systemd[1]: sshd@9-10.0.0.108:22-10.0.0.1:59676.service: Deactivated successfully. Sep 13 00:11:45.463257 containerd[1465]: time="2025-09-13T00:11:45.463206394Z" level=info msg="CreateContainer within sandbox \"239337a09257ddebb0a0bce81218537a663bc505d953889e29ed6132c1b86bb4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a132fc9746bcc1b4cfa39d6e8c32310ed177759e8852d8f3c292104e0641663f\"" Sep 13 00:11:45.464323 containerd[1465]: time="2025-09-13T00:11:45.464274518Z" level=info msg="StartContainer for \"a132fc9746bcc1b4cfa39d6e8c32310ed177759e8852d8f3c292104e0641663f\"" Sep 13 00:11:45.464553 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:11:45.465950 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:11:45.467552 systemd-logind[1450]: Removed session 10. Sep 13 00:11:45.532027 systemd[1]: Started cri-containerd-a132fc9746bcc1b4cfa39d6e8c32310ed177759e8852d8f3c292104e0641663f.scope - libcontainer container a132fc9746bcc1b4cfa39d6e8c32310ed177759e8852d8f3c292104e0641663f. Sep 13 00:11:45.574811 containerd[1465]: time="2025-09-13T00:11:45.574721671Z" level=info msg="StartContainer for \"a132fc9746bcc1b4cfa39d6e8c32310ed177759e8852d8f3c292104e0641663f\" returns successfully" Sep 13 00:11:45.677496 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:11:45.677665 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 13 00:11:45.802975 containerd[1465]: time="2025-09-13T00:11:45.802470023Z" level=info msg="StopPodSandbox for \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\"" Sep 13 00:11:46.445329 containerd[1465]: 2025-09-13 00:11:45.914 [INFO][3784] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Sep 13 00:11:46.445329 containerd[1465]: 2025-09-13 00:11:45.915 [INFO][3784] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" iface="eth0" netns="/var/run/netns/cni-33468cd8-4568-9e47-369b-62307bb8908d" Sep 13 00:11:46.445329 containerd[1465]: 2025-09-13 00:11:45.916 [INFO][3784] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" iface="eth0" netns="/var/run/netns/cni-33468cd8-4568-9e47-369b-62307bb8908d" Sep 13 00:11:46.445329 containerd[1465]: 2025-09-13 00:11:45.916 [INFO][3784] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" iface="eth0" netns="/var/run/netns/cni-33468cd8-4568-9e47-369b-62307bb8908d" Sep 13 00:11:46.445329 containerd[1465]: 2025-09-13 00:11:45.916 [INFO][3784] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Sep 13 00:11:46.445329 containerd[1465]: 2025-09-13 00:11:45.916 [INFO][3784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Sep 13 00:11:46.445329 containerd[1465]: 2025-09-13 00:11:46.421 [INFO][3792] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" HandleID="k8s-pod-network.dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Workload="localhost-k8s-whisker--8bcf56c4d--dgrfq-eth0" Sep 13 00:11:46.445329 containerd[1465]: 2025-09-13 00:11:46.423 [INFO][3792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:46.445329 containerd[1465]: 2025-09-13 00:11:46.423 [INFO][3792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:46.445329 containerd[1465]: 2025-09-13 00:11:46.432 [WARNING][3792] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" HandleID="k8s-pod-network.dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Workload="localhost-k8s-whisker--8bcf56c4d--dgrfq-eth0" Sep 13 00:11:46.445329 containerd[1465]: 2025-09-13 00:11:46.432 [INFO][3792] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" HandleID="k8s-pod-network.dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Workload="localhost-k8s-whisker--8bcf56c4d--dgrfq-eth0" Sep 13 00:11:46.445329 containerd[1465]: 2025-09-13 00:11:46.434 [INFO][3792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:46.445329 containerd[1465]: 2025-09-13 00:11:46.441 [INFO][3784] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Sep 13 00:11:46.446357 containerd[1465]: time="2025-09-13T00:11:46.445544569Z" level=info msg="TearDown network for sandbox \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\" successfully" Sep 13 00:11:46.446357 containerd[1465]: time="2025-09-13T00:11:46.445584879Z" level=info msg="StopPodSandbox for \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\" returns successfully" Sep 13 00:11:46.451518 systemd[1]: run-netns-cni\x2d33468cd8\x2d4568\x2d9e47\x2d369b\x2d62307bb8908d.mount: Deactivated successfully. 
Sep 13 00:11:46.577461 kubelet[2556]: I0913 00:11:46.577396 2556 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtrcz\" (UniqueName: \"kubernetes.io/projected/9e768d90-8245-40d6-9d8b-1cd06fa1a338-kube-api-access-rtrcz\") pod \"9e768d90-8245-40d6-9d8b-1cd06fa1a338\" (UID: \"9e768d90-8245-40d6-9d8b-1cd06fa1a338\") " Sep 13 00:11:46.577461 kubelet[2556]: I0913 00:11:46.577441 2556 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9e768d90-8245-40d6-9d8b-1cd06fa1a338-whisker-backend-key-pair\") pod \"9e768d90-8245-40d6-9d8b-1cd06fa1a338\" (UID: \"9e768d90-8245-40d6-9d8b-1cd06fa1a338\") " Sep 13 00:11:46.577461 kubelet[2556]: I0913 00:11:46.577470 2556 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e768d90-8245-40d6-9d8b-1cd06fa1a338-whisker-ca-bundle\") pod \"9e768d90-8245-40d6-9d8b-1cd06fa1a338\" (UID: \"9e768d90-8245-40d6-9d8b-1cd06fa1a338\") " Sep 13 00:11:46.578203 kubelet[2556]: I0913 00:11:46.578124 2556 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e768d90-8245-40d6-9d8b-1cd06fa1a338-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9e768d90-8245-40d6-9d8b-1cd06fa1a338" (UID: "9e768d90-8245-40d6-9d8b-1cd06fa1a338"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:11:46.583045 kubelet[2556]: I0913 00:11:46.583014 2556 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e768d90-8245-40d6-9d8b-1cd06fa1a338-kube-api-access-rtrcz" (OuterVolumeSpecName: "kube-api-access-rtrcz") pod "9e768d90-8245-40d6-9d8b-1cd06fa1a338" (UID: "9e768d90-8245-40d6-9d8b-1cd06fa1a338"). InnerVolumeSpecName "kube-api-access-rtrcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:11:46.583136 kubelet[2556]: I0913 00:11:46.583102 2556 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e768d90-8245-40d6-9d8b-1cd06fa1a338-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9e768d90-8245-40d6-9d8b-1cd06fa1a338" (UID: "9e768d90-8245-40d6-9d8b-1cd06fa1a338"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:11:46.585935 systemd[1]: var-lib-kubelet-pods-9e768d90\x2d8245\x2d40d6\x2d9d8b\x2d1cd06fa1a338-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drtrcz.mount: Deactivated successfully. Sep 13 00:11:46.586100 systemd[1]: var-lib-kubelet-pods-9e768d90\x2d8245\x2d40d6\x2d9d8b\x2d1cd06fa1a338-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
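The \x2d and \x7e runs in these mount unit names are systemd's unit-name escaping, not corruption: a path becomes a mount-unit name by dropping the outer slashes, mapping '/' to '-', and hex-escaping a literal '-', '\', a leading '.', and any byte outside [A-Za-z0-9:_.]. A small sketch of that convention (illustrative, modeled on systemd-escape --path rather than systemd's source; escapeUnitPath is a hypothetical name):

    package main

    import (
        "fmt"
        "strings"
    )

    func valid(c byte) bool {
        return c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' ||
            c >= '0' && c <= '9' || c == ':' || c == '_' || c == '.'
    }

    // escapeUnitPath mimics `systemd-escape --path`.
    func escapeUnitPath(p string) string {
        p = strings.Trim(p, "/")
        if p == "" {
            return "-"
        }
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-') // the path separator becomes '-'
            case (c == '.' && i == 0) || c == '-' || c == '\\' || !valid(c):
                fmt.Fprintf(&b, `\x%02x`, c) // e.g. '-' -> \x2d, '~' -> \x7e
            default:
                b.WriteByte(c)
            }
        }
        return b.String()
    }

    func main() {
        p := "/var/lib/kubelet/pods/9e768d90-8245-40d6-9d8b-1cd06fa1a338/volumes/kubernetes.io~projected/kube-api-access-rtrcz"
        fmt.Println(escapeUnitPath(p) + ".mount") // matches the unit name deactivated in the log above
    }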
Sep 13 00:11:46.678436 kubelet[2556]: I0913 00:11:46.678371 2556 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtrcz\" (UniqueName: \"kubernetes.io/projected/9e768d90-8245-40d6-9d8b-1cd06fa1a338-kube-api-access-rtrcz\") on node \"localhost\" DevicePath \"\"" Sep 13 00:11:46.678436 kubelet[2556]: I0913 00:11:46.678417 2556 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9e768d90-8245-40d6-9d8b-1cd06fa1a338-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:11:46.678436 kubelet[2556]: I0913 00:11:46.678429 2556 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e768d90-8245-40d6-9d8b-1cd06fa1a338-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:11:47.289813 systemd[1]: Removed slice kubepods-besteffort-pod9e768d90_8245_40d6_9d8b_1cd06fa1a338.slice - libcontainer container kubepods-besteffort-pod9e768d90_8245_40d6_9d8b_1cd06fa1a338.slice. Sep 13 00:11:47.307773 kubelet[2556]: I0913 00:11:47.306454 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sp2d7" podStartSLOduration=2.9930900879999998 podStartE2EDuration="25.306433446s" podCreationTimestamp="2025-09-13 00:11:22 +0000 UTC" firstStartedPulling="2025-09-13 00:11:23.083134454 +0000 UTC m=+18.672954976" lastFinishedPulling="2025-09-13 00:11:45.396477812 +0000 UTC m=+40.986298334" observedRunningTime="2025-09-13 00:11:46.374624792 +0000 UTC m=+41.964445324" watchObservedRunningTime="2025-09-13 00:11:47.306433446 +0000 UTC m=+42.896253968" Sep 13 00:11:47.352094 systemd[1]: Created slice kubepods-besteffort-podf7caa556_3739_4e50_98e0_291bb1e6f4b7.slice - libcontainer container kubepods-besteffort-podf7caa556_3739_4e50_98e0_291bb1e6f4b7.slice. 
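The pod_startup_latency_tracker entry above encodes one line of arithmetic: podStartSLOduration is the end-to-end startup time minus the image-pull window, i.e. 25.306433446s - (lastFinishedPulling - firstStartedPulling) = 2.993090088s. A sketch reproducing it from the logged timestamps (all values copied verbatim from the entry):

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-09-13 00:11:22 +0000 UTC")             // podCreationTimestamp
        firstPull := mustParse("2025-09-13 00:11:23.083134454 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2025-09-13 00:11:45.396477812 +0000 UTC")  // lastFinishedPulling
        running := mustParse("2025-09-13 00:11:47.306433446 +0000 UTC")   // watchObservedRunningTime

        e2e := running.Sub(created)        // 25.306433446s = podStartE2EDuration
        pulling := lastPull.Sub(firstPull) // 22.313343358s spent pulling the calico/node image
        fmt.Println(e2e, pulling, e2e-pulling) // the last value is podStartSLOduration
    }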
Sep 13 00:11:47.383818 kubelet[2556]: I0913 00:11:47.382564 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7caa556-3739-4e50-98e0-291bb1e6f4b7-whisker-ca-bundle\") pod \"whisker-5758544b55-26wzs\" (UID: \"f7caa556-3739-4e50-98e0-291bb1e6f4b7\") " pod="calico-system/whisker-5758544b55-26wzs" Sep 13 00:11:47.383818 kubelet[2556]: I0913 00:11:47.382626 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f7caa556-3739-4e50-98e0-291bb1e6f4b7-whisker-backend-key-pair\") pod \"whisker-5758544b55-26wzs\" (UID: \"f7caa556-3739-4e50-98e0-291bb1e6f4b7\") " pod="calico-system/whisker-5758544b55-26wzs" Sep 13 00:11:47.383818 kubelet[2556]: I0913 00:11:47.382641 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kcvs\" (UniqueName: \"kubernetes.io/projected/f7caa556-3739-4e50-98e0-291bb1e6f4b7-kube-api-access-4kcvs\") pod \"whisker-5758544b55-26wzs\" (UID: \"f7caa556-3739-4e50-98e0-291bb1e6f4b7\") " pod="calico-system/whisker-5758544b55-26wzs" Sep 13 00:11:47.510351 containerd[1465]: time="2025-09-13T00:11:47.508583148Z" level=info msg="StopPodSandbox for \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\"" Sep 13 00:11:47.657824 containerd[1465]: time="2025-09-13T00:11:47.657751570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5758544b55-26wzs,Uid:f7caa556-3739-4e50-98e0-291bb1e6f4b7,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:47.663639 containerd[1465]: 2025-09-13 00:11:47.600 [INFO][3952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Sep 13 00:11:47.663639 containerd[1465]: 2025-09-13 00:11:47.600 [INFO][3952] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" iface="eth0" netns="/var/run/netns/cni-220b8865-80e8-1821-26a7-e339d3b03753" Sep 13 00:11:47.663639 containerd[1465]: 2025-09-13 00:11:47.600 [INFO][3952] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" iface="eth0" netns="/var/run/netns/cni-220b8865-80e8-1821-26a7-e339d3b03753" Sep 13 00:11:47.663639 containerd[1465]: 2025-09-13 00:11:47.601 [INFO][3952] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" iface="eth0" netns="/var/run/netns/cni-220b8865-80e8-1821-26a7-e339d3b03753" Sep 13 00:11:47.663639 containerd[1465]: 2025-09-13 00:11:47.601 [INFO][3952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Sep 13 00:11:47.663639 containerd[1465]: 2025-09-13 00:11:47.601 [INFO][3952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Sep 13 00:11:47.663639 containerd[1465]: 2025-09-13 00:11:47.644 [INFO][3971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" HandleID="k8s-pod-network.63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:11:47.663639 containerd[1465]: 2025-09-13 00:11:47.644 [INFO][3971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:47.663639 containerd[1465]: 2025-09-13 00:11:47.644 [INFO][3971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:47.663639 containerd[1465]: 2025-09-13 00:11:47.652 [WARNING][3971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" HandleID="k8s-pod-network.63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:11:47.663639 containerd[1465]: 2025-09-13 00:11:47.652 [INFO][3971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" HandleID="k8s-pod-network.63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:11:47.663639 containerd[1465]: 2025-09-13 00:11:47.654 [INFO][3971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:47.663639 containerd[1465]: 2025-09-13 00:11:47.658 [INFO][3952] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Sep 13 00:11:47.664704 containerd[1465]: time="2025-09-13T00:11:47.664224330Z" level=info msg="TearDown network for sandbox \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\" successfully" Sep 13 00:11:47.664704 containerd[1465]: time="2025-09-13T00:11:47.664256374Z" level=info msg="StopPodSandbox for \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\" returns successfully" Sep 13 00:11:47.665225 containerd[1465]: time="2025-09-13T00:11:47.665196900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f9bc44bc-q854x,Uid:093eb2fa-5825-44f6-93c6-61d3114099e0,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:11:47.669361 systemd[1]: run-netns-cni\x2d220b8865\x2d80e8\x2d1821\x2d26a7\x2de339d3b03753.mount: Deactivated successfully. 
Sep 13 00:11:47.803823 kernel: bpftool[4051]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:11:47.892106 systemd-networkd[1409]: cali5917225f54c: Link UP Sep 13 00:11:47.892406 systemd-networkd[1409]: cali5917225f54c: Gained carrier Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.756 [INFO][4006] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.778 [INFO][4006] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0 calico-apiserver-58f9bc44bc- calico-apiserver 093eb2fa-5825-44f6-93c6-61d3114099e0 961 0 2025-09-13 00:11:19 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58f9bc44bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-58f9bc44bc-q854x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5917225f54c [] [] }} ContainerID="c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-q854x" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-" Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.778 [INFO][4006] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-q854x" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.818 [INFO][4034] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" HandleID="k8s-pod-network.c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.818 [INFO][4034] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" HandleID="k8s-pod-network.c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-58f9bc44bc-q854x", "timestamp":"2025-09-13 00:11:47.818058187 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.818 [INFO][4034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.818 [INFO][4034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.818 [INFO][4034] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.831 [INFO][4034] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" host="localhost" Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.853 [INFO][4034] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.858 [INFO][4034] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.860 [INFO][4034] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.862 [INFO][4034] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.862 [INFO][4034] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" host="localhost" Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.863 [INFO][4034] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.867 [INFO][4034] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" host="localhost" Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.873 [INFO][4034] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" host="localhost" Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.873 [INFO][4034] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" host="localhost" Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.873 [INFO][4034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:47.911009 containerd[1465]: 2025-09-13 00:11:47.873 [INFO][4034] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" HandleID="k8s-pod-network.c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:11:47.911882 containerd[1465]: 2025-09-13 00:11:47.877 [INFO][4006] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-q854x" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0", GenerateName:"calico-apiserver-58f9bc44bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"093eb2fa-5825-44f6-93c6-61d3114099e0", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f9bc44bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-58f9bc44bc-q854x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5917225f54c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:47.911882 containerd[1465]: 2025-09-13 00:11:47.878 [INFO][4006] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-q854x" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:11:47.911882 containerd[1465]: 2025-09-13 00:11:47.878 [INFO][4006] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5917225f54c ContainerID="c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-q854x" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:11:47.911882 containerd[1465]: 2025-09-13 00:11:47.893 [INFO][4006] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-q854x" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:11:47.911882 containerd[1465]: 2025-09-13 00:11:47.893 [INFO][4006] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-q854x" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0", GenerateName:"calico-apiserver-58f9bc44bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"093eb2fa-5825-44f6-93c6-61d3114099e0", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f9bc44bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e", Pod:"calico-apiserver-58f9bc44bc-q854x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5917225f54c", MAC:"42:7a:28:51:b9:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:47.911882 containerd[1465]: 2025-09-13 00:11:47.908 [INFO][4006] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-q854x" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:11:47.952559 containerd[1465]: time="2025-09-13T00:11:47.952378445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:47.952559 containerd[1465]: time="2025-09-13T00:11:47.952499998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:47.952559 containerd[1465]: time="2025-09-13T00:11:47.952520168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:47.952897 containerd[1465]: time="2025-09-13T00:11:47.952674976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:47.979882 systemd[1]: Started cri-containerd-c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e.scope - libcontainer container c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e. 
Sep 13 00:11:47.998745 systemd-networkd[1409]: cali87d9a0fe7d3: Link UP Sep 13 00:11:48.001494 systemd-networkd[1409]: cali87d9a0fe7d3: Gained carrier Sep 13 00:11:48.003381 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.760 [INFO][3984] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.777 [INFO][3984] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5758544b55--26wzs-eth0 whisker-5758544b55- calico-system f7caa556-3739-4e50-98e0-291bb1e6f4b7 956 0 2025-09-13 00:11:47 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5758544b55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5758544b55-26wzs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali87d9a0fe7d3 [] [] }} ContainerID="90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" Namespace="calico-system" Pod="whisker-5758544b55-26wzs" WorkloadEndpoint="localhost-k8s-whisker--5758544b55--26wzs-" Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.777 [INFO][3984] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" Namespace="calico-system" Pod="whisker-5758544b55-26wzs" WorkloadEndpoint="localhost-k8s-whisker--5758544b55--26wzs-eth0" Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.825 [INFO][4037] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" HandleID="k8s-pod-network.90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" Workload="localhost-k8s-whisker--5758544b55--26wzs-eth0" Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.825 [INFO][4037] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" HandleID="k8s-pod-network.90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" Workload="localhost-k8s-whisker--5758544b55--26wzs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5758544b55-26wzs", "timestamp":"2025-09-13 00:11:47.825426504 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.825 [INFO][4037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.873 [INFO][4037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.873 [INFO][4037] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.930 [INFO][4037] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" host="localhost" Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.939 [INFO][4037] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.962 [INFO][4037] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.964 [INFO][4037] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.967 [INFO][4037] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.967 [INFO][4037] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" host="localhost" Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.971 [INFO][4037] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.980 [INFO][4037] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" host="localhost" Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.989 [INFO][4037] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" host="localhost" Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.989 [INFO][4037] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" host="localhost" Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.989 [INFO][4037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:48.021291 containerd[1465]: 2025-09-13 00:11:47.989 [INFO][4037] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" HandleID="k8s-pod-network.90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" Workload="localhost-k8s-whisker--5758544b55--26wzs-eth0" Sep 13 00:11:48.022333 containerd[1465]: 2025-09-13 00:11:47.993 [INFO][3984] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" Namespace="calico-system" Pod="whisker-5758544b55-26wzs" WorkloadEndpoint="localhost-k8s-whisker--5758544b55--26wzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5758544b55--26wzs-eth0", GenerateName:"whisker-5758544b55-", Namespace:"calico-system", SelfLink:"", UID:"f7caa556-3739-4e50-98e0-291bb1e6f4b7", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5758544b55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5758544b55-26wzs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali87d9a0fe7d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:48.022333 containerd[1465]: 2025-09-13 00:11:47.993 [INFO][3984] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" Namespace="calico-system" Pod="whisker-5758544b55-26wzs" WorkloadEndpoint="localhost-k8s-whisker--5758544b55--26wzs-eth0" Sep 13 00:11:48.022333 containerd[1465]: 2025-09-13 00:11:47.993 [INFO][3984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87d9a0fe7d3 ContainerID="90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" Namespace="calico-system" Pod="whisker-5758544b55-26wzs" WorkloadEndpoint="localhost-k8s-whisker--5758544b55--26wzs-eth0" Sep 13 00:11:48.022333 containerd[1465]: 2025-09-13 00:11:48.003 [INFO][3984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" Namespace="calico-system" Pod="whisker-5758544b55-26wzs" WorkloadEndpoint="localhost-k8s-whisker--5758544b55--26wzs-eth0" Sep 13 00:11:48.022333 containerd[1465]: 2025-09-13 00:11:48.003 [INFO][3984] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5758544b55--26wzs-eth0", GenerateName:"whisker-5758544b55-", Namespace:"calico-system", SelfLink:"", UID:"f7caa556-3739-4e50-98e0-291bb1e6f4b7", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5758544b55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c", Pod:"whisker-5758544b55-26wzs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali87d9a0fe7d3", MAC:"f2:99:d0:1c:6e:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:48.022333 containerd[1465]: 2025-09-13 00:11:48.017 [INFO][3984] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c" Namespace="calico-system" Pod="whisker-5758544b55-26wzs" WorkloadEndpoint="localhost-k8s-whisker--5758544b55--26wzs-eth0" Sep 13 00:11:48.042551 containerd[1465]: time="2025-09-13T00:11:48.039490403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f9bc44bc-q854x,Uid:093eb2fa-5825-44f6-93c6-61d3114099e0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e\"" Sep 13 00:11:48.045346 containerd[1465]: time="2025-09-13T00:11:48.044298757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:11:48.048601 containerd[1465]: time="2025-09-13T00:11:48.048333560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:48.048601 containerd[1465]: time="2025-09-13T00:11:48.048525182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:48.048601 containerd[1465]: time="2025-09-13T00:11:48.048580422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:48.048891 containerd[1465]: time="2025-09-13T00:11:48.048836132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:48.081057 systemd[1]: Started cri-containerd-90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c.scope - libcontainer container 90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c. 
Sep 13 00:11:48.099645 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:11:48.135268 systemd-networkd[1409]: vxlan.calico: Link UP Sep 13 00:11:48.135716 systemd-networkd[1409]: vxlan.calico: Gained carrier Sep 13 00:11:48.144236 containerd[1465]: time="2025-09-13T00:11:48.144183817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5758544b55-26wzs,Uid:f7caa556-3739-4e50-98e0-291bb1e6f4b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c\"" Sep 13 00:11:48.500853 containerd[1465]: time="2025-09-13T00:11:48.500807812Z" level=info msg="StopPodSandbox for \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\"" Sep 13 00:11:48.503191 kubelet[2556]: I0913 00:11:48.503136 2556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e768d90-8245-40d6-9d8b-1cd06fa1a338" path="/var/lib/kubelet/pods/9e768d90-8245-40d6-9d8b-1cd06fa1a338/volumes" Sep 13 00:11:48.609150 containerd[1465]: 2025-09-13 00:11:48.567 [INFO][4215] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Sep 13 00:11:48.609150 containerd[1465]: 2025-09-13 00:11:48.567 [INFO][4215] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" iface="eth0" netns="/var/run/netns/cni-e015979a-b8f0-b9dd-5eb7-8a4ea96a8c2a" Sep 13 00:11:48.609150 containerd[1465]: 2025-09-13 00:11:48.567 [INFO][4215] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" iface="eth0" netns="/var/run/netns/cni-e015979a-b8f0-b9dd-5eb7-8a4ea96a8c2a" Sep 13 00:11:48.609150 containerd[1465]: 2025-09-13 00:11:48.568 [INFO][4215] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" iface="eth0" netns="/var/run/netns/cni-e015979a-b8f0-b9dd-5eb7-8a4ea96a8c2a" Sep 13 00:11:48.609150 containerd[1465]: 2025-09-13 00:11:48.568 [INFO][4215] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Sep 13 00:11:48.609150 containerd[1465]: 2025-09-13 00:11:48.568 [INFO][4215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Sep 13 00:11:48.609150 containerd[1465]: 2025-09-13 00:11:48.592 [INFO][4242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" HandleID="k8s-pod-network.870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Workload="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:11:48.609150 containerd[1465]: 2025-09-13 00:11:48.593 [INFO][4242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:48.609150 containerd[1465]: 2025-09-13 00:11:48.593 [INFO][4242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:48.609150 containerd[1465]: 2025-09-13 00:11:48.601 [WARNING][4242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" HandleID="k8s-pod-network.870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Workload="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:11:48.609150 containerd[1465]: 2025-09-13 00:11:48.601 [INFO][4242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" HandleID="k8s-pod-network.870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Workload="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:11:48.609150 containerd[1465]: 2025-09-13 00:11:48.603 [INFO][4242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:48.609150 containerd[1465]: 2025-09-13 00:11:48.605 [INFO][4215] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Sep 13 00:11:48.610127 containerd[1465]: time="2025-09-13T00:11:48.609328835Z" level=info msg="TearDown network for sandbox \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\" successfully" Sep 13 00:11:48.610127 containerd[1465]: time="2025-09-13T00:11:48.609367622Z" level=info msg="StopPodSandbox for \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\" returns successfully" Sep 13 00:11:48.610264 containerd[1465]: time="2025-09-13T00:11:48.610226805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb584c8c5-4nwz8,Uid:7a177557-980a-4069-9ba1-1de68d33d2df,Namespace:calico-system,Attempt:1,}" Sep 13 00:11:48.612499 systemd[1]: run-netns-cni\x2de015979a\x2db8f0\x2db9dd\x2d5eb7\x2d8a4ea96a8c2a.mount: Deactivated successfully. 
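The "[WARNING] Asked to release address but it doesn't exist. Ignoring" lines show the IPAM plugin treating a release of an unknown allocation as a no-op: CNI DEL must be idempotent, because kubelet retries the teardowns that failed earlier in this log. A sketch of that pattern (generic, not Calico's code; releaseByHandle and the in-memory map are stand-ins for its datastore):

    package main

    import "log"

    // releaseByHandle frees whatever was allocated under handleID; a missing
    // handle is logged and ignored so that a retried CNI DEL still succeeds.
    func releaseByHandle(alloc map[string]string, handleID string) {
        if _, ok := alloc[handleID]; !ok {
            log.Printf("WARNING: asked to release address but it doesn't exist; ignoring handleID=%q", handleID)
            return
        }
        delete(alloc, handleID)
    }

    func main() {
        alloc := map[string]string{} // nothing recorded for this sandbox: its ADD never completed
        releaseByHandle(alloc, "k8s-pod-network.870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473")
    }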
Sep 13 00:11:48.747404 systemd-networkd[1409]: cali856b6608dc8: Link UP Sep 13 00:11:48.747595 systemd-networkd[1409]: cali856b6608dc8: Gained carrier Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.678 [INFO][4255] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0 calico-kube-controllers-6bb584c8c5- calico-system 7a177557-980a-4069-9ba1-1de68d33d2df 976 0 2025-09-13 00:11:22 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bb584c8c5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6bb584c8c5-4nwz8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali856b6608dc8 [] [] }} ContainerID="346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" Namespace="calico-system" Pod="calico-kube-controllers-6bb584c8c5-4nwz8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-" Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.678 [INFO][4255] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" Namespace="calico-system" Pod="calico-kube-controllers-6bb584c8c5-4nwz8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.709 [INFO][4270] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" HandleID="k8s-pod-network.346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" Workload="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.709 [INFO][4270] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" HandleID="k8s-pod-network.346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" Workload="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6bb584c8c5-4nwz8", "timestamp":"2025-09-13 00:11:48.709598622 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.709 [INFO][4270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.709 [INFO][4270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.710 [INFO][4270] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.716 [INFO][4270] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" host="localhost" Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.720 [INFO][4270] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.724 [INFO][4270] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.726 [INFO][4270] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.728 [INFO][4270] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.728 [INFO][4270] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" host="localhost" Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.729 [INFO][4270] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98 Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.733 [INFO][4270] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" host="localhost" Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.740 [INFO][4270] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" host="localhost" Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.740 [INFO][4270] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" host="localhost" Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.740 [INFO][4270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
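(Aside: the ipam.go sequence above, confirm the host's affinity for block 192.168.88.128/26, load the block, assign one address, write the block back, ends with 192.168.88.131 being claimed. A loose Go sketch of that final scan; allocateFromBlock and the in-memory "used" set are illustrative stand-ins, not Calico's actual block document or persistence.)

```go
package main

import (
	"fmt"
	"net"
)

// allocateFromBlock loosely mirrors the assignment step logged above: scan
// the affine block for the first address not yet claimed and record the
// claim. The real code persists this by writing the block back to the
// datastore ("Writing block in order to claim IPs").
func allocateFromBlock(block *net.IPNet, used map[string]bool) (net.IP, error) {
	for ip := block.IP.Mask(block.Mask); block.Contains(ip); ip = nextIP(ip) {
		if !used[ip.String()] {
			used[ip.String()] = true
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %v exhausted", block)
}

// nextIP returns ip+1 without mutating its argument.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		if out[i]++; out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.88.128/26")
	used := map[string]bool{ // .128-.130 went to earlier endpoints in this log
		"192.168.88.128": true, "192.168.88.129": true, "192.168.88.130": true,
	}
	ip, err := allocateFromBlock(block, used)
	fmt.Println(ip, err) // 192.168.88.131 <nil>, matching "Successfully claimed IPs"
}
```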
Sep 13 00:11:48.764211 containerd[1465]: 2025-09-13 00:11:48.740 [INFO][4270] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" HandleID="k8s-pod-network.346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" Workload="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:11:48.765363 containerd[1465]: 2025-09-13 00:11:48.744 [INFO][4255] cni-plugin/k8s.go 418: Populated endpoint ContainerID="346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" Namespace="calico-system" Pod="calico-kube-controllers-6bb584c8c5-4nwz8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0", GenerateName:"calico-kube-controllers-6bb584c8c5-", Namespace:"calico-system", SelfLink:"", UID:"7a177557-980a-4069-9ba1-1de68d33d2df", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb584c8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6bb584c8c5-4nwz8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali856b6608dc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:48.765363 containerd[1465]: 2025-09-13 00:11:48.744 [INFO][4255] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" Namespace="calico-system" Pod="calico-kube-controllers-6bb584c8c5-4nwz8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:11:48.765363 containerd[1465]: 2025-09-13 00:11:48.744 [INFO][4255] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali856b6608dc8 ContainerID="346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" Namespace="calico-system" Pod="calico-kube-controllers-6bb584c8c5-4nwz8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:11:48.765363 containerd[1465]: 2025-09-13 00:11:48.746 [INFO][4255] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" Namespace="calico-system" Pod="calico-kube-controllers-6bb584c8c5-4nwz8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:11:48.765363 containerd[1465]: 2025-09-13 00:11:48.746 [INFO][4255] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" Namespace="calico-system" Pod="calico-kube-controllers-6bb584c8c5-4nwz8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0", GenerateName:"calico-kube-controllers-6bb584c8c5-", Namespace:"calico-system", SelfLink:"", UID:"7a177557-980a-4069-9ba1-1de68d33d2df", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb584c8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98", Pod:"calico-kube-controllers-6bb584c8c5-4nwz8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali856b6608dc8", MAC:"42:f8:50:7d:7b:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:48.765363 containerd[1465]: 2025-09-13 00:11:48.760 [INFO][4255] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98" Namespace="calico-system" Pod="calico-kube-controllers-6bb584c8c5-4nwz8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:11:48.804048 containerd[1465]: time="2025-09-13T00:11:48.803876514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:48.804048 containerd[1465]: time="2025-09-13T00:11:48.803992496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:48.804048 containerd[1465]: time="2025-09-13T00:11:48.804011734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:48.804258 containerd[1465]: time="2025-09-13T00:11:48.804158186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:48.827127 systemd[1]: Started cri-containerd-346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98.scope - libcontainer container 346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98. 
Sep 13 00:11:48.843115 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:11:48.869128 containerd[1465]: time="2025-09-13T00:11:48.869083731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bb584c8c5-4nwz8,Uid:7a177557-980a-4069-9ba1-1de68d33d2df,Namespace:calico-system,Attempt:1,} returns sandbox id \"346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98\"" Sep 13 00:11:49.318042 systemd-networkd[1409]: cali5917225f54c: Gained IPv6LL Sep 13 00:11:49.501022 containerd[1465]: time="2025-09-13T00:11:49.500931168Z" level=info msg="StopPodSandbox for \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\"" Sep 13 00:11:49.596295 containerd[1465]: 2025-09-13 00:11:49.557 [INFO][4339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Sep 13 00:11:49.596295 containerd[1465]: 2025-09-13 00:11:49.557 [INFO][4339] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" iface="eth0" netns="/var/run/netns/cni-a2d385a2-ed58-0350-60b9-0cd219372449" Sep 13 00:11:49.596295 containerd[1465]: 2025-09-13 00:11:49.558 [INFO][4339] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" iface="eth0" netns="/var/run/netns/cni-a2d385a2-ed58-0350-60b9-0cd219372449" Sep 13 00:11:49.596295 containerd[1465]: 2025-09-13 00:11:49.558 [INFO][4339] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" iface="eth0" netns="/var/run/netns/cni-a2d385a2-ed58-0350-60b9-0cd219372449" Sep 13 00:11:49.596295 containerd[1465]: 2025-09-13 00:11:49.558 [INFO][4339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Sep 13 00:11:49.596295 containerd[1465]: 2025-09-13 00:11:49.558 [INFO][4339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Sep 13 00:11:49.596295 containerd[1465]: 2025-09-13 00:11:49.581 [INFO][4349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" HandleID="k8s-pod-network.0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:11:49.596295 containerd[1465]: 2025-09-13 00:11:49.581 [INFO][4349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:49.596295 containerd[1465]: 2025-09-13 00:11:49.581 [INFO][4349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:49.596295 containerd[1465]: 2025-09-13 00:11:49.587 [WARNING][4349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" HandleID="k8s-pod-network.0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:11:49.596295 containerd[1465]: 2025-09-13 00:11:49.587 [INFO][4349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" HandleID="k8s-pod-network.0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:11:49.596295 containerd[1465]: 2025-09-13 00:11:49.589 [INFO][4349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:49.596295 containerd[1465]: 2025-09-13 00:11:49.592 [INFO][4339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Sep 13 00:11:49.598983 containerd[1465]: time="2025-09-13T00:11:49.598916473Z" level=info msg="TearDown network for sandbox \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\" successfully" Sep 13 00:11:49.598983 containerd[1465]: time="2025-09-13T00:11:49.598969028Z" level=info msg="StopPodSandbox for \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\" returns successfully" Sep 13 00:11:49.599285 systemd[1]: run-netns-cni\x2da2d385a2\x2ded58\x2d0350\x2d60b9\x2d0cd219372449.mount: Deactivated successfully. Sep 13 00:11:49.599583 containerd[1465]: time="2025-09-13T00:11:49.599561117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f9bc44bc-gqfmq,Uid:18e54af7-8bec-40b9-9191-5e12c28dbbdd,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:11:49.829069 systemd-networkd[1409]: cali87d9a0fe7d3: Gained IPv6LL Sep 13 00:11:50.149000 systemd-networkd[1409]: vxlan.calico: Gained IPv6LL Sep 13 00:11:50.341071 systemd-networkd[1409]: cali856b6608dc8: Gained IPv6LL Sep 13 00:11:50.478239 systemd[1]: Started sshd@10-10.0.0.108:22-10.0.0.1:37810.service - OpenSSH per-connection server daemon (10.0.0.1:37810). Sep 13 00:11:50.534352 sshd[4358]: Accepted publickey for core from 10.0.0.1 port 37810 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:11:50.536435 sshd[4358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:50.541850 systemd-logind[1450]: New session 11 of user core. Sep 13 00:11:50.547004 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 13 00:11:50.882612 systemd-networkd[1409]: cali331ed5d21b3: Link UP Sep 13 00:11:50.883698 systemd-networkd[1409]: cali331ed5d21b3: Gained carrier Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.534 [INFO][4365] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0 calico-apiserver-58f9bc44bc- calico-apiserver 18e54af7-8bec-40b9-9191-5e12c28dbbdd 986 0 2025-09-13 00:11:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58f9bc44bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-58f9bc44bc-gqfmq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali331ed5d21b3 [] [] }} ContainerID="53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-gqfmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-" Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.534 [INFO][4365] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-gqfmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.576 [INFO][4375] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" HandleID="k8s-pod-network.53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.576 [INFO][4375] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" HandleID="k8s-pod-network.53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bee80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-58f9bc44bc-gqfmq", "timestamp":"2025-09-13 00:11:50.576345688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.576 [INFO][4375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.576 [INFO][4375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.576 [INFO][4375] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.582 [INFO][4375] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" host="localhost" Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.587 [INFO][4375] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.590 [INFO][4375] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.592 [INFO][4375] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.596 [INFO][4375] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.596 [INFO][4375] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" host="localhost" Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.778 [INFO][4375] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623 Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.819 [INFO][4375] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" host="localhost" Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.876 [INFO][4375] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" host="localhost" Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.876 [INFO][4375] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" host="localhost" Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.876 [INFO][4375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
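(Aside: the "About to acquire / Acquired / Released host-wide IPAM lock" triplets repeated above show every IPAM assign and release on the node being serialized through a single lock. A toy rendering of that discipline; withIPAMLock is a hypothetical helper, and Calico's real lock also spans its datastore reads and writes.)

```go
package main

import (
	"fmt"
	"sync"
)

var hostWideIPAM sync.Mutex

// withIPAMLock reproduces the log's acquire/release bracketing around an
// IPAM operation, so concurrent CNI ADD/DEL calls never race on the blocks.
func withIPAMLock(f func()) {
	fmt.Println("About to acquire host-wide IPAM lock.")
	hostWideIPAM.Lock()
	fmt.Println("Acquired host-wide IPAM lock.")
	f()
	hostWideIPAM.Unlock()
	fmt.Println("Released host-wide IPAM lock.")
}

func main() {
	withIPAMLock(func() {
		fmt.Println("Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'")
	})
}
```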
Sep 13 00:11:50.959715 containerd[1465]: 2025-09-13 00:11:50.876 [INFO][4375] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" HandleID="k8s-pod-network.53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:11:50.960649 containerd[1465]: 2025-09-13 00:11:50.880 [INFO][4365] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-gqfmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0", GenerateName:"calico-apiserver-58f9bc44bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"18e54af7-8bec-40b9-9191-5e12c28dbbdd", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f9bc44bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-58f9bc44bc-gqfmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali331ed5d21b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:50.960649 containerd[1465]: 2025-09-13 00:11:50.880 [INFO][4365] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-gqfmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:11:50.960649 containerd[1465]: 2025-09-13 00:11:50.880 [INFO][4365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali331ed5d21b3 ContainerID="53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-gqfmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:11:50.960649 containerd[1465]: 2025-09-13 00:11:50.883 [INFO][4365] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-gqfmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:11:50.960649 containerd[1465]: 2025-09-13 00:11:50.884 [INFO][4365] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-gqfmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0", GenerateName:"calico-apiserver-58f9bc44bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"18e54af7-8bec-40b9-9191-5e12c28dbbdd", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f9bc44bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623", Pod:"calico-apiserver-58f9bc44bc-gqfmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali331ed5d21b3", MAC:"de:4b:a1:ef:73:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:50.960649 containerd[1465]: 2025-09-13 00:11:50.955 [INFO][4365] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623" Namespace="calico-apiserver" Pod="calico-apiserver-58f9bc44bc-gqfmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:11:51.466175 sshd[4358]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:51.470009 systemd[1]: sshd@10-10.0.0.108:22-10.0.0.1:37810.service: Deactivated successfully. Sep 13 00:11:51.473409 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:11:51.476469 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:11:51.478254 systemd-logind[1450]: Removed session 11. Sep 13 00:11:51.501442 containerd[1465]: time="2025-09-13T00:11:51.501023617Z" level=info msg="StopPodSandbox for \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\"" Sep 13 00:11:51.501731 containerd[1465]: time="2025-09-13T00:11:51.501708180Z" level=info msg="StopPodSandbox for \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\"" Sep 13 00:11:51.501943 containerd[1465]: time="2025-09-13T00:11:51.501913458Z" level=info msg="StopPodSandbox for \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\"" Sep 13 00:11:51.502395 containerd[1465]: time="2025-09-13T00:11:51.501767828Z" level=info msg="StopPodSandbox for \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\"" Sep 13 00:11:51.606760 containerd[1465]: time="2025-09-13T00:11:51.606581715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:51.606760 containerd[1465]: time="2025-09-13T00:11:51.606727143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:51.606760 containerd[1465]: time="2025-09-13T00:11:51.606749939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:51.607032 containerd[1465]: time="2025-09-13T00:11:51.606963685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:51.636985 systemd[1]: Started cri-containerd-53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623.scope - libcontainer container 53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623. Sep 13 00:11:51.652192 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:11:51.693830 containerd[1465]: time="2025-09-13T00:11:51.691569935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f9bc44bc-gqfmq,Uid:18e54af7-8bec-40b9-9191-5e12c28dbbdd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623\"" Sep 13 00:11:51.867585 containerd[1465]: 2025-09-13 00:11:51.689 [INFO][4458] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Sep 13 00:11:51.867585 containerd[1465]: 2025-09-13 00:11:51.690 [INFO][4458] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" iface="eth0" netns="/var/run/netns/cni-ee0e728f-7b38-5a5f-71e7-e714fa65d470" Sep 13 00:11:51.867585 containerd[1465]: 2025-09-13 00:11:51.690 [INFO][4458] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" iface="eth0" netns="/var/run/netns/cni-ee0e728f-7b38-5a5f-71e7-e714fa65d470" Sep 13 00:11:51.867585 containerd[1465]: 2025-09-13 00:11:51.690 [INFO][4458] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" iface="eth0" netns="/var/run/netns/cni-ee0e728f-7b38-5a5f-71e7-e714fa65d470" Sep 13 00:11:51.867585 containerd[1465]: 2025-09-13 00:11:51.690 [INFO][4458] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Sep 13 00:11:51.867585 containerd[1465]: 2025-09-13 00:11:51.690 [INFO][4458] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Sep 13 00:11:51.867585 containerd[1465]: 2025-09-13 00:11:51.721 [INFO][4526] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" HandleID="k8s-pod-network.bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Workload="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:11:51.867585 containerd[1465]: 2025-09-13 00:11:51.722 [INFO][4526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:11:51.867585 containerd[1465]: 2025-09-13 00:11:51.722 [INFO][4526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:51.867585 containerd[1465]: 2025-09-13 00:11:51.828 [WARNING][4526] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" HandleID="k8s-pod-network.bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Workload="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:11:51.867585 containerd[1465]: 2025-09-13 00:11:51.828 [INFO][4526] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" HandleID="k8s-pod-network.bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Workload="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:11:51.867585 containerd[1465]: 2025-09-13 00:11:51.850 [INFO][4526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:51.867585 containerd[1465]: 2025-09-13 00:11:51.854 [INFO][4458] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Sep 13 00:11:51.875839 systemd[1]: run-netns-cni\x2dee0e728f\x2d7b38\x2d5a5f\x2d71e7\x2de714fa65d470.mount: Deactivated successfully. Sep 13 00:11:51.880050 containerd[1465]: time="2025-09-13T00:11:51.879911614Z" level=info msg="TearDown network for sandbox \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\" successfully" Sep 13 00:11:51.880050 containerd[1465]: time="2025-09-13T00:11:51.879968316Z" level=info msg="StopPodSandbox for \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\" returns successfully" Sep 13 00:11:51.880530 kubelet[2556]: E0913 00:11:51.880478 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:51.882138 containerd[1465]: time="2025-09-13T00:11:51.882088045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t2khr,Uid:efb62d5d-3a33-4337-b3ca-e67aed5932c5,Namespace:kube-system,Attempt:1,}" Sep 13 00:11:51.893513 containerd[1465]: 2025-09-13 00:11:51.697 [INFO][4445] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Sep 13 00:11:51.893513 containerd[1465]: 2025-09-13 00:11:51.697 [INFO][4445] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" iface="eth0" netns="/var/run/netns/cni-fa5b602f-746c-d33e-fcae-94bf9a49898d" Sep 13 00:11:51.893513 containerd[1465]: 2025-09-13 00:11:51.698 [INFO][4445] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" iface="eth0" netns="/var/run/netns/cni-fa5b602f-746c-d33e-fcae-94bf9a49898d" Sep 13 00:11:51.893513 containerd[1465]: 2025-09-13 00:11:51.699 [INFO][4445] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
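(Aside: the WARNING in the teardown above, "Asked to release address but it doesn't exist. Ignoring", shows the release path tolerating an already-released handle, which keeps a repeated CNI DEL for the same pod idempotent. A toy sketch of only that ignore-on-missing behavior; ipamStore and its Release method are hypothetical stand-ins for the IPAM datastore.)

```go
package main

import "fmt"

// ipamStore maps handle IDs to the addresses claimed under them.
type ipamStore struct{ handles map[string][]string }

// Release logs and ignores a missing handle instead of returning an error,
// so a second DEL for a pod whose address is already gone still succeeds.
func (s *ipamStore) Release(handleID string) {
	if _, ok := s.handles[handleID]; !ok {
		fmt.Printf("WARNING: Asked to release address but it doesn't exist. Ignoring HandleID=%q\n", handleID)
		return
	}
	delete(s.handles, handleID)
	fmt.Println("released addresses for", handleID)
}

func main() {
	s := &ipamStore{handles: map[string][]string{}}
	// Handle already cleaned up, as in the teardown logged above.
	s.Release("k8s-pod-network.bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182")
}
```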
ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" iface="eth0" netns="/var/run/netns/cni-fa5b602f-746c-d33e-fcae-94bf9a49898d" Sep 13 00:11:51.893513 containerd[1465]: 2025-09-13 00:11:51.699 [INFO][4445] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Sep 13 00:11:51.893513 containerd[1465]: 2025-09-13 00:11:51.699 [INFO][4445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Sep 13 00:11:51.893513 containerd[1465]: 2025-09-13 00:11:51.723 [INFO][4533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" HandleID="k8s-pod-network.0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Workload="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:11:51.893513 containerd[1465]: 2025-09-13 00:11:51.723 [INFO][4533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:51.893513 containerd[1465]: 2025-09-13 00:11:51.852 [INFO][4533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:51.893513 containerd[1465]: 2025-09-13 00:11:51.878 [WARNING][4533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" HandleID="k8s-pod-network.0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Workload="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:11:51.893513 containerd[1465]: 2025-09-13 00:11:51.878 [INFO][4533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" HandleID="k8s-pod-network.0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Workload="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:11:51.893513 containerd[1465]: 2025-09-13 00:11:51.882 [INFO][4533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:51.893513 containerd[1465]: 2025-09-13 00:11:51.885 [INFO][4445] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Sep 13 00:11:51.894042 containerd[1465]: time="2025-09-13T00:11:51.894010222Z" level=info msg="TearDown network for sandbox \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\" successfully" Sep 13 00:11:51.894102 containerd[1465]: time="2025-09-13T00:11:51.894088628Z" level=info msg="StopPodSandbox for \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\" returns successfully" Sep 13 00:11:51.894647 kubelet[2556]: E0913 00:11:51.894625 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:51.895677 containerd[1465]: time="2025-09-13T00:11:51.895628734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8ddht,Uid:572e4883-43be-47f8-9d71-340af499cdf4,Namespace:kube-system,Attempt:1,}" Sep 13 00:11:51.897332 systemd[1]: run-netns-cni\x2dfa5b602f\x2d746c\x2dd33e\x2dfcae\x2d94bf9a49898d.mount: Deactivated successfully. 
Sep 13 00:11:52.272635 containerd[1465]: 2025-09-13 00:11:51.851 [INFO][4441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Sep 13 00:11:52.272635 containerd[1465]: 2025-09-13 00:11:51.851 [INFO][4441] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" iface="eth0" netns="/var/run/netns/cni-ae595c0c-acd7-96dd-e328-e47ef0ef52a0" Sep 13 00:11:52.272635 containerd[1465]: 2025-09-13 00:11:51.852 [INFO][4441] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" iface="eth0" netns="/var/run/netns/cni-ae595c0c-acd7-96dd-e328-e47ef0ef52a0" Sep 13 00:11:52.272635 containerd[1465]: 2025-09-13 00:11:51.852 [INFO][4441] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" iface="eth0" netns="/var/run/netns/cni-ae595c0c-acd7-96dd-e328-e47ef0ef52a0" Sep 13 00:11:52.272635 containerd[1465]: 2025-09-13 00:11:51.852 [INFO][4441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Sep 13 00:11:52.272635 containerd[1465]: 2025-09-13 00:11:51.852 [INFO][4441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Sep 13 00:11:52.272635 containerd[1465]: 2025-09-13 00:11:51.920 [INFO][4545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" HandleID="k8s-pod-network.4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Workload="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:11:52.272635 containerd[1465]: 2025-09-13 00:11:51.920 [INFO][4545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:52.272635 containerd[1465]: 2025-09-13 00:11:51.921 [INFO][4545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:52.272635 containerd[1465]: 2025-09-13 00:11:51.981 [WARNING][4545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" HandleID="k8s-pod-network.4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Workload="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:11:52.272635 containerd[1465]: 2025-09-13 00:11:51.981 [INFO][4545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" HandleID="k8s-pod-network.4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Workload="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:11:52.272635 containerd[1465]: 2025-09-13 00:11:52.265 [INFO][4545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:52.272635 containerd[1465]: 2025-09-13 00:11:52.269 [INFO][4441] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Sep 13 00:11:52.273870 containerd[1465]: time="2025-09-13T00:11:52.272898050Z" level=info msg="TearDown network for sandbox \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\" successfully" Sep 13 00:11:52.273870 containerd[1465]: time="2025-09-13T00:11:52.272936085Z" level=info msg="StopPodSandbox for \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\" returns successfully" Sep 13 00:11:52.274343 containerd[1465]: time="2025-09-13T00:11:52.274278336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6x2wp,Uid:7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc,Namespace:calico-system,Attempt:1,}" Sep 13 00:11:52.275996 systemd[1]: run-netns-cni\x2dae595c0c\x2dacd7\x2d96dd\x2de328\x2de47ef0ef52a0.mount: Deactivated successfully. Sep 13 00:11:52.389039 systemd-networkd[1409]: cali331ed5d21b3: Gained IPv6LL Sep 13 00:11:52.442764 containerd[1465]: 2025-09-13 00:11:51.858 [INFO][4443] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Sep 13 00:11:52.442764 containerd[1465]: 2025-09-13 00:11:51.863 [INFO][4443] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" iface="eth0" netns="/var/run/netns/cni-7572de67-a4ab-cb8a-ba83-36a969ca4c9e" Sep 13 00:11:52.442764 containerd[1465]: 2025-09-13 00:11:51.863 [INFO][4443] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" iface="eth0" netns="/var/run/netns/cni-7572de67-a4ab-cb8a-ba83-36a969ca4c9e" Sep 13 00:11:52.442764 containerd[1465]: 2025-09-13 00:11:51.863 [INFO][4443] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" iface="eth0" netns="/var/run/netns/cni-7572de67-a4ab-cb8a-ba83-36a969ca4c9e" Sep 13 00:11:52.442764 containerd[1465]: 2025-09-13 00:11:51.863 [INFO][4443] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Sep 13 00:11:52.442764 containerd[1465]: 2025-09-13 00:11:51.863 [INFO][4443] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Sep 13 00:11:52.442764 containerd[1465]: 2025-09-13 00:11:51.920 [INFO][4552] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" HandleID="k8s-pod-network.b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Workload="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:11:52.442764 containerd[1465]: 2025-09-13 00:11:51.921 [INFO][4552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:52.442764 containerd[1465]: 2025-09-13 00:11:52.265 [INFO][4552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:52.442764 containerd[1465]: 2025-09-13 00:11:52.433 [WARNING][4552] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" HandleID="k8s-pod-network.b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Workload="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:11:52.442764 containerd[1465]: 2025-09-13 00:11:52.433 [INFO][4552] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" HandleID="k8s-pod-network.b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Workload="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:11:52.442764 containerd[1465]: 2025-09-13 00:11:52.435 [INFO][4552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:52.442764 containerd[1465]: 2025-09-13 00:11:52.438 [INFO][4443] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Sep 13 00:11:52.443535 containerd[1465]: time="2025-09-13T00:11:52.443130022Z" level=info msg="TearDown network for sandbox \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\" successfully" Sep 13 00:11:52.443535 containerd[1465]: time="2025-09-13T00:11:52.443178328Z" level=info msg="StopPodSandbox for \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\" returns successfully" Sep 13 00:11:52.444253 containerd[1465]: time="2025-09-13T00:11:52.444198648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6249r,Uid:093a73a0-183b-402a-9ff9-9d907062e092,Namespace:calico-system,Attempt:1,}" Sep 13 00:11:52.613975 systemd[1]: run-netns-cni\x2d7572de67\x2da4ab\x2dcb8a\x2dba83\x2d36a969ca4c9e.mount: Deactivated successfully. Sep 13 00:11:55.041811 systemd-networkd[1409]: cali198bd32a56b: Link UP Sep 13 00:11:55.044121 systemd-networkd[1409]: cali198bd32a56b: Gained carrier Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.587 [INFO][4571] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0 coredns-7c65d6cfc9- kube-system efb62d5d-3a33-4337-b3ca-e67aed5932c5 999 0 2025-09-13 00:11:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-t2khr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali198bd32a56b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t2khr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t2khr-" Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.587 [INFO][4571] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t2khr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.613 [INFO][4585] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" HandleID="k8s-pod-network.fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" Workload="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.613 
[INFO][4585] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" HandleID="k8s-pod-network.fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" Workload="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e620), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-t2khr", "timestamp":"2025-09-13 00:11:54.613411552 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.613 [INFO][4585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.613 [INFO][4585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.613 [INFO][4585] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.689 [INFO][4585] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" host="localhost" Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.697 [INFO][4585] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.703 [INFO][4585] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.705 [INFO][4585] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.707 [INFO][4585] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.707 [INFO][4585] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" host="localhost" Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.709 [INFO][4585] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0 Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:54.936 [INFO][4585] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" host="localhost" Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:55.034 [INFO][4585] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" host="localhost" Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:55.034 [INFO][4585] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" host="localhost" Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:55.034 [INFO][4585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:55.213064 containerd[1465]: 2025-09-13 00:11:55.034 [INFO][4585] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" HandleID="k8s-pod-network.fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" Workload="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:11:55.214036 containerd[1465]: 2025-09-13 00:11:55.037 [INFO][4571] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t2khr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"efb62d5d-3a33-4337-b3ca-e67aed5932c5", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-t2khr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali198bd32a56b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:55.214036 containerd[1465]: 2025-09-13 00:11:55.038 [INFO][4571] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t2khr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:11:55.214036 containerd[1465]: 2025-09-13 00:11:55.038 [INFO][4571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali198bd32a56b ContainerID="fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t2khr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:11:55.214036 containerd[1465]: 2025-09-13 00:11:55.044 [INFO][4571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t2khr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:11:55.214036 
containerd[1465]: 2025-09-13 00:11:55.045 [INFO][4571] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t2khr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"efb62d5d-3a33-4337-b3ca-e67aed5932c5", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0", Pod:"coredns-7c65d6cfc9-t2khr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali198bd32a56b", MAC:"22:cf:60:6b:5c:66", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:55.214036 containerd[1465]: 2025-09-13 00:11:55.207 [INFO][4571] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t2khr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:11:55.538595 containerd[1465]: time="2025-09-13T00:11:55.538453340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:55.538595 containerd[1465]: time="2025-09-13T00:11:55.538539968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:55.538595 containerd[1465]: time="2025-09-13T00:11:55.538553883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:55.539122 containerd[1465]: time="2025-09-13T00:11:55.539001316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:55.565058 systemd[1]: Started cri-containerd-fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0.scope - libcontainer container fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0. Sep 13 00:11:55.579391 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:11:55.603293 containerd[1465]: time="2025-09-13T00:11:55.603206525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:55.605529 containerd[1465]: time="2025-09-13T00:11:55.605505362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t2khr,Uid:efb62d5d-3a33-4337-b3ca-e67aed5932c5,Namespace:kube-system,Attempt:1,} returns sandbox id \"fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0\"" Sep 13 00:11:55.606309 kubelet[2556]: E0913 00:11:55.606273 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:55.608159 containerd[1465]: time="2025-09-13T00:11:55.608125201Z" level=info msg="CreateContainer within sandbox \"fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:11:55.626964 containerd[1465]: time="2025-09-13T00:11:55.626929896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:11:55.692184 systemd-networkd[1409]: calid865690000d: Link UP Sep 13 00:11:55.692382 systemd-networkd[1409]: calid865690000d: Gained carrier Sep 13 00:11:55.750165 containerd[1465]: time="2025-09-13T00:11:55.750061894Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:55.836774 systemd[1]: Started sshd@11-10.0.0.108:22-10.0.0.1:37812.service - OpenSSH per-connection server daemon (10.0.0.1:37812). Sep 13 00:11:55.881581 sshd[4718]: Accepted publickey for core from 10.0.0.1 port 37812 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:11:55.883392 sshd[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:55.887492 systemd-logind[1450]: New session 12 of user core. Sep 13 00:11:55.896951 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 13 00:11:55.984737 containerd[1465]: time="2025-09-13T00:11:55.984652702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:55.986119 containerd[1465]: time="2025-09-13T00:11:55.985832356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 7.941481737s" Sep 13 00:11:55.986119 containerd[1465]: time="2025-09-13T00:11:55.986099201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.209 [INFO][4593] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--6x2wp-eth0 goldmane-7988f88666- calico-system 7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc 1006 0 2025-09-13 00:11:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-6x2wp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid865690000d [] [] }} ContainerID="8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" Namespace="calico-system" Pod="goldmane-7988f88666-6x2wp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6x2wp-" Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.209 [INFO][4593] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" Namespace="calico-system" Pod="goldmane-7988f88666-6x2wp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.245 [INFO][4620] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" HandleID="k8s-pod-network.8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" Workload="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.245 [INFO][4620] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" HandleID="k8s-pod-network.8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" Workload="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001182d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-6x2wp", "timestamp":"2025-09-13 00:11:55.245463421 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.245 [INFO][4620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.245 [INFO][4620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.245 [INFO][4620] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.486 [INFO][4620] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" host="localhost" Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.522 [INFO][4620] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.531 [INFO][4620] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.534 [INFO][4620] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.537 [INFO][4620] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.537 [INFO][4620] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" host="localhost" Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.539 [INFO][4620] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.563 [INFO][4620] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" host="localhost" Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.686 [INFO][4620] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" host="localhost" Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.686 [INFO][4620] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" host="localhost" Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.686 [INFO][4620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
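The ipam.go lines above trace Calico's block-affinity assignment for the goldmane pod: acquire the host-wide IPAM lock, confirm the host's affinity for the 192.168.88.128/26 block, load the block, claim one free address under a per-request handle by writing the block back, then release the lock. The sketch below is an illustrative reconstruction of that flow under stated assumptions, not Calico's actual API: every type and function name here is invented, and it omits the datastore writes, compare-and-swap retries, and reserved-address handling the real allocator performs.

package main

import (
	"fmt"
	"net"
	"sync"
)

// block models one /26 allocation block; real Calico blocks also carry
// an allocation array, attributes, and a revision for CAS updates.
type block struct {
	cidr net.IPNet
	used map[string]string // IP -> allocation handle
}

var (
	ipamLock sync.Mutex            // stands in for the host-wide IPAM lock
	affine   = map[string]*block{} // host -> block it holds an affinity for
)

func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

// autoAssign mirrors the logged sequence: lock, confirm affinity,
// claim a free IP by recording it against the request's handle.
func autoAssign(host, handle string) (net.IP, error) {
	ipamLock.Lock()         // "Acquired host-wide IPAM lock."
	defer ipamLock.Unlock() // "Released host-wide IPAM lock."

	b, ok := affine[host] // "Trying affinity for 192.168.88.128/26"
	if !ok {
		return nil, fmt.Errorf("no affine block for host %q", host)
	}
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = nextIP(ip) {
		if _, taken := b.used[ip.String()]; !taken {
			b.used[ip.String()] = handle // "Writing block in order to claim IPs"
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s is exhausted", b.cidr.String())
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	affine["localhost"] = &block{cidr: *cidr, used: map[string]string{}}
	ip, err := autoAssign("localhost", "demo-handle") // handle name is hypothetical
	fmt.Println(ip, err)
}

The lock-then-claim shape explains why the later IPAM requests in this log (for coredns-7c65d6cfc9-8ddht and csi-node-driver-6249r) each report "About to acquire host-wide IPAM lock" and only proceed once the previous assignment has released it.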
Sep 13 00:11:55.988326 containerd[1465]: 2025-09-13 00:11:55.686 [INFO][4620] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" HandleID="k8s-pod-network.8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" Workload="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:11:55.990722 containerd[1465]: 2025-09-13 00:11:55.689 [INFO][4593] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" Namespace="calico-system" Pod="goldmane-7988f88666-6x2wp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--6x2wp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-6x2wp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid865690000d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:55.990722 containerd[1465]: 2025-09-13 00:11:55.689 [INFO][4593] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" Namespace="calico-system" Pod="goldmane-7988f88666-6x2wp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:11:55.990722 containerd[1465]: 2025-09-13 00:11:55.689 [INFO][4593] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid865690000d ContainerID="8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" Namespace="calico-system" Pod="goldmane-7988f88666-6x2wp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:11:55.990722 containerd[1465]: 2025-09-13 00:11:55.691 [INFO][4593] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" Namespace="calico-system" Pod="goldmane-7988f88666-6x2wp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:11:55.990722 containerd[1465]: 2025-09-13 00:11:55.692 [INFO][4593] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" Namespace="calico-system" Pod="goldmane-7988f88666-6x2wp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--6x2wp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c", Pod:"goldmane-7988f88666-6x2wp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid865690000d", MAC:"c6:58:d8:06:78:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:55.990722 containerd[1465]: 2025-09-13 00:11:55.980 [INFO][4593] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c" Namespace="calico-system" Pod="goldmane-7988f88666-6x2wp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:11:55.994439 containerd[1465]: time="2025-09-13T00:11:55.993516546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:11:55.998523 containerd[1465]: time="2025-09-13T00:11:55.998483841Z" level=info msg="CreateContainer within sandbox \"c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:11:56.100984 systemd-networkd[1409]: cali198bd32a56b: Gained IPv6LL Sep 13 00:11:56.538018 containerd[1465]: time="2025-09-13T00:11:56.537917073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:56.538018 containerd[1465]: time="2025-09-13T00:11:56.537978966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:56.538018 containerd[1465]: time="2025-09-13T00:11:56.537989845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:56.538540 containerd[1465]: time="2025-09-13T00:11:56.538082725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:56.559986 systemd[1]: Started cri-containerd-8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c.scope - libcontainer container 8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c. 
Sep 13 00:11:56.573230 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:11:56.598706 containerd[1465]: time="2025-09-13T00:11:56.598658665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6x2wp,Uid:7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc,Namespace:calico-system,Attempt:1,} returns sandbox id \"8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c\"" Sep 13 00:11:57.075139 sshd[4718]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:57.079141 systemd[1]: sshd@11-10.0.0.108:22-10.0.0.1:37812.service: Deactivated successfully. Sep 13 00:11:57.081520 systemd-networkd[1409]: cali397b6c22b0b: Link UP Sep 13 00:11:57.082468 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:11:57.083156 systemd-networkd[1409]: cali397b6c22b0b: Gained carrier Sep 13 00:11:57.084591 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:11:57.085861 systemd-logind[1450]: Removed session 12. Sep 13 00:11:57.281317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3953525826.mount: Deactivated successfully. Sep 13 00:11:57.284893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2709110739.mount: Deactivated successfully. Sep 13 00:11:57.386932 systemd-networkd[1409]: calid865690000d: Gained IPv6LL Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:55.489 [INFO][4629] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0 coredns-7c65d6cfc9- kube-system 572e4883-43be-47f8-9d71-340af499cdf4 1001 0 2025-09-13 00:11:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-8ddht eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali397b6c22b0b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8ddht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8ddht-" Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:55.490 [INFO][4629] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8ddht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:55.519 [INFO][4661] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" HandleID="k8s-pod-network.f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" Workload="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:55.519 [INFO][4661] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" HandleID="k8s-pod-network.f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" Workload="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002de350), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-8ddht", 
"timestamp":"2025-09-13 00:11:55.519107813 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:55.519 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:55.686 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:55.686 [INFO][4661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:55.980 [INFO][4661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" host="localhost" Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:55.995 [INFO][4661] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:56.661 [INFO][4661] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:56.663 [INFO][4661] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:56.665 [INFO][4661] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:56.666 [INFO][4661] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" host="localhost" Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:56.667 [INFO][4661] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:56.694 [INFO][4661] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" host="localhost" Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:57.074 [INFO][4661] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" host="localhost" Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:57.074 [INFO][4661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" host="localhost" Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:57.074 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:57.403475 containerd[1465]: 2025-09-13 00:11:57.074 [INFO][4661] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" HandleID="k8s-pod-network.f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" Workload="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:11:57.404329 containerd[1465]: 2025-09-13 00:11:57.078 [INFO][4629] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8ddht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"572e4883-43be-47f8-9d71-340af499cdf4", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-8ddht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali397b6c22b0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:57.404329 containerd[1465]: 2025-09-13 00:11:57.078 [INFO][4629] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8ddht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:11:57.404329 containerd[1465]: 2025-09-13 00:11:57.078 [INFO][4629] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali397b6c22b0b ContainerID="f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8ddht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:11:57.404329 containerd[1465]: 2025-09-13 00:11:57.081 [INFO][4629] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8ddht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:11:57.404329 
containerd[1465]: 2025-09-13 00:11:57.082 [INFO][4629] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8ddht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"572e4883-43be-47f8-9d71-340af499cdf4", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa", Pod:"coredns-7c65d6cfc9-8ddht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali397b6c22b0b", MAC:"76:91:c1:a8:26:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:57.404329 containerd[1465]: 2025-09-13 00:11:57.392 [INFO][4629] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8ddht" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:11:57.778364 systemd-networkd[1409]: cali1c93443476f: Link UP Sep 13 00:11:57.780283 systemd-networkd[1409]: cali1c93443476f: Gained carrier Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:55.489 [INFO][4644] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6249r-eth0 csi-node-driver- calico-system 093a73a0-183b-402a-9ff9-9d907062e092 1007 0 2025-09-13 00:11:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6249r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1c93443476f [] [] }} ContainerID="f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" Namespace="calico-system" 
Pod="csi-node-driver-6249r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6249r-" Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:55.489 [INFO][4644] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" Namespace="calico-system" Pod="csi-node-driver-6249r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:55.519 [INFO][4659] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" HandleID="k8s-pod-network.f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" Workload="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:55.519 [INFO][4659] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" HandleID="k8s-pod-network.f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" Workload="localhost-k8s-csi--node--driver--6249r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6249r", "timestamp":"2025-09-13 00:11:55.519457207 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:55.519 [INFO][4659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:57.075 [INFO][4659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:57.075 [INFO][4659] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:57.395 [INFO][4659] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" host="localhost" Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:57.406 [INFO][4659] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:57.414 [INFO][4659] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:57.417 [INFO][4659] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:57.420 [INFO][4659] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:57.420 [INFO][4659] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" host="localhost" Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:57.423 [INFO][4659] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9 Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:57.710 [INFO][4659] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" host="localhost" Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:57.772 [INFO][4659] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" host="localhost" Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:57.772 [INFO][4659] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" host="localhost" Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:57.772 [INFO][4659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:58.024894 containerd[1465]: 2025-09-13 00:11:57.772 [INFO][4659] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" HandleID="k8s-pod-network.f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" Workload="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:11:58.025948 containerd[1465]: 2025-09-13 00:11:57.775 [INFO][4644] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" Namespace="calico-system" Pod="csi-node-driver-6249r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6249r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6249r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"093a73a0-183b-402a-9ff9-9d907062e092", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6249r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1c93443476f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:58.025948 containerd[1465]: 2025-09-13 00:11:57.775 [INFO][4644] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" Namespace="calico-system" Pod="csi-node-driver-6249r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:11:58.025948 containerd[1465]: 2025-09-13 00:11:57.775 [INFO][4644] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c93443476f ContainerID="f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" Namespace="calico-system" Pod="csi-node-driver-6249r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:11:58.025948 containerd[1465]: 2025-09-13 00:11:57.781 [INFO][4644] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" Namespace="calico-system" Pod="csi-node-driver-6249r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:11:58.025948 containerd[1465]: 2025-09-13 00:11:57.781 [INFO][4644] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" Namespace="calico-system" Pod="csi-node-driver-6249r" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6249r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6249r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"093a73a0-183b-402a-9ff9-9d907062e092", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9", Pod:"csi-node-driver-6249r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1c93443476f", MAC:"16:96:17:b4:14:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:58.025948 containerd[1465]: 2025-09-13 00:11:58.021 [INFO][4644] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9" Namespace="calico-system" Pod="csi-node-driver-6249r" WorkloadEndpoint="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:11:58.213271 containerd[1465]: time="2025-09-13T00:11:58.213139713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:58.213271 containerd[1465]: time="2025-09-13T00:11:58.213213248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:58.213271 containerd[1465]: time="2025-09-13T00:11:58.213230349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:58.213483 containerd[1465]: time="2025-09-13T00:11:58.213325273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:58.238984 systemd[1]: Started cri-containerd-f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa.scope - libcontainer container f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa. 
Sep 13 00:11:58.256575 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:11:58.283913 containerd[1465]: time="2025-09-13T00:11:58.283869477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8ddht,Uid:572e4883-43be-47f8-9d71-340af499cdf4,Namespace:kube-system,Attempt:1,} returns sandbox id \"f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa\"" Sep 13 00:11:58.284893 kubelet[2556]: E0913 00:11:58.284866 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:58.296414 containerd[1465]: time="2025-09-13T00:11:58.296371189Z" level=info msg="CreateContainer within sandbox \"f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:11:58.469005 systemd-networkd[1409]: cali397b6c22b0b: Gained IPv6LL Sep 13 00:11:58.530621 containerd[1465]: time="2025-09-13T00:11:58.529857912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:58.530747 containerd[1465]: time="2025-09-13T00:11:58.530643329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:58.530769 containerd[1465]: time="2025-09-13T00:11:58.530729627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:58.530984 containerd[1465]: time="2025-09-13T00:11:58.530891794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:58.537979 containerd[1465]: time="2025-09-13T00:11:58.537866285Z" level=info msg="CreateContainer within sandbox \"fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7fc8cde294ac80cc897ddebd0d140d162ead473735fde3ad98db8a81c20e8ede\"" Sep 13 00:11:58.539608 containerd[1465]: time="2025-09-13T00:11:58.539549073Z" level=info msg="StartContainer for \"7fc8cde294ac80cc897ddebd0d140d162ead473735fde3ad98db8a81c20e8ede\"" Sep 13 00:11:58.554979 systemd[1]: Started cri-containerd-f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9.scope - libcontainer container f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9. Sep 13 00:11:58.575659 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:11:58.583990 systemd[1]: Started cri-containerd-7fc8cde294ac80cc897ddebd0d140d162ead473735fde3ad98db8a81c20e8ede.scope - libcontainer container 7fc8cde294ac80cc897ddebd0d140d162ead473735fde3ad98db8a81c20e8ede. 
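The recurring kubelet dns.go warnings in this log fire because the node's resolv.conf lists more than three nameservers; the classic glibc resolver honours at most three (MAXNS), so kubelet trims the list and logs the line it actually applied ("1.1.1.1 1.0.0.1 8.8.8.8"). A minimal sketch of that trimming, assuming an already-parsed server list; this is not kubelet's actual code:

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // the resolver's MAXNS limit

// applyNameserverLimit keeps the first three servers, in order, and
// reports whether anything was dropped.
func applyNameserverLimit(servers []string) ([]string, bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// Hypothetical fourth entry standing in for whatever pushed this
	// node's resolv.conf over the limit.
	parsed := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"}
	applied, trimmed := applyNameserverLimit(parsed)
	if trimmed {
		fmt.Printf("Nameserver limits exceeded, applied line: %s\n",
			strings.Join(applied, " "))
	}
}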
Sep 13 00:11:58.601391 containerd[1465]: time="2025-09-13T00:11:58.601348768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6249r,Uid:093a73a0-183b-402a-9ff9-9d907062e092,Namespace:calico-system,Attempt:1,} returns sandbox id \"f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9\"" Sep 13 00:11:58.794996 containerd[1465]: time="2025-09-13T00:11:58.794725919Z" level=info msg="StartContainer for \"7fc8cde294ac80cc897ddebd0d140d162ead473735fde3ad98db8a81c20e8ede\" returns successfully" Sep 13 00:11:58.870409 containerd[1465]: time="2025-09-13T00:11:58.870340399Z" level=info msg="CreateContainer within sandbox \"c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"013a9bdb3e1799d2098de5f39f3c1d0890d5e42616a28fec19499fff2a040486\"" Sep 13 00:11:58.872874 containerd[1465]: time="2025-09-13T00:11:58.871354505Z" level=info msg="StartContainer for \"013a9bdb3e1799d2098de5f39f3c1d0890d5e42616a28fec19499fff2a040486\"" Sep 13 00:11:58.910120 systemd[1]: Started cri-containerd-013a9bdb3e1799d2098de5f39f3c1d0890d5e42616a28fec19499fff2a040486.scope - libcontainer container 013a9bdb3e1799d2098de5f39f3c1d0890d5e42616a28fec19499fff2a040486. Sep 13 00:11:58.924089 containerd[1465]: time="2025-09-13T00:11:58.924009099Z" level=info msg="CreateContainer within sandbox \"f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf2700e8a6c821a993d54b4b261fe8c8ab43a7cd744d32ddf2c1c57fdafa8c91\"" Sep 13 00:11:58.925842 containerd[1465]: time="2025-09-13T00:11:58.925726290Z" level=info msg="StartContainer for \"cf2700e8a6c821a993d54b4b261fe8c8ab43a7cd744d32ddf2c1c57fdafa8c91\"" Sep 13 00:11:58.959044 systemd[1]: Started cri-containerd-cf2700e8a6c821a993d54b4b261fe8c8ab43a7cd744d32ddf2c1c57fdafa8c91.scope - libcontainer container cf2700e8a6c821a993d54b4b261fe8c8ab43a7cd744d32ddf2c1c57fdafa8c91. 
Sep 13 00:11:59.089220 containerd[1465]: time="2025-09-13T00:11:59.089073171Z" level=info msg="StartContainer for \"013a9bdb3e1799d2098de5f39f3c1d0890d5e42616a28fec19499fff2a040486\" returns successfully" Sep 13 00:11:59.089220 containerd[1465]: time="2025-09-13T00:11:59.089192229Z" level=info msg="StartContainer for \"cf2700e8a6c821a993d54b4b261fe8c8ab43a7cd744d32ddf2c1c57fdafa8c91\" returns successfully" Sep 13 00:11:59.334525 kubelet[2556]: E0913 00:11:59.334490 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:59.342932 kubelet[2556]: E0913 00:11:59.340061 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:11:59.547451 kubelet[2556]: I0913 00:11:59.547332 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8ddht" podStartSLOduration=49.547316689 podStartE2EDuration="49.547316689s" podCreationTimestamp="2025-09-13 00:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:59.545975952 +0000 UTC m=+55.135796484" watchObservedRunningTime="2025-09-13 00:11:59.547316689 +0000 UTC m=+55.137137211" Sep 13 00:11:59.748981 systemd-networkd[1409]: cali1c93443476f: Gained IPv6LL Sep 13 00:11:59.838888 kubelet[2556]: I0913 00:11:59.838754 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58f9bc44bc-q854x" podStartSLOduration=32.893883013 podStartE2EDuration="40.838731044s" podCreationTimestamp="2025-09-13 00:11:19 +0000 UTC" firstStartedPulling="2025-09-13 00:11:48.044011054 +0000 UTC m=+43.633831576" lastFinishedPulling="2025-09-13 00:11:55.988859085 +0000 UTC m=+51.578679607" observedRunningTime="2025-09-13 00:11:59.836217134 +0000 UTC m=+55.426037657" watchObservedRunningTime="2025-09-13 00:11:59.838731044 +0000 UTC m=+55.428551566" Sep 13 00:12:00.344471 kubelet[2556]: I0913 00:12:00.344433 2556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:12:00.345178 kubelet[2556]: E0913 00:12:00.344747 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:00.345178 kubelet[2556]: E0913 00:12:00.345109 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:00.572454 kubelet[2556]: I0913 00:12:00.572075 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-t2khr" podStartSLOduration=50.572054609 podStartE2EDuration="50.572054609s" podCreationTimestamp="2025-09-13 00:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:59.924856767 +0000 UTC m=+55.514677299" watchObservedRunningTime="2025-09-13 00:12:00.572054609 +0000 UTC m=+56.161875131" Sep 13 00:12:01.134304 containerd[1465]: time="2025-09-13T00:12:01.134232749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 
13 00:12:01.138996 containerd[1465]: time="2025-09-13T00:12:01.138839100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:12:01.144242 containerd[1465]: time="2025-09-13T00:12:01.144188539Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:01.153119 containerd[1465]: time="2025-09-13T00:12:01.152881494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:01.153945 containerd[1465]: time="2025-09-13T00:12:01.153891063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 5.160320317s" Sep 13 00:12:01.153945 containerd[1465]: time="2025-09-13T00:12:01.153941315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:12:01.155359 containerd[1465]: time="2025-09-13T00:12:01.155308592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:12:01.157291 containerd[1465]: time="2025-09-13T00:12:01.157228737Z" level=info msg="CreateContainer within sandbox \"90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:12:01.238110 containerd[1465]: time="2025-09-13T00:12:01.238017919Z" level=info msg="CreateContainer within sandbox \"90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2fbdfd135ac249ffdae90a3f3fb993f8081814282fc8e7ba8bfd64a7343eec1b\"" Sep 13 00:12:01.239087 containerd[1465]: time="2025-09-13T00:12:01.239031265Z" level=info msg="StartContainer for \"2fbdfd135ac249ffdae90a3f3fb993f8081814282fc8e7ba8bfd64a7343eec1b\"" Sep 13 00:12:01.282398 systemd[1]: run-containerd-runc-k8s.io-2fbdfd135ac249ffdae90a3f3fb993f8081814282fc8e7ba8bfd64a7343eec1b-runc.lZSSnj.mount: Deactivated successfully. Sep 13 00:12:01.295008 systemd[1]: Started cri-containerd-2fbdfd135ac249ffdae90a3f3fb993f8081814282fc8e7ba8bfd64a7343eec1b.scope - libcontainer container 2fbdfd135ac249ffdae90a3f3fb993f8081814282fc8e7ba8bfd64a7343eec1b. 
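The pod_startup_latency_tracker entry above for calico-apiserver-58f9bc44bc-q854x can be cross-checked from the monotonic m=+ offsets it logs: podStartSLOduration is the end-to-end startup time minus the image-pull window.

package main

import "fmt"

func main() {
	e2e := 40.838731044                 // podStartE2EDuration, seconds
	pull := 51.578679607 - 43.633831576 // lastFinishedPulling - firstStartedPulling (m=+ offsets)
	fmt.Printf("podStartSLOduration ≈ %.9fs\n", e2e-pull) // 32.893883013s, as logged
}

The two coredns pods instead log zero-valued pulling timestamps (0001-01-01), so their SLO duration equals the full E2E duration: nothing was pulled because the image was already present.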
Sep 13 00:12:01.349623 kubelet[2556]: E0913 00:12:01.349573 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:01.350220 kubelet[2556]: E0913 00:12:01.350122 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:01.359006 containerd[1465]: time="2025-09-13T00:12:01.358950890Z" level=info msg="StartContainer for \"2fbdfd135ac249ffdae90a3f3fb993f8081814282fc8e7ba8bfd64a7343eec1b\" returns successfully" Sep 13 00:12:02.092420 systemd[1]: Started sshd@12-10.0.0.108:22-10.0.0.1:32836.service - OpenSSH per-connection server daemon (10.0.0.1:32836). Sep 13 00:12:02.140744 sshd[5089]: Accepted publickey for core from 10.0.0.1 port 32836 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:02.142417 sshd[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:02.146429 systemd-logind[1450]: New session 13 of user core. Sep 13 00:12:02.156932 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:12:02.354095 kubelet[2556]: E0913 00:12:02.353938 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:02.729142 sshd[5089]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:02.742569 systemd[1]: sshd@12-10.0.0.108:22-10.0.0.1:32836.service: Deactivated successfully. Sep 13 00:12:02.744742 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:12:02.746338 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:12:02.757306 systemd[1]: Started sshd@13-10.0.0.108:22-10.0.0.1:32852.service - OpenSSH per-connection server daemon (10.0.0.1:32852). Sep 13 00:12:02.758583 systemd-logind[1450]: Removed session 13. Sep 13 00:12:02.787943 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 32852 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:02.789678 sshd[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:02.794171 systemd-logind[1450]: New session 14 of user core. Sep 13 00:12:02.801914 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:12:03.020917 sshd[5104]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:03.034528 systemd[1]: sshd@13-10.0.0.108:22-10.0.0.1:32852.service: Deactivated successfully. Sep 13 00:12:03.039530 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:12:03.043383 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:12:03.057861 systemd[1]: Started sshd@14-10.0.0.108:22-10.0.0.1:32862.service - OpenSSH per-connection server daemon (10.0.0.1:32862). Sep 13 00:12:03.062009 systemd-logind[1450]: Removed session 14. Sep 13 00:12:03.096081 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 32862 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:03.097998 sshd[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:03.105554 systemd-logind[1450]: New session 15 of user core. Sep 13 00:12:03.114152 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 13 00:12:03.585700 sshd[5118]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:03.589455 systemd[1]: sshd@14-10.0.0.108:22-10.0.0.1:32862.service: Deactivated successfully. Sep 13 00:12:03.591525 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:12:03.592210 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:12:03.593152 systemd-logind[1450]: Removed session 15. Sep 13 00:12:04.488255 containerd[1465]: time="2025-09-13T00:12:04.488207012Z" level=info msg="StopPodSandbox for \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\"" Sep 13 00:12:04.579262 containerd[1465]: 2025-09-13 00:12:04.529 [WARNING][5143] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0", GenerateName:"calico-kube-controllers-6bb584c8c5-", Namespace:"calico-system", SelfLink:"", UID:"7a177557-980a-4069-9ba1-1de68d33d2df", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb584c8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98", Pod:"calico-kube-controllers-6bb584c8c5-4nwz8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali856b6608dc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:04.579262 containerd[1465]: 2025-09-13 00:12:04.529 [INFO][5143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Sep 13 00:12:04.579262 containerd[1465]: 2025-09-13 00:12:04.529 [INFO][5143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" iface="eth0" netns="" Sep 13 00:12:04.579262 containerd[1465]: 2025-09-13 00:12:04.529 [INFO][5143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Sep 13 00:12:04.579262 containerd[1465]: 2025-09-13 00:12:04.529 [INFO][5143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Sep 13 00:12:04.579262 containerd[1465]: 2025-09-13 00:12:04.563 [INFO][5154] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" HandleID="k8s-pod-network.870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Workload="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:12:04.579262 containerd[1465]: 2025-09-13 00:12:04.563 [INFO][5154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:04.579262 containerd[1465]: 2025-09-13 00:12:04.564 [INFO][5154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:04.579262 containerd[1465]: 2025-09-13 00:12:04.570 [WARNING][5154] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" HandleID="k8s-pod-network.870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Workload="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:12:04.579262 containerd[1465]: 2025-09-13 00:12:04.570 [INFO][5154] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" HandleID="k8s-pod-network.870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Workload="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:12:04.579262 containerd[1465]: 2025-09-13 00:12:04.572 [INFO][5154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:04.579262 containerd[1465]: 2025-09-13 00:12:04.575 [INFO][5143] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Sep 13 00:12:04.579862 containerd[1465]: time="2025-09-13T00:12:04.579316252Z" level=info msg="TearDown network for sandbox \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\" successfully" Sep 13 00:12:04.579862 containerd[1465]: time="2025-09-13T00:12:04.579353862Z" level=info msg="StopPodSandbox for \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\" returns successfully" Sep 13 00:12:04.580450 containerd[1465]: time="2025-09-13T00:12:04.580401582Z" level=info msg="RemovePodSandbox for \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\"" Sep 13 00:12:04.583575 containerd[1465]: time="2025-09-13T00:12:04.583539582Z" level=info msg="Forcibly stopping sandbox \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\"" Sep 13 00:12:04.659711 containerd[1465]: 2025-09-13 00:12:04.621 [WARNING][5171] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0", GenerateName:"calico-kube-controllers-6bb584c8c5-", Namespace:"calico-system", SelfLink:"", UID:"7a177557-980a-4069-9ba1-1de68d33d2df", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bb584c8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98", Pod:"calico-kube-controllers-6bb584c8c5-4nwz8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali856b6608dc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:04.659711 containerd[1465]: 2025-09-13 00:12:04.621 [INFO][5171] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Sep 13 00:12:04.659711 containerd[1465]: 2025-09-13 00:12:04.621 [INFO][5171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" iface="eth0" netns="" Sep 13 00:12:04.659711 containerd[1465]: 2025-09-13 00:12:04.621 [INFO][5171] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Sep 13 00:12:04.659711 containerd[1465]: 2025-09-13 00:12:04.621 [INFO][5171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Sep 13 00:12:04.659711 containerd[1465]: 2025-09-13 00:12:04.644 [INFO][5179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" HandleID="k8s-pod-network.870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Workload="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:12:04.659711 containerd[1465]: 2025-09-13 00:12:04.644 [INFO][5179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:04.659711 containerd[1465]: 2025-09-13 00:12:04.644 [INFO][5179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:04.659711 containerd[1465]: 2025-09-13 00:12:04.650 [WARNING][5179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" HandleID="k8s-pod-network.870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Workload="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:12:04.659711 containerd[1465]: 2025-09-13 00:12:04.650 [INFO][5179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" HandleID="k8s-pod-network.870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Workload="localhost-k8s-calico--kube--controllers--6bb584c8c5--4nwz8-eth0" Sep 13 00:12:04.659711 containerd[1465]: 2025-09-13 00:12:04.652 [INFO][5179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:04.659711 containerd[1465]: 2025-09-13 00:12:04.654 [INFO][5171] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473" Sep 13 00:12:04.660194 containerd[1465]: time="2025-09-13T00:12:04.659755595Z" level=info msg="TearDown network for sandbox \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\" successfully" Sep 13 00:12:05.085859 containerd[1465]: time="2025-09-13T00:12:05.085772055Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:05.085999 containerd[1465]: time="2025-09-13T00:12:05.085898149Z" level=info msg="RemovePodSandbox \"870a82591c6c868dc9d7c44bc1ea7ad95c73fdd95db3b99226d53db3bc44c473\" returns successfully" Sep 13 00:12:05.088581 containerd[1465]: time="2025-09-13T00:12:05.088172868Z" level=info msg="StopPodSandbox for \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\"" Sep 13 00:12:05.209989 containerd[1465]: 2025-09-13 00:12:05.154 [WARNING][5221] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6249r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"093a73a0-183b-402a-9ff9-9d907062e092", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9", Pod:"csi-node-driver-6249r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1c93443476f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:05.209989 containerd[1465]: 2025-09-13 00:12:05.154 [INFO][5221] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Sep 13 00:12:05.209989 containerd[1465]: 2025-09-13 00:12:05.154 [INFO][5221] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" iface="eth0" netns="" Sep 13 00:12:05.209989 containerd[1465]: 2025-09-13 00:12:05.154 [INFO][5221] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Sep 13 00:12:05.209989 containerd[1465]: 2025-09-13 00:12:05.154 [INFO][5221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Sep 13 00:12:05.209989 containerd[1465]: 2025-09-13 00:12:05.193 [INFO][5232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" HandleID="k8s-pod-network.b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Workload="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:12:05.209989 containerd[1465]: 2025-09-13 00:12:05.194 [INFO][5232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:05.209989 containerd[1465]: 2025-09-13 00:12:05.194 [INFO][5232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:05.209989 containerd[1465]: 2025-09-13 00:12:05.201 [WARNING][5232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" HandleID="k8s-pod-network.b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Workload="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:12:05.209989 containerd[1465]: 2025-09-13 00:12:05.201 [INFO][5232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" HandleID="k8s-pod-network.b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Workload="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:12:05.209989 containerd[1465]: 2025-09-13 00:12:05.203 [INFO][5232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:05.209989 containerd[1465]: 2025-09-13 00:12:05.206 [INFO][5221] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Sep 13 00:12:05.210592 containerd[1465]: time="2025-09-13T00:12:05.210046484Z" level=info msg="TearDown network for sandbox \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\" successfully" Sep 13 00:12:05.210592 containerd[1465]: time="2025-09-13T00:12:05.210079104Z" level=info msg="StopPodSandbox for \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\" returns successfully" Sep 13 00:12:05.210661 containerd[1465]: time="2025-09-13T00:12:05.210611051Z" level=info msg="RemovePodSandbox for \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\"" Sep 13 00:12:05.210661 containerd[1465]: time="2025-09-13T00:12:05.210646046Z" level=info msg="Forcibly stopping sandbox \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\"" Sep 13 00:12:05.398356 containerd[1465]: 2025-09-13 00:12:05.257 [WARNING][5250] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6249r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"093a73a0-183b-402a-9ff9-9d907062e092", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9", Pod:"csi-node-driver-6249r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1c93443476f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:05.398356 containerd[1465]: 2025-09-13 00:12:05.257 [INFO][5250] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Sep 13 00:12:05.398356 containerd[1465]: 2025-09-13 00:12:05.257 [INFO][5250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" iface="eth0" netns="" Sep 13 00:12:05.398356 containerd[1465]: 2025-09-13 00:12:05.257 [INFO][5250] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Sep 13 00:12:05.398356 containerd[1465]: 2025-09-13 00:12:05.258 [INFO][5250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Sep 13 00:12:05.398356 containerd[1465]: 2025-09-13 00:12:05.313 [INFO][5259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" HandleID="k8s-pod-network.b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Workload="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:12:05.398356 containerd[1465]: 2025-09-13 00:12:05.313 [INFO][5259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:05.398356 containerd[1465]: 2025-09-13 00:12:05.313 [INFO][5259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:05.398356 containerd[1465]: 2025-09-13 00:12:05.387 [WARNING][5259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" HandleID="k8s-pod-network.b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Workload="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:12:05.398356 containerd[1465]: 2025-09-13 00:12:05.387 [INFO][5259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" HandleID="k8s-pod-network.b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Workload="localhost-k8s-csi--node--driver--6249r-eth0" Sep 13 00:12:05.398356 containerd[1465]: 2025-09-13 00:12:05.391 [INFO][5259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:05.398356 containerd[1465]: 2025-09-13 00:12:05.394 [INFO][5250] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735" Sep 13 00:12:05.398356 containerd[1465]: time="2025-09-13T00:12:05.398208077Z" level=info msg="TearDown network for sandbox \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\" successfully" Sep 13 00:12:06.253807 containerd[1465]: time="2025-09-13T00:12:06.253726092Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:06.254279 containerd[1465]: time="2025-09-13T00:12:06.253836116Z" level=info msg="RemovePodSandbox \"b0f4313b25cb0ac7e3c9b42ac86d8af0666de371bc4b835752d9ca695c6b2735\" returns successfully" Sep 13 00:12:06.254332 containerd[1465]: time="2025-09-13T00:12:06.254285903Z" level=info msg="StopPodSandbox for \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\"" Sep 13 00:12:06.366280 containerd[1465]: 2025-09-13 00:12:06.330 [WARNING][5282] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0", GenerateName:"calico-apiserver-58f9bc44bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"18e54af7-8bec-40b9-9191-5e12c28dbbdd", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f9bc44bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623", Pod:"calico-apiserver-58f9bc44bc-gqfmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali331ed5d21b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:06.366280 containerd[1465]: 2025-09-13 00:12:06.330 [INFO][5282] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Sep 13 00:12:06.366280 containerd[1465]: 2025-09-13 00:12:06.330 [INFO][5282] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" iface="eth0" netns="" Sep 13 00:12:06.366280 containerd[1465]: 2025-09-13 00:12:06.330 [INFO][5282] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Sep 13 00:12:06.366280 containerd[1465]: 2025-09-13 00:12:06.330 [INFO][5282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Sep 13 00:12:06.366280 containerd[1465]: 2025-09-13 00:12:06.349 [INFO][5291] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" HandleID="k8s-pod-network.0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:12:06.366280 containerd[1465]: 2025-09-13 00:12:06.349 [INFO][5291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:06.366280 containerd[1465]: 2025-09-13 00:12:06.349 [INFO][5291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:06.366280 containerd[1465]: 2025-09-13 00:12:06.355 [WARNING][5291] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" HandleID="k8s-pod-network.0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:12:06.366280 containerd[1465]: 2025-09-13 00:12:06.355 [INFO][5291] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" HandleID="k8s-pod-network.0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:12:06.366280 containerd[1465]: 2025-09-13 00:12:06.356 [INFO][5291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:06.366280 containerd[1465]: 2025-09-13 00:12:06.359 [INFO][5282] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Sep 13 00:12:06.366280 containerd[1465]: time="2025-09-13T00:12:06.364031300Z" level=info msg="TearDown network for sandbox \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\" successfully" Sep 13 00:12:06.366280 containerd[1465]: time="2025-09-13T00:12:06.364084669Z" level=info msg="StopPodSandbox for \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\" returns successfully" Sep 13 00:12:06.366280 containerd[1465]: time="2025-09-13T00:12:06.365745466Z" level=info msg="RemovePodSandbox for \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\"" Sep 13 00:12:06.366280 containerd[1465]: time="2025-09-13T00:12:06.365830904Z" level=info msg="Forcibly stopping sandbox \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\"" Sep 13 00:12:06.452924 containerd[1465]: 2025-09-13 00:12:06.422 [WARNING][5309] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0", GenerateName:"calico-apiserver-58f9bc44bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"18e54af7-8bec-40b9-9191-5e12c28dbbdd", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f9bc44bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623", Pod:"calico-apiserver-58f9bc44bc-gqfmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali331ed5d21b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:06.452924 containerd[1465]: 2025-09-13 00:12:06.422 [INFO][5309] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Sep 13 00:12:06.452924 containerd[1465]: 2025-09-13 00:12:06.422 [INFO][5309] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" iface="eth0" netns="" Sep 13 00:12:06.452924 containerd[1465]: 2025-09-13 00:12:06.422 [INFO][5309] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Sep 13 00:12:06.452924 containerd[1465]: 2025-09-13 00:12:06.422 [INFO][5309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Sep 13 00:12:06.452924 containerd[1465]: 2025-09-13 00:12:06.441 [INFO][5318] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" HandleID="k8s-pod-network.0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:12:06.452924 containerd[1465]: 2025-09-13 00:12:06.441 [INFO][5318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:06.452924 containerd[1465]: 2025-09-13 00:12:06.441 [INFO][5318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:06.452924 containerd[1465]: 2025-09-13 00:12:06.446 [WARNING][5318] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" HandleID="k8s-pod-network.0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:12:06.452924 containerd[1465]: 2025-09-13 00:12:06.446 [INFO][5318] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" HandleID="k8s-pod-network.0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--gqfmq-eth0" Sep 13 00:12:06.452924 containerd[1465]: 2025-09-13 00:12:06.447 [INFO][5318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:06.452924 containerd[1465]: 2025-09-13 00:12:06.450 [INFO][5309] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314" Sep 13 00:12:06.453410 containerd[1465]: time="2025-09-13T00:12:06.452973284Z" level=info msg="TearDown network for sandbox \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\" successfully" Sep 13 00:12:06.515848 containerd[1465]: time="2025-09-13T00:12:06.515688805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:06.625638 containerd[1465]: time="2025-09-13T00:12:06.625539880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:12:06.787777 containerd[1465]: time="2025-09-13T00:12:06.787622696Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:06.822042 containerd[1465]: time="2025-09-13T00:12:06.821946422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:12:06.822225 containerd[1465]: time="2025-09-13T00:12:06.822148629Z" level=info msg="RemovePodSandbox \"0e8217f4503b0bc19cf282d60a41a1eae1f3bd9eff46704a7009405c99cab314\" returns successfully" Sep 13 00:12:06.822897 containerd[1465]: time="2025-09-13T00:12:06.822871050Z" level=info msg="StopPodSandbox for \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\"" Sep 13 00:12:06.856124 containerd[1465]: time="2025-09-13T00:12:06.855538050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:06.856767 containerd[1465]: time="2025-09-13T00:12:06.856733260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.698119882s" Sep 13 00:12:06.856879 containerd[1465]: time="2025-09-13T00:12:06.856776390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:12:06.858950 containerd[1465]: time="2025-09-13T00:12:06.858921035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:12:06.871360 containerd[1465]: time="2025-09-13T00:12:06.871307511Z" level=info msg="CreateContainer within sandbox \"346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:12:06.927069 containerd[1465]: 2025-09-13 00:12:06.868 [WARNING][5337] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0", GenerateName:"calico-apiserver-58f9bc44bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"093eb2fa-5825-44f6-93c6-61d3114099e0", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f9bc44bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e", Pod:"calico-apiserver-58f9bc44bc-q854x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5917225f54c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:06.927069 containerd[1465]: 2025-09-13 00:12:06.869 [INFO][5337] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Sep 13 00:12:06.927069 containerd[1465]: 2025-09-13 00:12:06.869 [INFO][5337] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" iface="eth0" netns="" Sep 13 00:12:06.927069 containerd[1465]: 2025-09-13 00:12:06.869 [INFO][5337] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Sep 13 00:12:06.927069 containerd[1465]: 2025-09-13 00:12:06.869 [INFO][5337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Sep 13 00:12:06.927069 containerd[1465]: 2025-09-13 00:12:06.912 [INFO][5348] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" HandleID="k8s-pod-network.63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:12:06.927069 containerd[1465]: 2025-09-13 00:12:06.912 [INFO][5348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:06.927069 containerd[1465]: 2025-09-13 00:12:06.912 [INFO][5348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:06.927069 containerd[1465]: 2025-09-13 00:12:06.919 [WARNING][5348] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" HandleID="k8s-pod-network.63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:12:06.927069 containerd[1465]: 2025-09-13 00:12:06.919 [INFO][5348] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" HandleID="k8s-pod-network.63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:12:06.927069 containerd[1465]: 2025-09-13 00:12:06.921 [INFO][5348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:06.927069 containerd[1465]: 2025-09-13 00:12:06.924 [INFO][5337] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Sep 13 00:12:06.927657 containerd[1465]: time="2025-09-13T00:12:06.927131878Z" level=info msg="TearDown network for sandbox \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\" successfully" Sep 13 00:12:06.927657 containerd[1465]: time="2025-09-13T00:12:06.927165670Z" level=info msg="StopPodSandbox for \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\" returns successfully" Sep 13 00:12:06.927842 containerd[1465]: time="2025-09-13T00:12:06.927815819Z" level=info msg="RemovePodSandbox for \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\"" Sep 13 00:12:06.927895 containerd[1465]: time="2025-09-13T00:12:06.927854501Z" level=info msg="Forcibly stopping sandbox \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\"" Sep 13 00:12:06.965954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount261092706.mount: Deactivated successfully. Sep 13 00:12:07.003256 containerd[1465]: 2025-09-13 00:12:06.968 [WARNING][5365] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0", GenerateName:"calico-apiserver-58f9bc44bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"093eb2fa-5825-44f6-93c6-61d3114099e0", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f9bc44bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c60aa7b749f46a0213cfd5fc86946caf5294b9cb6a8be0f4213c6559e4dc151e", Pod:"calico-apiserver-58f9bc44bc-q854x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5917225f54c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:07.003256 containerd[1465]: 2025-09-13 00:12:06.968 [INFO][5365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Sep 13 00:12:07.003256 containerd[1465]: 2025-09-13 00:12:06.968 [INFO][5365] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" iface="eth0" netns="" Sep 13 00:12:07.003256 containerd[1465]: 2025-09-13 00:12:06.968 [INFO][5365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Sep 13 00:12:07.003256 containerd[1465]: 2025-09-13 00:12:06.968 [INFO][5365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Sep 13 00:12:07.003256 containerd[1465]: 2025-09-13 00:12:06.990 [INFO][5374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" HandleID="k8s-pod-network.63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:12:07.003256 containerd[1465]: 2025-09-13 00:12:06.990 [INFO][5374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:07.003256 containerd[1465]: 2025-09-13 00:12:06.990 [INFO][5374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:07.003256 containerd[1465]: 2025-09-13 00:12:06.996 [WARNING][5374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" HandleID="k8s-pod-network.63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:12:07.003256 containerd[1465]: 2025-09-13 00:12:06.996 [INFO][5374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" HandleID="k8s-pod-network.63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Workload="localhost-k8s-calico--apiserver--58f9bc44bc--q854x-eth0" Sep 13 00:12:07.003256 containerd[1465]: 2025-09-13 00:12:06.997 [INFO][5374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:07.003256 containerd[1465]: 2025-09-13 00:12:07.000 [INFO][5365] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c" Sep 13 00:12:07.003704 containerd[1465]: time="2025-09-13T00:12:07.003301454Z" level=info msg="TearDown network for sandbox \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\" successfully" Sep 13 00:12:07.050674 containerd[1465]: time="2025-09-13T00:12:07.050526908Z" level=info msg="CreateContainer within sandbox \"346a26d8e21ef37ccd6a7798620d35267200b16fc7168f26dcb26501fa1c7a98\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8b3c9ca4ce7c2d09cbbaecf406fdcb6af5a60c3c09d5cb0e6ea1c1897696aaff\"" Sep 13 00:12:07.051219 containerd[1465]: time="2025-09-13T00:12:07.051188550Z" level=info msg="StartContainer for \"8b3c9ca4ce7c2d09cbbaecf406fdcb6af5a60c3c09d5cb0e6ea1c1897696aaff\"" Sep 13 00:12:07.081991 systemd[1]: Started cri-containerd-8b3c9ca4ce7c2d09cbbaecf406fdcb6af5a60c3c09d5cb0e6ea1c1897696aaff.scope - libcontainer container 8b3c9ca4ce7c2d09cbbaecf406fdcb6af5a60c3c09d5cb0e6ea1c1897696aaff. Sep 13 00:12:07.085955 containerd[1465]: time="2025-09-13T00:12:07.085904202Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:07.086091 containerd[1465]: time="2025-09-13T00:12:07.085997256Z" level=info msg="RemovePodSandbox \"63abb1c4aa6582629c9db2f4802934e99e11b1aebdb006b5a69f4875fef27a6c\" returns successfully" Sep 13 00:12:07.086613 containerd[1465]: time="2025-09-13T00:12:07.086566065Z" level=info msg="StopPodSandbox for \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\"" Sep 13 00:12:07.165543 containerd[1465]: time="2025-09-13T00:12:07.165484326Z" level=info msg="StartContainer for \"8b3c9ca4ce7c2d09cbbaecf406fdcb6af5a60c3c09d5cb0e6ea1c1897696aaff\" returns successfully" Sep 13 00:12:07.169175 containerd[1465]: 2025-09-13 00:12:07.126 [WARNING][5417] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"572e4883-43be-47f8-9d71-340af499cdf4", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa", Pod:"coredns-7c65d6cfc9-8ddht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali397b6c22b0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:07.169175 containerd[1465]: 2025-09-13 00:12:07.126 [INFO][5417] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Sep 13 00:12:07.169175 containerd[1465]: 2025-09-13 00:12:07.126 [INFO][5417] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" iface="eth0" netns="" Sep 13 00:12:07.169175 containerd[1465]: 2025-09-13 00:12:07.126 [INFO][5417] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Sep 13 00:12:07.169175 containerd[1465]: 2025-09-13 00:12:07.126 [INFO][5417] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Sep 13 00:12:07.169175 containerd[1465]: 2025-09-13 00:12:07.152 [INFO][5434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" HandleID="k8s-pod-network.0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Workload="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:12:07.169175 containerd[1465]: 2025-09-13 00:12:07.153 [INFO][5434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:07.169175 containerd[1465]: 2025-09-13 00:12:07.153 [INFO][5434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:12:07.169175 containerd[1465]: 2025-09-13 00:12:07.161 [WARNING][5434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" HandleID="k8s-pod-network.0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Workload="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:12:07.169175 containerd[1465]: 2025-09-13 00:12:07.161 [INFO][5434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" HandleID="k8s-pod-network.0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Workload="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:12:07.169175 containerd[1465]: 2025-09-13 00:12:07.162 [INFO][5434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:07.169175 containerd[1465]: 2025-09-13 00:12:07.166 [INFO][5417] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Sep 13 00:12:07.169652 containerd[1465]: time="2025-09-13T00:12:07.169232292Z" level=info msg="TearDown network for sandbox \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\" successfully" Sep 13 00:12:07.169652 containerd[1465]: time="2025-09-13T00:12:07.169264572Z" level=info msg="StopPodSandbox for \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\" returns successfully" Sep 13 00:12:07.169752 containerd[1465]: time="2025-09-13T00:12:07.169726411Z" level=info msg="RemovePodSandbox for \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\"" Sep 13 00:12:07.169797 containerd[1465]: time="2025-09-13T00:12:07.169765073Z" level=info msg="Forcibly stopping sandbox \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\"" Sep 13 00:12:07.249988 containerd[1465]: 2025-09-13 00:12:07.215 [WARNING][5467] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"572e4883-43be-47f8-9d71-340af499cdf4", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f58a671e89caf1c95c4a545367dbbc8194c11f18bf699dfa3cf00c503b8e68fa", Pod:"coredns-7c65d6cfc9-8ddht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali397b6c22b0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:07.249988 containerd[1465]: 2025-09-13 00:12:07.215 [INFO][5467] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Sep 13 00:12:07.249988 containerd[1465]: 2025-09-13 00:12:07.215 [INFO][5467] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" iface="eth0" netns="" Sep 13 00:12:07.249988 containerd[1465]: 2025-09-13 00:12:07.215 [INFO][5467] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Sep 13 00:12:07.249988 containerd[1465]: 2025-09-13 00:12:07.215 [INFO][5467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Sep 13 00:12:07.249988 containerd[1465]: 2025-09-13 00:12:07.236 [INFO][5476] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" HandleID="k8s-pod-network.0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Workload="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:12:07.249988 containerd[1465]: 2025-09-13 00:12:07.236 [INFO][5476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:07.249988 containerd[1465]: 2025-09-13 00:12:07.236 [INFO][5476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:12:07.249988 containerd[1465]: 2025-09-13 00:12:07.242 [WARNING][5476] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" HandleID="k8s-pod-network.0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Workload="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:12:07.249988 containerd[1465]: 2025-09-13 00:12:07.242 [INFO][5476] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" HandleID="k8s-pod-network.0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Workload="localhost-k8s-coredns--7c65d6cfc9--8ddht-eth0" Sep 13 00:12:07.249988 containerd[1465]: 2025-09-13 00:12:07.244 [INFO][5476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:07.249988 containerd[1465]: 2025-09-13 00:12:07.247 [INFO][5467] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce" Sep 13 00:12:07.250470 containerd[1465]: time="2025-09-13T00:12:07.250051141Z" level=info msg="TearDown network for sandbox \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\" successfully" Sep 13 00:12:07.269918 containerd[1465]: time="2025-09-13T00:12:07.269249262Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:07.270500 containerd[1465]: time="2025-09-13T00:12:07.269942622Z" level=info msg="RemovePodSandbox \"0cb5a3dc7490c9cd44af33e1e3fd379de80aa82413fd7316075d8d1a5eb26fce\" returns successfully" Sep 13 00:12:07.271697 containerd[1465]: time="2025-09-13T00:12:07.271631445Z" level=info msg="StopPodSandbox for \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\"" Sep 13 00:12:07.305006 containerd[1465]: time="2025-09-13T00:12:07.304832188Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:07.310699 containerd[1465]: time="2025-09-13T00:12:07.310079023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:12:07.312498 containerd[1465]: time="2025-09-13T00:12:07.312450007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 453.492643ms" Sep 13 00:12:07.312593 containerd[1465]: time="2025-09-13T00:12:07.312574127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:12:07.314682 containerd[1465]: time="2025-09-13T00:12:07.314648909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:12:07.316017 containerd[1465]: time="2025-09-13T00:12:07.315971401Z" level=info msg="CreateContainer within sandbox \"53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:12:07.337189 containerd[1465]: time="2025-09-13T00:12:07.337130932Z" level=info msg="CreateContainer within sandbox \"53a7670203e647a7c9c52cf04754272058b6ec928e306a911a0c078e04808623\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5ea5b0b528f6493657b7f0be4c5a458867907b82d17dcc8538ad4160ac8dc99f\"" Sep 13 00:12:07.339287 containerd[1465]: time="2025-09-13T00:12:07.338272377Z" level=info msg="StartContainer for \"5ea5b0b528f6493657b7f0be4c5a458867907b82d17dcc8538ad4160ac8dc99f\"" Sep 13 00:12:07.353178 containerd[1465]: 2025-09-13 00:12:07.312 [WARNING][5495] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"efb62d5d-3a33-4337-b3ca-e67aed5932c5", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0", Pod:"coredns-7c65d6cfc9-t2khr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali198bd32a56b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:07.353178 containerd[1465]: 2025-09-13 00:12:07.313 [INFO][5495] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Sep 13 00:12:07.353178 containerd[1465]: 2025-09-13 00:12:07.313 [INFO][5495] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" iface="eth0" netns="" Sep 13 00:12:07.353178 containerd[1465]: 2025-09-13 00:12:07.313 [INFO][5495] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Sep 13 00:12:07.353178 containerd[1465]: 2025-09-13 00:12:07.313 [INFO][5495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Sep 13 00:12:07.353178 containerd[1465]: 2025-09-13 00:12:07.338 [INFO][5504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" HandleID="k8s-pod-network.bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Workload="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:12:07.353178 containerd[1465]: 2025-09-13 00:12:07.338 [INFO][5504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:07.353178 containerd[1465]: 2025-09-13 00:12:07.339 [INFO][5504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:07.353178 containerd[1465]: 2025-09-13 00:12:07.345 [WARNING][5504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" HandleID="k8s-pod-network.bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Workload="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:12:07.353178 containerd[1465]: 2025-09-13 00:12:07.345 [INFO][5504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" HandleID="k8s-pod-network.bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Workload="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:12:07.353178 containerd[1465]: 2025-09-13 00:12:07.347 [INFO][5504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:07.353178 containerd[1465]: 2025-09-13 00:12:07.350 [INFO][5495] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Sep 13 00:12:07.353739 containerd[1465]: time="2025-09-13T00:12:07.353223754Z" level=info msg="TearDown network for sandbox \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\" successfully" Sep 13 00:12:07.353739 containerd[1465]: time="2025-09-13T00:12:07.353256635Z" level=info msg="StopPodSandbox for \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\" returns successfully" Sep 13 00:12:07.354010 containerd[1465]: time="2025-09-13T00:12:07.353977457Z" level=info msg="RemovePodSandbox for \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\"" Sep 13 00:12:07.354078 containerd[1465]: time="2025-09-13T00:12:07.354012132Z" level=info msg="Forcibly stopping sandbox \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\"" Sep 13 00:12:07.370005 systemd[1]: Started cri-containerd-5ea5b0b528f6493657b7f0be4c5a458867907b82d17dcc8538ad4160ac8dc99f.scope - libcontainer container 5ea5b0b528f6493657b7f0be4c5a458867907b82d17dcc8538ad4160ac8dc99f. 
Sep 13 00:12:07.391962 kubelet[2556]: I0913 00:12:07.391867 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bb584c8c5-4nwz8" podStartSLOduration=27.404161746 podStartE2EDuration="45.391844835s" podCreationTimestamp="2025-09-13 00:11:22 +0000 UTC" firstStartedPulling="2025-09-13 00:11:48.870436457 +0000 UTC m=+44.460256979" lastFinishedPulling="2025-09-13 00:12:06.858119546 +0000 UTC m=+62.447940068" observedRunningTime="2025-09-13 00:12:07.39094481 +0000 UTC m=+62.980765352" watchObservedRunningTime="2025-09-13 00:12:07.391844835 +0000 UTC m=+62.981665358" Sep 13 00:12:07.425327 containerd[1465]: time="2025-09-13T00:12:07.425288029Z" level=info msg="StartContainer for \"5ea5b0b528f6493657b7f0be4c5a458867907b82d17dcc8538ad4160ac8dc99f\" returns successfully" Sep 13 00:12:07.466683 containerd[1465]: 2025-09-13 00:12:07.411 [WARNING][5536] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"efb62d5d-3a33-4337-b3ca-e67aed5932c5", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fafccda0260f1bf93d258b6469ab472b4a9f0cf2a690f825178db20145494aa0", Pod:"coredns-7c65d6cfc9-t2khr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali198bd32a56b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:07.466683 containerd[1465]: 2025-09-13 00:12:07.411 [INFO][5536] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Sep 13 00:12:07.466683 containerd[1465]: 2025-09-13 00:12:07.411 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" iface="eth0" netns="" Sep 13 00:12:07.466683 containerd[1465]: 2025-09-13 00:12:07.411 [INFO][5536] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Sep 13 00:12:07.466683 containerd[1465]: 2025-09-13 00:12:07.411 [INFO][5536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Sep 13 00:12:07.466683 containerd[1465]: 2025-09-13 00:12:07.443 [INFO][5570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" HandleID="k8s-pod-network.bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Workload="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:12:07.466683 containerd[1465]: 2025-09-13 00:12:07.444 [INFO][5570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:07.466683 containerd[1465]: 2025-09-13 00:12:07.444 [INFO][5570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:07.466683 containerd[1465]: 2025-09-13 00:12:07.454 [WARNING][5570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" HandleID="k8s-pod-network.bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Workload="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:12:07.466683 containerd[1465]: 2025-09-13 00:12:07.454 [INFO][5570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" HandleID="k8s-pod-network.bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Workload="localhost-k8s-coredns--7c65d6cfc9--t2khr-eth0" Sep 13 00:12:07.466683 containerd[1465]: 2025-09-13 00:12:07.460 [INFO][5570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:07.466683 containerd[1465]: 2025-09-13 00:12:07.463 [INFO][5536] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182" Sep 13 00:12:07.468740 containerd[1465]: time="2025-09-13T00:12:07.466709833Z" level=info msg="TearDown network for sandbox \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\" successfully" Sep 13 00:12:07.474410 containerd[1465]: time="2025-09-13T00:12:07.474353108Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:07.474569 containerd[1465]: time="2025-09-13T00:12:07.474454097Z" level=info msg="RemovePodSandbox \"bc531bffce04af8bca4ba9020291a0798a3fd1cb5ab14e877a18594345481182\" returns successfully" Sep 13 00:12:07.475397 containerd[1465]: time="2025-09-13T00:12:07.475370582Z" level=info msg="StopPodSandbox for \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\"" Sep 13 00:12:07.558020 containerd[1465]: 2025-09-13 00:12:07.519 [WARNING][5602] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--6x2wp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c", Pod:"goldmane-7988f88666-6x2wp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid865690000d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:07.558020 containerd[1465]: 2025-09-13 00:12:07.520 [INFO][5602] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Sep 13 00:12:07.558020 containerd[1465]: 2025-09-13 00:12:07.520 [INFO][5602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" iface="eth0" netns="" Sep 13 00:12:07.558020 containerd[1465]: 2025-09-13 00:12:07.520 [INFO][5602] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Sep 13 00:12:07.558020 containerd[1465]: 2025-09-13 00:12:07.520 [INFO][5602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Sep 13 00:12:07.558020 containerd[1465]: 2025-09-13 00:12:07.544 [INFO][5616] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" HandleID="k8s-pod-network.4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Workload="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:12:07.558020 containerd[1465]: 2025-09-13 00:12:07.544 [INFO][5616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:07.558020 containerd[1465]: 2025-09-13 00:12:07.544 [INFO][5616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:07.558020 containerd[1465]: 2025-09-13 00:12:07.551 [WARNING][5616] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" HandleID="k8s-pod-network.4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Workload="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:12:07.558020 containerd[1465]: 2025-09-13 00:12:07.551 [INFO][5616] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" HandleID="k8s-pod-network.4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Workload="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:12:07.558020 containerd[1465]: 2025-09-13 00:12:07.552 [INFO][5616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:07.558020 containerd[1465]: 2025-09-13 00:12:07.554 [INFO][5602] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Sep 13 00:12:07.558020 containerd[1465]: time="2025-09-13T00:12:07.557975185Z" level=info msg="TearDown network for sandbox \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\" successfully" Sep 13 00:12:07.558020 containerd[1465]: time="2025-09-13T00:12:07.558013185Z" level=info msg="StopPodSandbox for \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\" returns successfully" Sep 13 00:12:07.558659 containerd[1465]: time="2025-09-13T00:12:07.558569511Z" level=info msg="RemovePodSandbox for \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\"" Sep 13 00:12:07.558659 containerd[1465]: time="2025-09-13T00:12:07.558597342Z" level=info msg="Forcibly stopping sandbox \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\"" Sep 13 00:12:07.636525 containerd[1465]: 2025-09-13 00:12:07.597 [WARNING][5633] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--6x2wp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7f30d77d-673c-4aff-b8fd-abd4bc5cd3dc", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c", Pod:"goldmane-7988f88666-6x2wp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid865690000d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:07.636525 containerd[1465]: 2025-09-13 00:12:07.598 [INFO][5633] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Sep 13 00:12:07.636525 containerd[1465]: 2025-09-13 00:12:07.598 [INFO][5633] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" iface="eth0" netns="" Sep 13 00:12:07.636525 containerd[1465]: 2025-09-13 00:12:07.598 [INFO][5633] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Sep 13 00:12:07.636525 containerd[1465]: 2025-09-13 00:12:07.598 [INFO][5633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Sep 13 00:12:07.636525 containerd[1465]: 2025-09-13 00:12:07.622 [INFO][5642] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" HandleID="k8s-pod-network.4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Workload="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:12:07.636525 containerd[1465]: 2025-09-13 00:12:07.622 [INFO][5642] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:07.636525 containerd[1465]: 2025-09-13 00:12:07.622 [INFO][5642] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:07.636525 containerd[1465]: 2025-09-13 00:12:07.628 [WARNING][5642] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" HandleID="k8s-pod-network.4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Workload="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:12:07.636525 containerd[1465]: 2025-09-13 00:12:07.628 [INFO][5642] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" HandleID="k8s-pod-network.4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Workload="localhost-k8s-goldmane--7988f88666--6x2wp-eth0" Sep 13 00:12:07.636525 containerd[1465]: 2025-09-13 00:12:07.630 [INFO][5642] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:07.636525 containerd[1465]: 2025-09-13 00:12:07.633 [INFO][5633] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a" Sep 13 00:12:07.637144 containerd[1465]: time="2025-09-13T00:12:07.636570134Z" level=info msg="TearDown network for sandbox \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\" successfully" Sep 13 00:12:07.641320 containerd[1465]: time="2025-09-13T00:12:07.641262889Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:07.641505 containerd[1465]: time="2025-09-13T00:12:07.641343939Z" level=info msg="RemovePodSandbox \"4f8f894acf978c180a92f888c09323f5e076b36dfb3b4ac1df258a3c6c06ae3a\" returns successfully" Sep 13 00:12:07.642021 containerd[1465]: time="2025-09-13T00:12:07.641977198Z" level=info msg="StopPodSandbox for \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\"" Sep 13 00:12:07.725246 containerd[1465]: 2025-09-13 00:12:07.681 [WARNING][5659] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" WorkloadEndpoint="localhost-k8s-whisker--8bcf56c4d--dgrfq-eth0" Sep 13 00:12:07.725246 containerd[1465]: 2025-09-13 00:12:07.681 [INFO][5659] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Sep 13 00:12:07.725246 containerd[1465]: 2025-09-13 00:12:07.681 [INFO][5659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" iface="eth0" netns="" Sep 13 00:12:07.725246 containerd[1465]: 2025-09-13 00:12:07.681 [INFO][5659] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Sep 13 00:12:07.725246 containerd[1465]: 2025-09-13 00:12:07.681 [INFO][5659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Sep 13 00:12:07.725246 containerd[1465]: 2025-09-13 00:12:07.707 [INFO][5668] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" HandleID="k8s-pod-network.dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Workload="localhost-k8s-whisker--8bcf56c4d--dgrfq-eth0" Sep 13 00:12:07.725246 containerd[1465]: 2025-09-13 00:12:07.707 [INFO][5668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:07.725246 containerd[1465]: 2025-09-13 00:12:07.708 [INFO][5668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:07.725246 containerd[1465]: 2025-09-13 00:12:07.716 [WARNING][5668] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" HandleID="k8s-pod-network.dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Workload="localhost-k8s-whisker--8bcf56c4d--dgrfq-eth0" Sep 13 00:12:07.725246 containerd[1465]: 2025-09-13 00:12:07.716 [INFO][5668] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" HandleID="k8s-pod-network.dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Workload="localhost-k8s-whisker--8bcf56c4d--dgrfq-eth0" Sep 13 00:12:07.725246 containerd[1465]: 2025-09-13 00:12:07.718 [INFO][5668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:07.725246 containerd[1465]: 2025-09-13 00:12:07.721 [INFO][5659] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Sep 13 00:12:07.725661 containerd[1465]: time="2025-09-13T00:12:07.725307270Z" level=info msg="TearDown network for sandbox \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\" successfully" Sep 13 00:12:07.725661 containerd[1465]: time="2025-09-13T00:12:07.725343979Z" level=info msg="StopPodSandbox for \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\" returns successfully" Sep 13 00:12:07.726063 containerd[1465]: time="2025-09-13T00:12:07.726032050Z" level=info msg="RemovePodSandbox for \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\"" Sep 13 00:12:07.726127 containerd[1465]: time="2025-09-13T00:12:07.726069760Z" level=info msg="Forcibly stopping sandbox \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\"" Sep 13 00:12:07.812703 containerd[1465]: 2025-09-13 00:12:07.764 [WARNING][5686] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" WorkloadEndpoint="localhost-k8s-whisker--8bcf56c4d--dgrfq-eth0" Sep 13 00:12:07.812703 containerd[1465]: 2025-09-13 00:12:07.764 [INFO][5686] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Sep 13 00:12:07.812703 containerd[1465]: 2025-09-13 00:12:07.764 [INFO][5686] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" iface="eth0" netns="" Sep 13 00:12:07.812703 containerd[1465]: 2025-09-13 00:12:07.764 [INFO][5686] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Sep 13 00:12:07.812703 containerd[1465]: 2025-09-13 00:12:07.764 [INFO][5686] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Sep 13 00:12:07.812703 containerd[1465]: 2025-09-13 00:12:07.794 [INFO][5695] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" HandleID="k8s-pod-network.dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Workload="localhost-k8s-whisker--8bcf56c4d--dgrfq-eth0" Sep 13 00:12:07.812703 containerd[1465]: 2025-09-13 00:12:07.794 [INFO][5695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:07.812703 containerd[1465]: 2025-09-13 00:12:07.795 [INFO][5695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:07.812703 containerd[1465]: 2025-09-13 00:12:07.802 [WARNING][5695] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" HandleID="k8s-pod-network.dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Workload="localhost-k8s-whisker--8bcf56c4d--dgrfq-eth0" Sep 13 00:12:07.812703 containerd[1465]: 2025-09-13 00:12:07.802 [INFO][5695] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" HandleID="k8s-pod-network.dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Workload="localhost-k8s-whisker--8bcf56c4d--dgrfq-eth0" Sep 13 00:12:07.812703 containerd[1465]: 2025-09-13 00:12:07.805 [INFO][5695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:07.812703 containerd[1465]: 2025-09-13 00:12:07.808 [INFO][5686] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38" Sep 13 00:12:07.812703 containerd[1465]: time="2025-09-13T00:12:07.811898363Z" level=info msg="TearDown network for sandbox \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\" successfully" Sep 13 00:12:07.888996 containerd[1465]: time="2025-09-13T00:12:07.888921838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:07.889138 containerd[1465]: time="2025-09-13T00:12:07.889018748Z" level=info msg="RemovePodSandbox \"dbea6821af6cebb71e520e1521aaeaead1471846adb0d5149167891a085c1a38\" returns successfully" Sep 13 00:12:08.591255 kubelet[2556]: I0913 00:12:08.590610 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58f9bc44bc-gqfmq" podStartSLOduration=33.975745973 podStartE2EDuration="49.590590205s" podCreationTimestamp="2025-09-13 00:11:19 +0000 UTC" firstStartedPulling="2025-09-13 00:11:51.69897901 +0000 UTC m=+47.288799532" lastFinishedPulling="2025-09-13 00:12:07.313823242 +0000 UTC m=+62.903643764" observedRunningTime="2025-09-13 00:12:08.590279495 +0000 UTC m=+64.180100017" watchObservedRunningTime="2025-09-13 00:12:08.590590205 +0000 UTC m=+64.180410727" Sep 13 00:12:08.612188 systemd[1]: Started sshd@15-10.0.0.108:22-10.0.0.1:32878.service - OpenSSH per-connection server daemon (10.0.0.1:32878). Sep 13 00:12:08.699645 sshd[5705]: Accepted publickey for core from 10.0.0.1 port 32878 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:08.701625 sshd[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:08.705846 systemd-logind[1450]: New session 16 of user core. Sep 13 00:12:08.713963 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:12:09.034025 sshd[5705]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:09.038505 systemd[1]: sshd@15-10.0.0.108:22-10.0.0.1:32878.service: Deactivated successfully. Sep 13 00:12:09.040481 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:12:09.041411 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:12:09.042726 systemd-logind[1450]: Removed session 16. 
Sep 13 00:12:10.386148 kubelet[2556]: I0913 00:12:10.386091 2556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:12:12.070580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount844620325.mount: Deactivated successfully. Sep 13 00:12:14.045123 systemd[1]: Started sshd@16-10.0.0.108:22-10.0.0.1:51460.service - OpenSSH per-connection server daemon (10.0.0.1:51460). Sep 13 00:12:14.102948 sshd[5738]: Accepted publickey for core from 10.0.0.1 port 51460 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:14.105026 sshd[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:14.110310 systemd-logind[1450]: New session 17 of user core. Sep 13 00:12:14.116966 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:12:14.544226 sshd[5738]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:14.550064 systemd[1]: sshd@16-10.0.0.108:22-10.0.0.1:51460.service: Deactivated successfully. Sep 13 00:12:14.553169 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:12:14.554113 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:12:14.555578 systemd-logind[1450]: Removed session 17. Sep 13 00:12:15.740535 containerd[1465]: time="2025-09-13T00:12:15.740449488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:15.785088 containerd[1465]: time="2025-09-13T00:12:15.784975174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:12:15.815593 containerd[1465]: time="2025-09-13T00:12:15.815508728Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:15.877733 containerd[1465]: time="2025-09-13T00:12:15.877566772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:15.878420 containerd[1465]: time="2025-09-13T00:12:15.878361437Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 8.563449248s" Sep 13 00:12:15.878420 containerd[1465]: time="2025-09-13T00:12:15.878414357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:12:15.880107 containerd[1465]: time="2025-09-13T00:12:15.880053012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:12:15.881682 containerd[1465]: time="2025-09-13T00:12:15.881636663Z" level=info msg="CreateContainer within sandbox \"8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:12:15.919941 containerd[1465]: time="2025-09-13T00:12:15.919878109Z" level=info msg="CreateContainer within sandbox \"8c1ff6cdb641f2d9258d616f62783fa59b685715b130018d820c2216f6db8f7c\" for 
&ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"160dd4f7afc64627b5395dd2aacb49a8f616f4fc23fee96f5d27aa52db90a19e\"" Sep 13 00:12:15.936693 containerd[1465]: time="2025-09-13T00:12:15.936638603Z" level=info msg="StartContainer for \"160dd4f7afc64627b5395dd2aacb49a8f616f4fc23fee96f5d27aa52db90a19e\"" Sep 13 00:12:16.035257 systemd[1]: Started cri-containerd-160dd4f7afc64627b5395dd2aacb49a8f616f4fc23fee96f5d27aa52db90a19e.scope - libcontainer container 160dd4f7afc64627b5395dd2aacb49a8f616f4fc23fee96f5d27aa52db90a19e. Sep 13 00:12:16.357845 containerd[1465]: time="2025-09-13T00:12:16.357582376Z" level=info msg="StartContainer for \"160dd4f7afc64627b5395dd2aacb49a8f616f4fc23fee96f5d27aa52db90a19e\" returns successfully" Sep 13 00:12:16.500578 kubelet[2556]: E0913 00:12:16.500427 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:17.500773 kubelet[2556]: E0913 00:12:17.500699 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:18.688502 containerd[1465]: time="2025-09-13T00:12:18.688432893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:18.689303 containerd[1465]: time="2025-09-13T00:12:18.689264875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:12:18.690440 containerd[1465]: time="2025-09-13T00:12:18.690398207Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:18.693275 containerd[1465]: time="2025-09-13T00:12:18.693250566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:18.693898 containerd[1465]: time="2025-09-13T00:12:18.693871058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.813765036s" Sep 13 00:12:18.693960 containerd[1465]: time="2025-09-13T00:12:18.693901075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:12:18.694769 containerd[1465]: time="2025-09-13T00:12:18.694746222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:12:18.696772 containerd[1465]: time="2025-09-13T00:12:18.696747553Z" level=info msg="CreateContainer within sandbox \"f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:12:18.721324 containerd[1465]: time="2025-09-13T00:12:18.721276598Z" level=info msg="CreateContainer within sandbox \"f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"09dafc55de5c9894792b1579c4ed2f8ed86e2d2bda73532213ca802db47c9610\"" Sep 13 00:12:18.721869 containerd[1465]: time="2025-09-13T00:12:18.721755393Z" level=info msg="StartContainer for \"09dafc55de5c9894792b1579c4ed2f8ed86e2d2bda73532213ca802db47c9610\"" Sep 13 00:12:18.758966 systemd[1]: Started cri-containerd-09dafc55de5c9894792b1579c4ed2f8ed86e2d2bda73532213ca802db47c9610.scope - libcontainer container 09dafc55de5c9894792b1579c4ed2f8ed86e2d2bda73532213ca802db47c9610. Sep 13 00:12:18.791884 containerd[1465]: time="2025-09-13T00:12:18.791830933Z" level=info msg="StartContainer for \"09dafc55de5c9894792b1579c4ed2f8ed86e2d2bda73532213ca802db47c9610\" returns successfully" Sep 13 00:12:19.500131 kubelet[2556]: E0913 00:12:19.500063 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:19.563980 systemd[1]: Started sshd@17-10.0.0.108:22-10.0.0.1:51462.service - OpenSSH per-connection server daemon (10.0.0.1:51462). Sep 13 00:12:19.620803 sshd[5888]: Accepted publickey for core from 10.0.0.1 port 51462 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:19.622724 sshd[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:19.627309 systemd-logind[1450]: New session 18 of user core. Sep 13 00:12:19.642023 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:12:19.904308 sshd[5888]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:19.909155 systemd[1]: sshd@17-10.0.0.108:22-10.0.0.1:51462.service: Deactivated successfully. Sep 13 00:12:19.911439 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:12:19.912414 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:12:19.913672 systemd-logind[1450]: Removed session 18. Sep 13 00:12:20.500452 kubelet[2556]: E0913 00:12:20.500394 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:21.719664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount357866869.mount: Deactivated successfully. 
Sep 13 00:12:21.818698 containerd[1465]: time="2025-09-13T00:12:21.818638126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:21.824238 containerd[1465]: time="2025-09-13T00:12:21.823439126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:12:21.825167 containerd[1465]: time="2025-09-13T00:12:21.825136192Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:21.829760 containerd[1465]: time="2025-09-13T00:12:21.829687570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:21.830530 containerd[1465]: time="2025-09-13T00:12:21.830469462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.135697532s" Sep 13 00:12:21.830530 containerd[1465]: time="2025-09-13T00:12:21.830507955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:12:21.831629 containerd[1465]: time="2025-09-13T00:12:21.831602931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:12:21.832669 containerd[1465]: time="2025-09-13T00:12:21.832630188Z" level=info msg="CreateContainer within sandbox \"90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:12:21.889937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131040651.mount: Deactivated successfully. Sep 13 00:12:21.895181 containerd[1465]: time="2025-09-13T00:12:21.895129687Z" level=info msg="CreateContainer within sandbox \"90f4ff3b69d65f750e543e408e5092a16b2367214f5c594e655ae7b02077bf3c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"36d410d04905183645259b1c2fecebd454fc9616cd3ce23d75b073df820835d4\"" Sep 13 00:12:21.895717 containerd[1465]: time="2025-09-13T00:12:21.895678296Z" level=info msg="StartContainer for \"36d410d04905183645259b1c2fecebd454fc9616cd3ce23d75b073df820835d4\"" Sep 13 00:12:21.930951 systemd[1]: Started cri-containerd-36d410d04905183645259b1c2fecebd454fc9616cd3ce23d75b073df820835d4.scope - libcontainer container 36d410d04905183645259b1c2fecebd454fc9616cd3ce23d75b073df820835d4. 
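Each "Pulled image ... in N" entry pairs with an earlier "PullImage" request, and the reported duration is containerd's own internal measurement, so recomputing it from the two log timestamps lands within a few tens of microseconds. A cross-check for the whisker-backend pull above, assuming the RFC3339Nano timestamps exactly as printed in the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the PullImage / Pulled pair for
        // ghcr.io/flatcar/calico/whisker-backend:v3.30.3 above.
        start, _ := time.Parse(time.RFC3339Nano, "2025-09-13T00:12:18.694746222Z")
        done, _ := time.Parse(time.RFC3339Nano, "2025-09-13T00:12:21.830469462Z")

        // ~3.13572324s; containerd itself reports 3.135697532s.
        fmt.Println(done.Sub(start))
    }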
Sep 13 00:12:21.977675 containerd[1465]: time="2025-09-13T00:12:21.977475766Z" level=info msg="StartContainer for \"36d410d04905183645259b1c2fecebd454fc9616cd3ce23d75b073df820835d4\" returns successfully" Sep 13 00:12:22.443046 kubelet[2556]: I0913 00:12:22.442966 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5758544b55-26wzs" podStartSLOduration=1.758428549 podStartE2EDuration="35.442945288s" podCreationTimestamp="2025-09-13 00:11:47 +0000 UTC" firstStartedPulling="2025-09-13 00:11:48.146880292 +0000 UTC m=+43.736700814" lastFinishedPulling="2025-09-13 00:12:21.831397021 +0000 UTC m=+77.421217553" observedRunningTime="2025-09-13 00:12:22.442679954 +0000 UTC m=+78.032500497" watchObservedRunningTime="2025-09-13 00:12:22.442945288 +0000 UTC m=+78.032765810" Sep 13 00:12:22.443651 kubelet[2556]: I0913 00:12:22.443129 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-6x2wp" podStartSLOduration=41.163464543 podStartE2EDuration="1m0.443122134s" podCreationTimestamp="2025-09-13 00:11:22 +0000 UTC" firstStartedPulling="2025-09-13 00:11:56.60002852 +0000 UTC m=+52.189849042" lastFinishedPulling="2025-09-13 00:12:15.879686101 +0000 UTC m=+71.469506633" observedRunningTime="2025-09-13 00:12:16.604649439 +0000 UTC m=+72.194469961" watchObservedRunningTime="2025-09-13 00:12:22.443122134 +0000 UTC m=+78.032942666" Sep 13 00:12:24.916163 systemd[1]: Started sshd@18-10.0.0.108:22-10.0.0.1:48122.service - OpenSSH per-connection server daemon (10.0.0.1:48122). Sep 13 00:12:26.217228 sshd[5948]: Accepted publickey for core from 10.0.0.1 port 48122 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:26.219118 sshd[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:26.224881 systemd-logind[1450]: New session 19 of user core. Sep 13 00:12:26.231005 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:12:26.648016 sshd[5948]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:26.654296 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:12:26.658413 systemd[1]: sshd@18-10.0.0.108:22-10.0.0.1:48122.service: Deactivated successfully. Sep 13 00:12:26.661156 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:12:26.669323 systemd-logind[1450]: Removed session 19. 
Sep 13 00:12:26.864561 containerd[1465]: time="2025-09-13T00:12:26.864501265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:26.866286 containerd[1465]: time="2025-09-13T00:12:26.866234397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:12:26.869554 containerd[1465]: time="2025-09-13T00:12:26.869479407Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:26.872828 containerd[1465]: time="2025-09-13T00:12:26.872516712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:26.882905 containerd[1465]: time="2025-09-13T00:12:26.882847888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 5.051201714s" Sep 13 00:12:26.882905 containerd[1465]: time="2025-09-13T00:12:26.882907380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:12:26.886815 containerd[1465]: time="2025-09-13T00:12:26.885554062Z" level=info msg="CreateContainer within sandbox \"f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:12:26.926293 containerd[1465]: time="2025-09-13T00:12:26.926227745Z" level=info msg="CreateContainer within sandbox \"f6961c824e060b5cadda9cf937f2eeb70779563b7296cf5e0f2a31bc7f2549e9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a00b9f9e8c9ef6d950c4e25187a8152e9317f477157a6ca29e94ee7e261e5e31\"" Sep 13 00:12:26.927541 containerd[1465]: time="2025-09-13T00:12:26.927510238Z" level=info msg="StartContainer for \"a00b9f9e8c9ef6d950c4e25187a8152e9317f477157a6ca29e94ee7e261e5e31\"" Sep 13 00:12:26.969990 systemd[1]: Started cri-containerd-a00b9f9e8c9ef6d950c4e25187a8152e9317f477157a6ca29e94ee7e261e5e31.scope - libcontainer container a00b9f9e8c9ef6d950c4e25187a8152e9317f477157a6ca29e94ee7e261e5e31. 
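Every CreateContainer/StartContainer pair in this log is the two-step CRI contract kubelet drives: CreateContainer returns an opaque container ID bound to an existing pod sandbox, StartContainer takes only that ID, and with the systemd cgroup driver containerd wraps the container in a transient cri-containerd-<id>.scope unit, which is why each start is followed by a systemd "Started cri-containerd-..." line. A bare client sketch against the published CRI v1 gRPC API follows; the socket path is containerd's default, error handling is minimal, and this is an illustration rather than kubelet's own runtime client.

    package criexample

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // startInSandbox issues the CreateContainer/StartContainer pair seen
    // throughout the log, against containerd's CRI socket.
    func startInSandbox(ctx context.Context, sandboxID string,
        cfg *runtimeapi.ContainerConfig, podCfg *runtimeapi.PodSandboxConfig) (string, error) {

        conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            return "", err
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // "CreateContainer within sandbox ... returns container id ..."
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sandboxID,
            Config:        cfg,
            SandboxConfig: podCfg,
        })
        if err != nil {
            return "", err
        }

        // "StartContainer for ... returns successfully"
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        })
        return created.ContainerId, err
    }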
Sep 13 00:12:27.019814 containerd[1465]: time="2025-09-13T00:12:27.017363993Z" level=info msg="StartContainer for \"a00b9f9e8c9ef6d950c4e25187a8152e9317f477157a6ca29e94ee7e261e5e31\" returns successfully" Sep 13 00:12:27.717308 kubelet[2556]: I0913 00:12:27.717240 2556 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:12:27.722909 kubelet[2556]: I0913 00:12:27.722878 2556 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:12:31.654700 systemd[1]: Started sshd@19-10.0.0.108:22-10.0.0.1:42998.service - OpenSSH per-connection server daemon (10.0.0.1:42998). Sep 13 00:12:31.709962 sshd[6011]: Accepted publickey for core from 10.0.0.1 port 42998 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:31.712000 sshd[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:31.716374 systemd-logind[1450]: New session 20 of user core. Sep 13 00:12:31.729921 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:12:31.856179 kubelet[2556]: I0913 00:12:31.856134 2556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:12:31.987729 kubelet[2556]: I0913 00:12:31.987558 2556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6249r" podStartSLOduration=41.706988722 podStartE2EDuration="1m9.987539261s" podCreationTimestamp="2025-09-13 00:11:22 +0000 UTC" firstStartedPulling="2025-09-13 00:11:58.60315366 +0000 UTC m=+54.192974182" lastFinishedPulling="2025-09-13 00:12:26.883704189 +0000 UTC m=+82.473524721" observedRunningTime="2025-09-13 00:12:27.512698119 +0000 UTC m=+83.102518661" watchObservedRunningTime="2025-09-13 00:12:31.987539261 +0000 UTC m=+87.577359773" Sep 13 00:12:32.004900 sshd[6011]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:32.011631 systemd[1]: sshd@19-10.0.0.108:22-10.0.0.1:42998.service: Deactivated successfully. Sep 13 00:12:32.013774 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:12:32.016404 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:12:32.024182 systemd[1]: Started sshd@20-10.0.0.108:22-10.0.0.1:43004.service - OpenSSH per-connection server daemon (10.0.0.1:43004). Sep 13 00:12:32.026411 systemd-logind[1450]: Removed session 20. Sep 13 00:12:32.080852 sshd[6029]: Accepted publickey for core from 10.0.0.1 port 43004 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:32.083061 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:32.090017 systemd-logind[1450]: New session 21 of user core. Sep 13 00:12:32.096962 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:12:32.747625 sshd[6029]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:32.760865 systemd[1]: sshd@20-10.0.0.108:22-10.0.0.1:43004.service: Deactivated successfully. Sep 13 00:12:32.763141 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:12:32.764857 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:12:32.770120 systemd[1]: Started sshd@21-10.0.0.108:22-10.0.0.1:43008.service - OpenSSH per-connection server daemon (10.0.0.1:43008). 
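The two csi_plugin.go lines above are the tail of kubelet's plugin-registration handshake: the csi-node-driver-registrar started just before serves a small Registration gRPC service on a socket under /var/lib/kubelet/plugins_registry/, kubelet calls GetInfo to learn the driver name (csi.tigera.io), endpoint, and supported versions (1.0.0), validates them, and registers the driver. A registrar-side sketch, assuming the published k8s.io/kubelet pluginregistration v1 API; real registrars also clean up stale sockets and react to a failed NotifyRegistrationStatus.

    package main

    import (
        "context"
        "net"

        "google.golang.org/grpc"
        registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
    )

    // registrar answers kubelet's registration handshake for csi.tigera.io.
    type registrar struct{}

    func (registrar) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
        return &registerapi.PluginInfo{
            Type:              registerapi.CSIPlugin,
            Name:              "csi.tigera.io",
            Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
            SupportedVersions: []string{"1.0.0"},
        }, nil
    }

    func (registrar) NotifyRegistrationStatus(ctx context.Context, s *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
        // kubelet calls back here after "Register new plugin ..." succeeds.
        return &registerapi.RegistrationStatusResponse{}, nil
    }

    func main() {
        l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
        if err != nil {
            panic(err)
        }
        s := grpc.NewServer()
        registerapi.RegisterRegistrationServer(s, registrar{})
        _ = s.Serve(l)
    }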
Sep 13 00:12:32.771470 systemd-logind[1450]: Removed session 21. Sep 13 00:12:32.816260 sshd[6064]: Accepted publickey for core from 10.0.0.1 port 43008 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:32.818180 sshd[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:32.822598 systemd-logind[1450]: New session 22 of user core. Sep 13 00:12:32.833133 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:12:35.368577 sshd[6064]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:35.382414 systemd[1]: sshd@21-10.0.0.108:22-10.0.0.1:43008.service: Deactivated successfully. Sep 13 00:12:35.384492 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:12:35.386143 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:12:35.387631 systemd[1]: Started sshd@22-10.0.0.108:22-10.0.0.1:43012.service - OpenSSH per-connection server daemon (10.0.0.1:43012). Sep 13 00:12:35.388640 systemd-logind[1450]: Removed session 22. Sep 13 00:12:35.447589 sshd[6105]: Accepted publickey for core from 10.0.0.1 port 43012 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:35.449577 sshd[6105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:35.454366 systemd-logind[1450]: New session 23 of user core. Sep 13 00:12:35.464055 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:12:36.268989 sshd[6105]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:36.279955 systemd[1]: sshd@22-10.0.0.108:22-10.0.0.1:43012.service: Deactivated successfully. Sep 13 00:12:36.284276 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:12:36.288398 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:12:36.293286 systemd[1]: Started sshd@23-10.0.0.108:22-10.0.0.1:43020.service - OpenSSH per-connection server daemon (10.0.0.1:43020). Sep 13 00:12:36.295081 systemd-logind[1450]: Removed session 23. Sep 13 00:12:36.331327 sshd[6160]: Accepted publickey for core from 10.0.0.1 port 43020 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:36.333250 sshd[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:36.339446 systemd-logind[1450]: New session 24 of user core. Sep 13 00:12:36.347029 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:12:36.497196 sshd[6160]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:36.502549 systemd[1]: sshd@23-10.0.0.108:22-10.0.0.1:43020.service: Deactivated successfully. Sep 13 00:12:36.504890 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:12:36.505901 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:12:36.507291 systemd-logind[1450]: Removed session 24. Sep 13 00:12:41.514131 systemd[1]: Started sshd@24-10.0.0.108:22-10.0.0.1:53400.service - OpenSSH per-connection server daemon (10.0.0.1:53400). Sep 13 00:12:41.554698 sshd[6199]: Accepted publickey for core from 10.0.0.1 port 53400 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:41.556515 sshd[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:41.561078 systemd-logind[1450]: New session 25 of user core. Sep 13 00:12:41.572075 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 13 00:12:41.736300 sshd[6199]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:41.742949 systemd[1]: sshd@24-10.0.0.108:22-10.0.0.1:53400.service: Deactivated successfully. Sep 13 00:12:41.745924 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:12:41.747017 systemd-logind[1450]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:12:41.748573 systemd-logind[1450]: Removed session 25. Sep 13 00:12:46.750395 systemd[1]: Started sshd@25-10.0.0.108:22-10.0.0.1:53404.service - OpenSSH per-connection server daemon (10.0.0.1:53404). Sep 13 00:12:46.789014 sshd[6219]: Accepted publickey for core from 10.0.0.1 port 53404 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:46.791276 sshd[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:46.795611 systemd-logind[1450]: New session 26 of user core. Sep 13 00:12:46.806061 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 13 00:12:46.938745 sshd[6219]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:46.943104 systemd[1]: sshd@25-10.0.0.108:22-10.0.0.1:53404.service: Deactivated successfully. Sep 13 00:12:46.945641 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:12:46.947298 systemd-logind[1450]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:12:46.948296 systemd-logind[1450]: Removed session 26. Sep 13 00:12:51.500331 kubelet[2556]: E0913 00:12:51.500261 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:12:51.957415 systemd[1]: Started sshd@26-10.0.0.108:22-10.0.0.1:36052.service - OpenSSH per-connection server daemon (10.0.0.1:36052). Sep 13 00:12:52.015340 sshd[6234]: Accepted publickey for core from 10.0.0.1 port 36052 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:52.017504 sshd[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:52.022360 systemd-logind[1450]: New session 27 of user core. Sep 13 00:12:52.028945 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 13 00:12:52.295463 sshd[6234]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:52.300726 systemd-logind[1450]: Session 27 logged out. Waiting for processes to exit. Sep 13 00:12:52.301053 systemd[1]: sshd@26-10.0.0.108:22-10.0.0.1:36052.service: Deactivated successfully. Sep 13 00:12:52.303591 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 00:12:52.304511 systemd-logind[1450]: Removed session 27. Sep 13 00:12:57.321206 systemd[1]: Started sshd@27-10.0.0.108:22-10.0.0.1:36068.service - OpenSSH per-connection server daemon (10.0.0.1:36068). Sep 13 00:12:57.386631 sshd[6249]: Accepted publickey for core from 10.0.0.1 port 36068 ssh2: RSA SHA256:E2li1XGrhhwy0ZDl4cyDLdomj69UeSun21wOBPeS+vc Sep 13 00:12:57.388818 sshd[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:57.394914 systemd-logind[1450]: New session 28 of user core. Sep 13 00:12:57.399091 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 13 00:12:57.733181 sshd[6249]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:57.737889 systemd[1]: sshd@27-10.0.0.108:22-10.0.0.1:36068.service: Deactivated successfully. Sep 13 00:12:57.740302 systemd[1]: session-28.scope: Deactivated successfully. 
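The SSH entries threaded through this section repeat one sshd/systemd-logind cadence for sessions 16 through 28: Accepted publickey, pam_unix session opened, "New session N of user core", session closed, scope deactivated, "Removed session N". When auditing such a log, pairing the New/Removed lines yields per-session durations; the purely illustrative sketch below assumes exactly the journal line format shown here (year omitted, microsecond timestamps).

    package main

    import (
        "bufio"
        "fmt"
        "regexp"
        "strings"
        "time"
    )

    var (
        // Timestamp prefix and session markers as they appear in this journal.
        tsRe  = regexp.MustCompile(`^(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+)`)
        newRe = regexp.MustCompile(`New session (\d+) of user`)
        endRe = regexp.MustCompile(`Removed session (\d+)\.`)
    )

    func main() {
        // Two lines copied from the log above (session 16).
        log := `Sep 13 00:12:08.705846 systemd-logind[1450]: New session 16 of user core.
    Sep 13 00:12:09.042726 systemd-logind[1450]: Removed session 16.`

        opened := map[string]time.Time{}
        sc := bufio.NewScanner(strings.NewReader(log))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            ts := tsRe.FindString(line)
            // The journal omits the year; assume 2025 per the message bodies.
            t, err := time.Parse("2006 Jan 02 15:04:05.000000", "2025 "+ts)
            if err != nil {
                continue
            }
            if m := newRe.FindStringSubmatch(line); m != nil {
                opened[m[1]] = t
            } else if m := endRe.FindStringSubmatch(line); m != nil {
                if start, ok := opened[m[1]]; ok {
                    fmt.Printf("session %s lasted %s\n", m[1], t.Sub(start))
                }
            }
        }
    }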
Sep 13 00:12:57.741680 systemd-logind[1450]: Session 28 logged out. Waiting for processes to exit. Sep 13 00:12:57.743033 systemd-logind[1450]: Removed session 28.