Jul 11 00:22:10.988504 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 22:46:23 -00 2025
Jul 11 00:22:10.988538 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:22:10.988555 kernel: BIOS-provided physical RAM map:
Jul 11 00:22:10.988563 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 11 00:22:10.988572 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 11 00:22:10.988580 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 11 00:22:10.988593 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 11 00:22:10.988616 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 11 00:22:10.988624 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 11 00:22:10.988638 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 11 00:22:10.988647 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 11 00:22:10.988655 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 11 00:22:10.988668 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 11 00:22:10.988676 kernel: NX (Execute Disable) protection: active
Jul 11 00:22:10.988686 kernel: APIC: Static calls initialized
Jul 11 00:22:10.988703 kernel: SMBIOS 2.8 present.
Jul 11 00:22:10.988713 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 11 00:22:10.988723 kernel: Hypervisor detected: KVM
Jul 11 00:22:10.988732 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 11 00:22:10.988741 kernel: kvm-clock: using sched offset of 3482692731 cycles
Jul 11 00:22:10.988751 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 11 00:22:10.988761 kernel: tsc: Detected 2794.748 MHz processor
Jul 11 00:22:10.988770 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 11 00:22:10.988779 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 11 00:22:10.988792 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 11 00:22:10.988799 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 11 00:22:10.988807 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 11 00:22:10.988814 kernel: Using GB pages for direct mapping
Jul 11 00:22:10.988820 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:22:10.988827 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 11 00:22:10.988834 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:10.988842 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:10.988849 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:10.988859 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 11 00:22:10.988866 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:10.988873 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:10.988880 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:10.988887 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:22:10.988894 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 11 00:22:10.988901 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 11 00:22:10.988915 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 11 00:22:10.988928 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 11 00:22:10.988938 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 11 00:22:10.988948 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 11 00:22:10.988956 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 11 00:22:10.988970 kernel: No NUMA configuration found
Jul 11 00:22:10.988980 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 11 00:22:10.988991 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 11 00:22:10.988999 kernel: Zone ranges:
Jul 11 00:22:10.989006 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 11 00:22:10.989013 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 11 00:22:10.989020 kernel: Normal empty
Jul 11 00:22:10.989028 kernel: Movable zone start for each node
Jul 11 00:22:10.989035 kernel: Early memory node ranges
Jul 11 00:22:10.989043 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 11 00:22:10.989050 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 11 00:22:10.989057 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 11 00:22:10.989067 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 11 00:22:10.989092 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 11 00:22:10.989100 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 11 00:22:10.989107 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 11 00:22:10.989115 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 11 00:22:10.989122 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 11 00:22:10.989129 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 11 00:22:10.989137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 11 00:22:10.989144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 11 00:22:10.989155 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 11 00:22:10.989162 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 11 00:22:10.989170 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 11 00:22:10.989177 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 11 00:22:10.989184 kernel: TSC deadline timer available
Jul 11 00:22:10.989192 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 11 00:22:10.989199 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 11 00:22:10.989206 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 11 00:22:10.989216 kernel: kvm-guest: setup PV sched yield
Jul 11 00:22:10.989226 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 11 00:22:10.989233 kernel: Booting paravirtualized kernel on KVM
Jul 11 00:22:10.989241 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 11 00:22:10.989248 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 11 00:22:10.989256 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 11 00:22:10.989263 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 11 00:22:10.989270 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 11 00:22:10.989277 kernel: kvm-guest: PV spinlocks enabled
Jul 11 00:22:10.989284 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 11 00:22:10.989296 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:22:10.989312 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:22:10.989324 kernel: random: crng init done
Jul 11 00:22:10.989334 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:22:10.989343 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:22:10.989353 kernel: Fallback order for Node 0: 0
Jul 11 00:22:10.989363 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 632732
Jul 11 00:22:10.989372 kernel: Policy zone: DMA32
Jul 11 00:22:10.989385 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:22:10.989392 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22744K rodata, 42872K init, 2320K bss, 136900K reserved, 0K cma-reserved)
Jul 11 00:22:10.989400 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:22:10.989407 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 11 00:22:10.989414 kernel: ftrace: allocated 149 pages with 4 groups
Jul 11 00:22:10.989422 kernel: Dynamic Preempt: voluntary
Jul 11 00:22:10.989429 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:22:10.989437 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:22:10.989445 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:22:10.989455 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:22:10.989462 kernel: Rude variant of Tasks RCU enabled.
Jul 11 00:22:10.989469 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:22:10.989477 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:22:10.989488 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:22:10.989496 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 11 00:22:10.989504 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 00:22:10.989511 kernel: Console: colour VGA+ 80x25
Jul 11 00:22:10.989518 kernel: printk: console [ttyS0] enabled
Jul 11 00:22:10.989528 kernel: ACPI: Core revision 20230628
Jul 11 00:22:10.989536 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 11 00:22:10.989543 kernel: APIC: Switch to symmetric I/O mode setup
Jul 11 00:22:10.989551 kernel: x2apic enabled
Jul 11 00:22:10.989558 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 11 00:22:10.989565 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 11 00:22:10.989573 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 11 00:22:10.989580 kernel: kvm-guest: setup PV IPIs
Jul 11 00:22:10.989614 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 11 00:22:10.989622 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 11 00:22:10.989630 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 11 00:22:10.989638 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 11 00:22:10.989648 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 11 00:22:10.989656 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 11 00:22:10.989664 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 11 00:22:10.989671 kernel: Spectre V2 : Mitigation: Retpolines
Jul 11 00:22:10.989679 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 11 00:22:10.989690 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 11 00:22:10.989706 kernel: RETBleed: Mitigation: untrained return thunk
Jul 11 00:22:10.989724 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 11 00:22:10.989735 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 11 00:22:10.989745 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 11 00:22:10.989756 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 11 00:22:10.989766 kernel: x86/bugs: return thunk changed
Jul 11 00:22:10.989777 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 11 00:22:10.989798 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 11 00:22:10.989810 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 11 00:22:10.989820 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 11 00:22:10.989830 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 11 00:22:10.989840 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 11 00:22:10.989849 kernel: Freeing SMP alternatives memory: 32K
Jul 11 00:22:10.989859 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:22:10.989869 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 11 00:22:10.989878 kernel: landlock: Up and running.
Jul 11 00:22:10.989892 kernel: SELinux: Initializing.
Jul 11 00:22:10.989902 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:22:10.989912 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:22:10.989922 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 11 00:22:10.989931 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:22:10.989941 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:22:10.989951 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:22:10.989962 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 11 00:22:10.989977 kernel: ... version: 0
Jul 11 00:22:10.989991 kernel: ... bit width: 48
Jul 11 00:22:10.990001 kernel: ... generic registers: 6
Jul 11 00:22:10.990011 kernel: ... value mask: 0000ffffffffffff
Jul 11 00:22:10.990020 kernel: ... max period: 00007fffffffffff
Jul 11 00:22:10.990028 kernel: ... fixed-purpose events: 0
Jul 11 00:22:10.990036 kernel: ... event mask: 000000000000003f
Jul 11 00:22:10.990046 kernel: signal: max sigframe size: 1776
Jul 11 00:22:10.990054 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:22:10.990062 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 00:22:10.990097 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:22:10.990105 kernel: smpboot: x86: Booting SMP configuration:
Jul 11 00:22:10.990113 kernel: .... node #0, CPUs: #1 #2 #3
Jul 11 00:22:10.990120 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:22:10.990128 kernel: smpboot: Max logical packages: 1
Jul 11 00:22:10.990136 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 11 00:22:10.990143 kernel: devtmpfs: initialized
Jul 11 00:22:10.990151 kernel: x86/mm: Memory block size: 128MB
Jul 11 00:22:10.990159 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:22:10.990170 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:22:10.990178 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:22:10.990186 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:22:10.990193 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:22:10.990201 kernel: audit: type=2000 audit(1752193329.924:1): state=initialized audit_enabled=0 res=1
Jul 11 00:22:10.990209 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:22:10.990217 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 11 00:22:10.990224 kernel: cpuidle: using governor menu
Jul 11 00:22:10.990232 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:22:10.990243 kernel: dca service started, version 1.12.1
Jul 11 00:22:10.990251 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 11 00:22:10.990259 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 11 00:22:10.990267 kernel: PCI: Using configuration type 1 for base access
Jul 11 00:22:10.990275 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 11 00:22:10.990282 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:22:10.990290 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 00:22:10.990298 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:22:10.990306 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 00:22:10.990317 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:22:10.990324 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:22:10.990332 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:22:10.990340 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:22:10.990348 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 11 00:22:10.990355 kernel: ACPI: Interpreter enabled
Jul 11 00:22:10.990363 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 11 00:22:10.990370 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 11 00:22:10.990378 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 11 00:22:10.990389 kernel: PCI: Using E820 reservations for host bridge windows
Jul 11 00:22:10.990397 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 11 00:22:10.990404 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:22:10.990658 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:22:10.990810 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 11 00:22:10.990943 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 11 00:22:10.990954 kernel: PCI host bridge to bus 0000:00
Jul 11 00:22:10.991145 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 11 00:22:10.991317 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 11 00:22:10.991472 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 11 00:22:10.991642 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 11 00:22:10.991793 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 11 00:22:10.991941 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 11 00:22:10.992128 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:22:10.992365 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 11 00:22:10.992551 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 11 00:22:10.992714 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 11 00:22:10.992886 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 11 00:22:10.993037 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 11 00:22:10.993241 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 11 00:22:10.993473 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 11 00:22:10.993647 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 11 00:22:10.993803 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 11 00:22:10.993974 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 11 00:22:10.994202 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 11 00:22:10.994340 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 11 00:22:10.994478 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 11 00:22:10.994628 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 11 00:22:10.994792 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 11 00:22:10.994956 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 11 00:22:10.995226 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 11 00:22:10.995400 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 11 00:22:10.995531 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 11 00:22:10.995713 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 11 00:22:10.995906 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 11 00:22:10.996089 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 11 00:22:10.996246 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 11 00:22:10.996394 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 11 00:22:10.996577 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 11 00:22:10.996722 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 11 00:22:10.996734 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 11 00:22:10.996748 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 11 00:22:10.996756 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 11 00:22:10.996764 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 11 00:22:10.996772 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 11 00:22:10.996780 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 11 00:22:10.996788 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 11 00:22:10.996797 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 11 00:22:10.996808 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 11 00:22:10.996818 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 11 00:22:10.996833 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 11 00:22:10.996843 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 11 00:22:10.996852 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 11 00:22:10.996860 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 11 00:22:10.996868 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 11 00:22:10.996876 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 11 00:22:10.996884 kernel: iommu: Default domain type: Translated
Jul 11 00:22:10.996892 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 11 00:22:10.996900 kernel: PCI: Using ACPI for IRQ routing
Jul 11 00:22:10.996911 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 11 00:22:10.996919 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 11 00:22:10.996927 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 11 00:22:10.997120 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 11 00:22:10.997271 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 11 00:22:10.997440 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 11 00:22:10.997453 kernel: vgaarb: loaded
Jul 11 00:22:10.997461 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 11 00:22:10.997475 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 11 00:22:10.997483 kernel: clocksource: Switched to clocksource kvm-clock
Jul 11 00:22:10.997491 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:22:10.997499 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:22:10.997508 kernel: pnp: PnP ACPI init
Jul 11 00:22:10.997698 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 11 00:22:10.997715 kernel: pnp: PnP ACPI: found 6 devices
Jul 11 00:22:10.997723 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 11 00:22:10.997735 kernel: NET: Registered PF_INET protocol family
Jul 11 00:22:10.997743 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:22:10.997751 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:22:10.997759 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:22:10.997767 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:22:10.997775 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 00:22:10.997783 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:22:10.997791 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:22:10.997799 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:22:10.997809 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:22:10.997817 kernel: NET: Registered PF_XDP protocol family
Jul 11 00:22:10.997944 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 11 00:22:10.998182 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 11 00:22:10.998309 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 11 00:22:10.998426 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 11 00:22:10.998541 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 11 00:22:10.998669 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 11 00:22:10.998685 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:22:10.998693 kernel: Initialise system trusted keyrings
Jul 11 00:22:10.998701 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:22:10.998710 kernel: Key type asymmetric registered
Jul 11 00:22:10.998719 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:22:10.998730 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 11 00:22:10.998741 kernel: io scheduler mq-deadline registered
Jul 11 00:22:10.998751 kernel: io scheduler kyber registered
Jul 11 00:22:10.998760 kernel: io scheduler bfq registered
Jul 11 00:22:10.998768 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 11 00:22:10.998780 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 11 00:22:10.998788 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 11 00:22:10.998796 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 11 00:22:10.998804 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:22:10.998812 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 11 00:22:10.998820 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 11 00:22:10.998828 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 11 00:22:10.998836 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 11 00:22:10.998844 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 11 00:22:10.999017 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 11 00:22:10.999205 kernel: rtc_cmos 00:04: registered as rtc0
Jul 11 00:22:10.999328 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T00:22:10 UTC (1752193330)
Jul 11 00:22:10.999446 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 11 00:22:10.999456 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 11 00:22:10.999464 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:22:10.999472 kernel: Segment Routing with IPv6
Jul 11 00:22:10.999485 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:22:10.999493 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:22:10.999501 kernel: Key type dns_resolver registered
Jul 11 00:22:10.999509 kernel: IPI shorthand broadcast: enabled
Jul 11 00:22:10.999517 kernel: sched_clock: Marking stable (1112002937, 124873880)->(1314350586, -77473769)
Jul 11 00:22:10.999525 kernel: registered taskstats version 1
Jul 11 00:22:10.999533 kernel: Loading compiled-in X.509 certificates
Jul 11 00:22:10.999541 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 5956f0842928c96096c398e9db55919cd236a39f'
Jul 11 00:22:10.999548 kernel: Key type .fscrypt registered
Jul 11 00:22:10.999556 kernel: Key type fscrypt-provisioning registered
Jul 11 00:22:10.999567 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:22:10.999575 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:22:10.999583 kernel: ima: No architecture policies found
Jul 11 00:22:10.999590 kernel: clk: Disabling unused clocks
Jul 11 00:22:10.999598 kernel: Freeing unused kernel image (initmem) memory: 42872K
Jul 11 00:22:10.999616 kernel: Write protecting the kernel read-only data: 36864k
Jul 11 00:22:10.999624 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Jul 11 00:22:10.999632 kernel: Run /init as init process
Jul 11 00:22:10.999643 kernel: with arguments:
Jul 11 00:22:10.999651 kernel: /init
Jul 11 00:22:10.999659 kernel: with environment:
Jul 11 00:22:10.999667 kernel: HOME=/
Jul 11 00:22:10.999674 kernel: TERM=linux
Jul 11 00:22:10.999682 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:22:10.999692 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:22:10.999703 systemd[1]: Detected virtualization kvm.
Jul 11 00:22:10.999714 systemd[1]: Detected architecture x86-64.
Jul 11 00:22:10.999722 systemd[1]: Running in initrd.
Jul 11 00:22:10.999730 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:22:10.999738 systemd[1]: Hostname set to <localhost>.
Jul 11 00:22:10.999747 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:22:10.999755 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:22:10.999764 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:22:10.999772 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:22:10.999784 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 00:22:10.999793 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:22:10.999814 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 00:22:10.999826 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 00:22:10.999836 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 00:22:10.999847 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 00:22:10.999856 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:22:10.999864 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:22:10.999873 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:22:10.999881 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:22:10.999890 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:22:10.999898 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:22:10.999907 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:22:10.999918 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:22:10.999926 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:22:10.999935 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 11 00:22:10.999944 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:22:10.999953 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:22:10.999961 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:22:10.999970 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:22:10.999978 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 00:22:10.999987 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:22:10.999998 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 00:22:11.000007 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:22:11.000015 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:22:11.000024 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:22:11.000032 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:22:11.000041 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 00:22:11.000050 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:22:11.000058 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:22:11.000104 systemd-journald[190]: Collecting audit messages is disabled.
Jul 11 00:22:11.000131 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:22:11.000142 systemd-journald[190]: Journal started
Jul 11 00:22:11.000168 systemd-journald[190]: Runtime Journal (/run/log/journal/73b9791386e24230923faa24c402e239) is 6.0M, max 48.4M, 42.3M free.
Jul 11 00:22:11.002106 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:22:11.002386 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:22:11.045124 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:22:11.012571 systemd-modules-load[193]: Inserted module 'overlay'
Jul 11 00:22:11.044373 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:22:11.049140 kernel: Bridge firewalling registered
Jul 11 00:22:11.049160 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jul 11 00:22:11.056511 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:22:11.059340 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:22:11.060753 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:22:11.061319 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:22:11.067279 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:22:11.079974 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:22:11.080678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:22:11.084013 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:22:11.097441 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 00:22:11.098096 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:22:11.103113 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:22:11.118755 dracut-cmdline[227]: dracut-dracut-053
Jul 11 00:22:11.123421 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:22:11.144858 systemd-resolved[229]: Positive Trust Anchors:
Jul 11 00:22:11.144879 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:22:11.144923 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:22:11.148280 systemd-resolved[229]: Defaulting to hostname 'linux'.
Jul 11 00:22:11.149773 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:22:11.156147 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:22:11.230144 kernel: SCSI subsystem initialized
Jul 11 00:22:11.246145 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:22:11.262202 kernel: iscsi: registered transport (tcp)
Jul 11 00:22:11.288122 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:22:11.288209 kernel: QLogic iSCSI HBA Driver
Jul 11 00:22:11.356512 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:22:11.369423 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 00:22:11.401310 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:22:11.401400 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:22:11.402578 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 11 00:22:11.452186 kernel: raid6: avx2x4 gen() 21612 MB/s
Jul 11 00:22:11.469123 kernel: raid6: avx2x2 gen() 20178 MB/s
Jul 11 00:22:11.486239 kernel: raid6: avx2x1 gen() 19081 MB/s
Jul 11 00:22:11.486343 kernel: raid6: using algorithm avx2x4 gen() 21612 MB/s
Jul 11 00:22:11.504368 kernel: raid6: .... xor() 6140 MB/s, rmw enabled
Jul 11 00:22:11.504470 kernel: raid6: using avx2x2 recovery algorithm
Jul 11 00:22:11.529172 kernel: xor: automatically using best checksumming function avx
Jul 11 00:22:11.707137 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 00:22:11.725844 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:22:11.744512 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:22:11.761030 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jul 11 00:22:11.766170 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:22:11.793267 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 00:22:11.814005 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Jul 11 00:22:11.856610 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:22:11.866261 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:22:11.956804 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:22:11.991872 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 00:22:12.003885 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:22:12.007926 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:22:12.018261 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 11 00:22:12.018544 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 00:22:12.010784 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:22:12.012270 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:22:12.026478 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 00:22:12.042431 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 00:22:12.042470 kernel: GPT:9289727 != 19775487
Jul 11 00:22:12.042485 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 00:22:12.042498 kernel: GPT:9289727 != 19775487
Jul 11 00:22:12.042511 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 00:22:12.042524 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:22:12.042542 kernel: cryptd: max_cpu_qlen set to 1000
Jul 11 00:22:12.040906 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:22:12.055271 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:22:12.056666 kernel: libata version 3.00 loaded.
Jul 11 00:22:12.055910 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:22:12.060963 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:22:12.082114 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 11 00:22:12.082740 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:22:12.083062 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:22:12.089315 kernel: AES CTR mode by8 optimization enabled
Jul 11 00:22:12.086535 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:22:12.097934 kernel: ahci 0000:00:1f.2: version 3.0
Jul 11 00:22:12.098204 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 11 00:22:12.097427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:22:12.103516 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 11 00:22:12.103717 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 11 00:22:12.112095 kernel: scsi host0: ahci
Jul 11 00:22:12.139611 kernel: scsi host1: ahci
Jul 11 00:22:12.139997 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Jul 11 00:22:12.140015 kernel: BTRFS: device fsid 54fb9359-b495-4b0c-b313-b0e2955e4a38 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (465)
Jul 11 00:22:12.128154 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 00:22:12.167520 kernel: scsi host2: ahci
Jul 11 00:22:12.167668 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 00:22:12.167950 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 00:22:12.170474 kernel: scsi host3: ahci
Jul 11 00:22:12.173306 kernel: scsi host4: ahci
Jul 11 00:22:12.173650 kernel: scsi host5: ahci
Jul 11 00:22:12.173865 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jul 11 00:22:12.173881 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jul 11 00:22:12.174504 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jul 11 00:22:12.174742 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:22:12.182176 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jul 11 00:22:12.182211 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jul 11 00:22:12.182229 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jul 11 00:22:12.188037 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 00:22:12.226712 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:22:12.239660 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 00:22:12.242288 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:22:12.255918 disk-uuid[556]: Primary Header is updated.
Jul 11 00:22:12.255918 disk-uuid[556]: Secondary Entries is updated.
Jul 11 00:22:12.255918 disk-uuid[556]: Secondary Header is updated.
Jul 11 00:22:12.260116 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:22:12.268110 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:22:12.283041 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:22:12.485120 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 11 00:22:12.485228 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 11 00:22:12.494134 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 11 00:22:12.494255 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 11 00:22:12.495110 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 11 00:22:12.496122 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 11 00:22:12.497126 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 11 00:22:12.497157 kernel: ata3.00: applying bridge limits
Jul 11 00:22:12.498104 kernel: ata3.00: configured for UDMA/100
Jul 11 00:22:12.500110 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 11 00:22:12.560219 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 11 00:22:12.560622 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 11 00:22:12.574121 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 11 00:22:13.271375 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:22:13.271441 disk-uuid[558]: The operation has completed successfully.
Jul 11 00:22:13.300556 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 00:22:13.300762 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 00:22:13.341307 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 00:22:13.345060 sh[594]: Success
Jul 11 00:22:13.358127 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 11 00:22:13.397365 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 00:22:13.411143 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 00:22:13.414596 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 00:22:13.428163 kernel: BTRFS info (device dm-0): first mount of filesystem 54fb9359-b495-4b0c-b313-b0e2955e4a38
Jul 11 00:22:13.428214 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:22:13.428226 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 11 00:22:13.429177 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 11 00:22:13.430646 kernel: BTRFS info (device dm-0): using free space tree
Jul 11 00:22:13.435645 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 00:22:13.436834 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 00:22:13.456437 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 00:22:13.460245 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 00:22:13.472129 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:22:13.472216 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:22:13.472233 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:22:13.475112 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:22:13.486967 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 11 00:22:13.488678 kernel: BTRFS info (device vda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:22:13.499791 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 00:22:13.507284 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 00:22:13.704290 ignition[688]: Ignition 2.19.0
Jul 11 00:22:13.704302 ignition[688]: Stage: fetch-offline
Jul 11 00:22:13.704343 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:22:13.704356 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:22:13.704497 ignition[688]: parsed url from cmdline: ""
Jul 11 00:22:13.704501 ignition[688]: no config URL provided
Jul 11 00:22:13.704507 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 00:22:13.704518 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Jul 11 00:22:13.704560 ignition[688]: op(1): [started] loading QEMU firmware config module
Jul 11 00:22:13.704570 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 00:22:13.714066 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:22:13.715321 ignition[688]: op(1): [finished] loading QEMU firmware config module
Jul 11 00:22:13.720228 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:22:13.763205 ignition[688]: parsing config with SHA512: ed6621e8ac86cea942441ccc48d86c1ab7502fec471284625a7385ca51e1d54a479647048c73beb069d41a1140b6af11dfd2291623579af00901203e927fbad2
Jul 11 00:22:13.770214 systemd-networkd[783]: lo: Link UP
Jul 11 00:22:13.770225 systemd-networkd[783]: lo: Gained carrier
Jul 11 00:22:13.772916 systemd-networkd[783]: Enumeration completed
Jul 11 00:22:13.774111 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:22:13.776185 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:22:13.776194 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:22:13.780261 ignition[688]: fetch-offline: fetch-offline passed
Jul 11 00:22:13.777232 systemd[1]: Reached target network.target - Network.
Jul 11 00:22:13.780393 ignition[688]: Ignition finished successfully
Jul 11 00:22:13.779490 unknown[688]: fetched base config from "system"
Jul 11 00:22:13.779498 unknown[688]: fetched user config from "qemu"
Jul 11 00:22:13.784399 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:22:13.785030 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 00:22:13.788127 systemd-networkd[783]: eth0: Link UP
Jul 11 00:22:13.788132 systemd-networkd[783]: eth0: Gained carrier
Jul 11 00:22:13.788151 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:22:13.789407 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 00:22:13.804188 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:22:13.815586 ignition[786]: Ignition 2.19.0
Jul 11 00:22:13.815598 ignition[786]: Stage: kargs
Jul 11 00:22:13.815804 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:22:13.815817 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:22:13.816695 ignition[786]: kargs: kargs passed
Jul 11 00:22:13.816745 ignition[786]: Ignition finished successfully
Jul 11 00:22:13.823036 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 00:22:13.836272 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 00:22:13.929259 ignition[795]: Ignition 2.19.0
Jul 11 00:22:13.929271 ignition[795]: Stage: disks
Jul 11 00:22:13.929476 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:22:13.929488 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:22:13.930478 ignition[795]: disks: disks passed
Jul 11 00:22:13.930537 ignition[795]: Ignition finished successfully
Jul 11 00:22:13.936171 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 00:22:13.938366 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 00:22:13.938682 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:22:13.939005 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:22:13.939500 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:22:13.939821 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:22:13.953359 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 00:22:13.968043 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 11 00:22:13.974789 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 00:22:13.990321 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 00:22:14.093129 kernel: EXT4-fs (vda9): mounted filesystem 66ba5133-8c5a-461b-b2c1-a823c72af79b r/w with ordered data mode. Quota mode: none.
Jul 11 00:22:14.093875 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 00:22:14.095368 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:22:14.127191 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:22:14.128933 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 00:22:14.148537 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 00:22:14.154461 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Jul 11 00:22:14.148620 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 00:22:14.148665 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:22:14.160288 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:22:14.160309 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:22:14.160325 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:22:14.152667 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 00:22:14.162283 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:22:14.163249 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 00:22:14.165452 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:22:14.202263 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 00:22:14.208384 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Jul 11 00:22:14.213479 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 00:22:14.217347 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 00:22:14.309540 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 00:22:14.322168 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 00:22:14.323768 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 00:22:14.331100 kernel: BTRFS info (device vda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:22:14.348141 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 00:22:14.356594 ignition[927]: INFO : Ignition 2.19.0
Jul 11 00:22:14.356594 ignition[927]: INFO : Stage: mount
Jul 11 00:22:14.358293 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:22:14.358293 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:22:14.358293 ignition[927]: INFO : mount: mount passed
Jul 11 00:22:14.358293 ignition[927]: INFO : Ignition finished successfully
Jul 11 00:22:14.360099 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 00:22:14.372166 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 00:22:14.427306 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 00:22:14.444285 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:22:14.452751 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Jul 11 00:22:14.452790 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:22:14.452804 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:22:14.453595 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:22:14.457096 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:22:14.458953 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:22:14.493724 ignition[956]: INFO : Ignition 2.19.0
Jul 11 00:22:14.493724 ignition[956]: INFO : Stage: files
Jul 11 00:22:14.495938 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:22:14.495938 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:22:14.495938 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 00:22:14.495938 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 00:22:14.495938 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 00:22:14.502491 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 00:22:14.502491 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 00:22:14.502491 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 00:22:14.502491 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 11 00:22:14.500263 unknown[956]: wrote ssh authorized keys file for user: core
Jul 11 00:22:14.509289 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 11 00:22:14.509289 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 11 00:22:14.509289 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 11 00:22:14.551609 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 11 00:22:14.693059 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 11 00:22:14.693059 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 00:22:14.697164 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 00:22:14.697164 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:22:14.705910 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:22:14.705910 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:22:14.710038 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:22:14.712110 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:22:14.714221 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:22:14.716520 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:22:14.718738 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:22:14.720992 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 00:22:14.724046 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 00:22:14.726898 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 00:22:14.729175 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 11 00:22:15.274639 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 11 00:22:15.358756 systemd-networkd[783]: eth0: Gained IPv6LL
Jul 11 00:22:15.586859 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 11 00:22:15.586859 ignition[956]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 11 00:22:15.591707 ignition[956]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 11 00:22:15.591707 ignition[956]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 11 00:22:15.591707 ignition[956]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 11 00:22:15.591707 ignition[956]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 11 00:22:15.591707 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:22:15.591707 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:22:15.591707 ignition[956]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 11 00:22:15.591707 ignition[956]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jul 11 00:22:15.591707 ignition[956]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:22:15.591707 ignition[956]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:22:15.591707 ignition[956]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jul 11 00:22:15.591707 ignition[956]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:22:15.632395 ignition[956]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:22:15.638402 ignition[956]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:22:15.640179 ignition[956]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:22:15.640179 ignition[956]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jul 11 00:22:15.640179 ignition[956]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 00:22:15.640179 ignition[956]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:22:15.640179 ignition[956]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:22:15.640179 ignition[956]: INFO : files: files passed
Jul 11 00:22:15.640179 ignition[956]: INFO : Ignition finished successfully
Jul 11 00:22:15.652211 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 11 00:22:15.665449 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 00:22:15.667286 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 00:22:15.671165 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 00:22:15.671342 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 00:22:15.681415 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 00:22:15.685191 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:22:15.686867 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:22:15.688347 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:22:15.691675 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:22:15.695190 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 00:22:15.706340 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 00:22:15.737819 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 00:22:15.737981 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 00:22:15.738993 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 00:22:15.741539 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 00:22:15.741901 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 00:22:15.747061 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 00:22:15.765974 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:22:15.783341 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 00:22:15.792733 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:22:15.794014 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:22:15.796281 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 00:22:15.798279 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 00:22:15.798446 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:22:15.800576 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 00:22:15.808228 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 00:22:15.810264 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 00:22:15.812426 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:22:15.814473 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 00:22:15.837981 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 00:22:15.840358 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:22:15.842990 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 00:22:15.845130 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 00:22:15.847504 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 00:22:15.849303 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 00:22:15.849475 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:22:15.851646 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:22:15.852591 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:22:15.852844 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 11 00:22:15.853000 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:22:15.853353 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 11 00:22:15.853464 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:22:15.854199 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 11 00:22:15.854313 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:22:15.854782 systemd[1]: Stopped target paths.target - Path Units.
Jul 11 00:22:15.855012 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 11 00:22:15.860190 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:22:15.861626 systemd[1]: Stopped target slices.target - Slice Units.
Jul 11 00:22:15.863499 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 11 00:22:15.865611 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 11 00:22:15.865727 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:22:15.867549 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 11 00:22:15.867643 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:22:15.870944 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 11 00:22:15.871090 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:22:15.900792 ignition[1010]: INFO : Ignition 2.19.0
Jul 11 00:22:15.900792 ignition[1010]: INFO : Stage: umount
Jul 11 00:22:15.900792 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:22:15.900792 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:22:15.873093 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 11 00:22:15.928600 ignition[1010]: INFO : umount: umount passed
Jul 11 00:22:15.928600 ignition[1010]: INFO : Ignition finished successfully
Jul 11 00:22:15.873203 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 11 00:22:15.886250 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 11 00:22:15.896789 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 11 00:22:15.897883 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 11 00:22:15.898019 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:22:15.900942 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 11 00:22:15.901118 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:22:15.903639 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 11 00:22:15.903755 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 11 00:22:15.906654 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 11 00:22:15.906776 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 11 00:22:15.929541 systemd[1]: Stopped target network.target - Network.
Jul 11 00:22:15.931499 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 11 00:22:15.931575 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 11 00:22:15.941965 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 11 00:22:15.942037 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 11 00:22:15.943287 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 11 00:22:15.943352 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 11 00:22:15.945432 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 11 00:22:15.945495 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 11 00:22:15.947781 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 11 00:22:15.950037 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 11 00:22:15.953134 systemd-networkd[783]: eth0: DHCPv6 lease lost
Jul 11 00:22:15.953341 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 11 00:22:15.955840 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 11 00:22:15.955982 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 11 00:22:15.958563 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 11 00:22:15.958684 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 11 00:22:15.965441 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 11 00:22:15.965550 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:22:15.979279 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 11 00:22:15.980363 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 11 00:22:15.980437 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:22:15.986501 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 00:22:15.986561 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:22:15.988657 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 11 00:22:15.988708 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:22:15.989888 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 11 00:22:15.989943 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:22:15.991427 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:22:16.015447 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 11 00:22:16.015740 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:22:16.035644 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 11 00:22:16.035723 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:22:16.037371 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 11 00:22:16.037431 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:22:16.039495 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 11 00:22:16.039557 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:22:16.040283 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 11 00:22:16.040336 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:22:16.040967 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:22:16.041019 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:22:16.064424 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 11 00:22:16.064763 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 11 00:22:16.064847 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:22:16.067420 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 11 00:22:16.067486 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:22:16.067724 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 11 00:22:16.067783 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:22:16.068107 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:22:16.068172 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:22:16.073248 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 11 00:22:16.073404 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 11 00:22:16.087357 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 11 00:22:16.087571 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 11 00:22:16.177385 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 11 00:22:16.177592 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 11 00:22:16.180904 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 11 00:22:16.183247 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 11 00:22:16.184390 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 11 00:22:16.200576 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 11 00:22:16.211272 systemd[1]: Switching root.
Jul 11 00:22:16.241674 systemd-journald[190]: Journal stopped
Jul 11 00:22:17.851616 systemd-journald[190]: Received SIGTERM from PID 1 (systemd).
Jul 11 00:22:17.851710 kernel: SELinux: policy capability network_peer_controls=1
Jul 11 00:22:17.851737 kernel: SELinux: policy capability open_perms=1
Jul 11 00:22:17.851753 kernel: SELinux: policy capability extended_socket_class=1
Jul 11 00:22:17.851770 kernel: SELinux: policy capability always_check_network=0
Jul 11 00:22:17.851787 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 11 00:22:17.851809 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 11 00:22:17.851825 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 11 00:22:17.851846 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 11 00:22:17.851862 kernel: audit: type=1403 audit(1752193337.018:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 11 00:22:17.851881 systemd[1]: Successfully loaded SELinux policy in 47.871ms.
Jul 11 00:22:17.851919 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.586ms.
Jul 11 00:22:17.851944 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:22:17.851962 systemd[1]: Detected virtualization kvm.
Jul 11 00:22:17.851979 systemd[1]: Detected architecture x86-64.
Jul 11 00:22:17.851996 systemd[1]: Detected first boot.
Jul 11 00:22:17.852018 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:22:17.852035 zram_generator::config[1072]: No configuration found.
Jul 11 00:22:17.852061 systemd[1]: Populated /etc with preset unit settings.
Jul 11 00:22:17.852319 systemd[1]: Queued start job for default target multi-user.target.
Jul 11 00:22:17.852342 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 11 00:22:17.852361 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 11 00:22:17.852379 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 11 00:22:17.852396 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 11 00:22:17.852413 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 11 00:22:17.852453 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 11 00:22:17.852472 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 11 00:22:17.852490 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 11 00:22:17.852507 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 11 00:22:17.852524 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:22:17.852542 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:22:17.852562 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 11 00:22:17.852580 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 11 00:22:17.852603 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 11 00:22:17.852623 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:22:17.852641 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 11 00:22:17.852659 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:22:17.852676 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 11 00:22:17.852694 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:22:17.852711 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:22:17.852729 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:22:17.852751 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:22:17.852768 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 11 00:22:17.852785 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 11 00:22:17.852802 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:22:17.852828 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 11 00:22:17.852846 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:22:17.852862 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:22:17.852883 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:22:17.852899 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 11 00:22:17.852918 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 11 00:22:17.852943 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 11 00:22:17.852965 systemd[1]: Mounting media.mount - External Media Directory...
Jul 11 00:22:17.852983 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:22:17.853000 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 11 00:22:17.853018 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 11 00:22:17.853036 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 11 00:22:17.853053 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 11 00:22:17.853093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:22:17.853120 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:22:17.853139 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 11 00:22:17.853157 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:22:17.853174 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:22:17.853191 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:22:17.853209 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 11 00:22:17.853226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:22:17.853244 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 11 00:22:17.853261 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 11 00:22:17.853286 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 11 00:22:17.853303 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:22:17.853320 kernel: fuse: init (API version 7.39)
Jul 11 00:22:17.853338 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:22:17.853382 systemd-journald[1157]: Collecting audit messages is disabled.
Jul 11 00:22:17.853415 kernel: ACPI: bus type drm_connector registered
Jul 11 00:22:17.853442 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 00:22:17.853466 systemd-journald[1157]: Journal started
Jul 11 00:22:17.853496 systemd-journald[1157]: Runtime Journal (/run/log/journal/73b9791386e24230923faa24c402e239) is 6.0M, max 48.4M, 42.3M free.
Jul 11 00:22:17.857920 kernel: loop: module loaded
Jul 11 00:22:17.857963 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 11 00:22:17.863815 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:22:17.871100 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:22:17.877268 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:22:17.878652 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 11 00:22:17.879943 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 11 00:22:17.881404 systemd[1]: Mounted media.mount - External Media Directory.
Jul 11 00:22:17.882788 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 11 00:22:17.884226 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 11 00:22:17.885562 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 11 00:22:17.886977 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 11 00:22:17.888944 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:22:17.923787 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 11 00:22:17.924162 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 11 00:22:17.925978 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:22:17.926345 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:22:17.928403 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:22:17.928720 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:22:17.930491 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:22:17.930733 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:22:17.932652 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 11 00:22:17.932976 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 11 00:22:17.934772 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:22:17.935262 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:22:17.937148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:22:17.939049 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 00:22:17.941384 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 11 00:22:17.958899 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 00:22:17.985278 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 11 00:22:17.988423 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 11 00:22:17.989778 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 11 00:22:17.993570 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 11 00:22:17.997279 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 11 00:22:17.999382 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:22:18.004229 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 11 00:22:18.006457 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:22:18.007975 systemd-journald[1157]: Time spent on flushing to /var/log/journal/73b9791386e24230923faa24c402e239 is 15.903ms for 938 entries.
Jul 11 00:22:18.007975 systemd-journald[1157]: System Journal (/var/log/journal/73b9791386e24230923faa24c402e239) is 8.0M, max 195.6M, 187.6M free.
Jul 11 00:22:18.569331 systemd-journald[1157]: Received client request to flush runtime journal.
Jul 11 00:22:18.009392 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:22:18.014879 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:22:18.019167 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 11 00:22:18.020522 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 11 00:22:18.059874 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:22:18.076219 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 11 00:22:18.086599 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 11 00:22:18.101024 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:22:18.104586 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Jul 11 00:22:18.104600 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Jul 11 00:22:18.111238 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:22:18.123241 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 11 00:22:18.162613 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 11 00:22:18.169306 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:22:18.198273 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Jul 11 00:22:18.198289 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Jul 11 00:22:18.207847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:22:18.501037 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 11 00:22:18.502654 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 11 00:22:18.572266 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 11 00:22:19.165751 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 00:22:19.175454 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:22:19.203434 systemd-udevd[1237]: Using default interface naming scheme 'v255'.
Jul 11 00:22:19.221994 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:22:19.234392 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:22:19.249292 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 00:22:19.256659 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jul 11 00:22:19.287104 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1239)
Jul 11 00:22:19.319153 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 00:22:19.335096 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 11 00:22:19.341096 kernel: ACPI: button: Power Button [PWRF]
Jul 11 00:22:19.367346 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:22:19.384771 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 11 00:22:19.387160 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 11 00:22:19.387376 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 11 00:22:19.387741 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 11 00:22:19.415098 kernel: mousedev: PS/2 mouse device common for all mice
Jul 11 00:22:19.426696 systemd-networkd[1242]: lo: Link UP
Jul 11 00:22:19.426709 systemd-networkd[1242]: lo: Gained carrier
Jul 11 00:22:19.428838 systemd-networkd[1242]: Enumeration completed
Jul 11 00:22:19.429339 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:22:19.429343 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:22:19.431401 systemd-networkd[1242]: eth0: Link UP
Jul 11 00:22:19.431406 systemd-networkd[1242]: eth0: Gained carrier
Jul 11 00:22:19.431418 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:22:19.431771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:22:19.434331 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:22:19.524051 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 11 00:22:19.572289 systemd-networkd[1242]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:22:19.603340 kernel: kvm_amd: TSC scaling supported
Jul 11 00:22:19.603427 kernel: kvm_amd: Nested Virtualization enabled
Jul 11 00:22:19.603450 kernel: kvm_amd: Nested Paging enabled
Jul 11 00:22:19.603472 kernel: kvm_amd: LBR virtualization supported
Jul 11 00:22:19.604668 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 11 00:22:19.604742 kernel: kvm_amd: Virtual GIF supported
Jul 11 00:22:19.628152 kernel: EDAC MC: Ver: 3.0.0
Jul 11 00:22:19.660880 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 11 00:22:19.677230 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 11 00:22:19.679220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:22:19.690973 lvm[1282]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 11 00:22:19.731765 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 11 00:22:19.733527 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:22:19.746217 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 11 00:22:19.754538 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 11 00:22:19.790821 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 11 00:22:19.792646 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:22:19.794155 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 11 00:22:19.794185 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:22:19.795363 systemd[1]: Reached target machines.target - Containers.
Jul 11 00:22:19.797641 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 11 00:22:19.806228 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 11 00:22:19.809021 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 11 00:22:19.810337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:22:19.811340 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 11 00:22:19.816393 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 11 00:22:19.820240 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 11 00:22:19.824403 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 11 00:22:19.833368 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 11 00:22:19.839089 kernel: loop0: detected capacity change from 0 to 221472
Jul 11 00:22:20.016122 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 11 00:22:20.018479 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 11 00:22:20.019362 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 11 00:22:20.047114 kernel: loop1: detected capacity change from 0 to 140768
Jul 11 00:22:20.087095 kernel: loop2: detected capacity change from 0 to 142488
Jul 11 00:22:20.122135 kernel: loop3: detected capacity change from 0 to 221472
Jul 11 00:22:20.134104 kernel: loop4: detected capacity change from 0 to 140768
Jul 11 00:22:20.145097 kernel: loop5: detected capacity change from 0 to 142488
Jul 11 00:22:20.155903 (sd-merge)[1308]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 11 00:22:20.156747 (sd-merge)[1308]: Merged extensions into '/usr'.
Jul 11 00:22:20.162589 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 11 00:22:20.162605 systemd[1]: Reloading...
Jul 11 00:22:20.231109 zram_generator::config[1336]: No configuration found.
Jul 11 00:22:20.292035 ldconfig[1291]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 11 00:22:20.378507 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:22:20.448591 systemd[1]: Reloading finished in 285 ms.
Jul 11 00:22:20.469984 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 11 00:22:20.473516 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 11 00:22:20.491303 systemd[1]: Starting ensure-sysext.service...
Jul 11 00:22:20.493979 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:22:20.499444 systemd[1]: Reloading requested from client PID 1380 ('systemctl') (unit ensure-sysext.service)...
Jul 11 00:22:20.499457 systemd[1]: Reloading...
Jul 11 00:22:20.522814 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 11 00:22:20.523229 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 11 00:22:20.524299 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 11 00:22:20.524715 systemd-tmpfiles[1381]: ACLs are not supported, ignoring.
Jul 11 00:22:20.524802 systemd-tmpfiles[1381]: ACLs are not supported, ignoring.
Jul 11 00:22:20.529146 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:22:20.529163 systemd-tmpfiles[1381]: Skipping /boot
Jul 11 00:22:20.758558 systemd-networkd[1242]: eth0: Gained IPv6LL
Jul 11 00:22:20.762495 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:22:20.762510 systemd-tmpfiles[1381]: Skipping /boot
Jul 11 00:22:20.795179 zram_generator::config[1410]: No configuration found.
Jul 11 00:22:20.937485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:22:21.016462 systemd[1]: Reloading finished in 516 ms.
Jul 11 00:22:21.037053 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 11 00:22:21.050204 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:22:21.068473 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 11 00:22:21.072257 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 11 00:22:21.075654 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 11 00:22:21.081427 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:22:21.085614 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 11 00:22:21.092221 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:22:21.092405 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:22:21.094112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:22:21.098428 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:22:21.103370 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:22:21.106318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:22:21.106537 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:22:21.107923 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:22:21.108269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:22:21.111954 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:22:21.112213 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:22:21.114326 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:22:21.114561 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:22:21.122734 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:22:21.123198 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:22:21.126614 augenrules[1486]: No rules
Jul 11 00:22:21.130575 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:22:21.135764 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:22:21.142226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:22:21.143868 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:22:21.144178 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:22:21.146814 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 11 00:22:21.150219 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 00:22:21.152820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:22:21.153271 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:22:21.154602 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:22:21.154889 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:22:21.158347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:22:21.158677 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:22:21.160910 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 00:22:21.170643 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:22:21.170855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:22:21.178472 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:22:21.183058 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:22:21.188675 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:22:21.192290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:22:21.193687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:22:21.198399 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 00:22:21.199676 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:22:21.201644 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 00:22:21.204242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:22:21.204542 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:22:21.206747 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:22:21.207009 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:22:21.209202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:22:21.209503 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:22:21.211937 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:22:21.212274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:22:21.214802 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 00:22:21.219872 systemd[1]: Finished ensure-sysext.service.
Jul 11 00:22:21.227780 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:22:21.227851 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:22:21.235143 systemd-resolved[1466]: Positive Trust Anchors:
Jul 11 00:22:21.235160 systemd-resolved[1466]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:22:21.235193 systemd-resolved[1466]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:22:21.237367 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 00:22:21.239214 systemd-resolved[1466]: Defaulting to hostname 'linux'.
Jul 11 00:22:21.241186 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 00:22:21.241361 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:22:21.242777 systemd[1]: Reached target network.target - Network.
Jul 11 00:22:21.243768 systemd[1]: Reached target network-online.target - Network is Online.
Jul 11 00:22:21.244917 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:22:21.310625 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 11 00:22:21.853847 systemd-resolved[1466]: Clock change detected. Flushing caches.
Jul 11 00:22:21.853891 systemd-timesyncd[1528]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 11 00:22:21.853968 systemd-timesyncd[1528]: Initial clock synchronization to Fri 2025-07-11 00:22:21.853773 UTC.
Jul 11 00:22:21.855097 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:22:21.856596 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 11 00:22:21.858084 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 11 00:22:21.859428 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 11 00:22:21.860741 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 11 00:22:21.860769 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:22:21.861719 systemd[1]: Reached target time-set.target - System Time Set.
Jul 11 00:22:21.862972 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 11 00:22:21.864407 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 11 00:22:21.865719 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:22:21.867723 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 11 00:22:21.871376 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 11 00:22:21.875090 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 11 00:22:21.879463 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 11 00:22:21.880607 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:22:21.881575 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:22:21.882724 systemd[1]: System is tainted: cgroupsv1
Jul 11 00:22:21.882776 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:22:21.882799 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:22:21.884162 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 11 00:22:21.886544 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 11 00:22:21.888837 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 11 00:22:21.894515 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 11 00:22:21.900105 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 11 00:22:21.901380 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 11 00:22:21.903625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:22:21.908488 jq[1535]: false
Jul 11 00:22:21.910360 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 11 00:22:21.915159 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 11 00:22:21.920333 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 11 00:22:21.922791 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 11 00:22:21.928606 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 11 00:22:21.929028 extend-filesystems[1538]: Found loop3
Jul 11 00:22:21.931377 extend-filesystems[1538]: Found loop4
Jul 11 00:22:21.931377 extend-filesystems[1538]: Found loop5
Jul 11 00:22:21.931377 extend-filesystems[1538]: Found sr0
Jul 11 00:22:21.931377 extend-filesystems[1538]: Found vda
Jul 11 00:22:21.931377 extend-filesystems[1538]: Found vda1
Jul 11 00:22:21.931377 extend-filesystems[1538]: Found vda2
Jul 11 00:22:21.931377 extend-filesystems[1538]: Found vda3
Jul 11 00:22:21.931377 extend-filesystems[1538]: Found usr
Jul 11 00:22:21.931377 extend-filesystems[1538]: Found vda4
Jul 11 00:22:21.931377 extend-filesystems[1538]: Found vda6
Jul 11 00:22:21.931377 extend-filesystems[1538]: Found vda7
Jul 11 00:22:21.931377 extend-filesystems[1538]: Found vda9
Jul 11 00:22:21.931377 extend-filesystems[1538]: Checking size of /dev/vda9
Jul 11 00:22:21.942618 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 11 00:22:21.954924 extend-filesystems[1538]: Resized partition /dev/vda9
Jul 11 00:22:21.944688 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 11 00:22:21.963982 extend-filesystems[1561]: resize2fs 1.47.1 (20-May-2024)
Jul 11 00:22:21.963489 dbus-daemon[1534]: [system] SELinux support is enabled
Jul 11 00:22:21.969852 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 11 00:22:21.949543 systemd[1]: Starting update-engine.service - Update Engine...
Jul 11 00:22:21.955339 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 11 00:22:21.966677 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 11 00:22:21.988944 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 11 00:22:21.989490 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 11 00:22:21.991834 systemd[1]: motdgen.service: Deactivated successfully.
Jul 11 00:22:21.998374 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 11 00:22:22.004179 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 11 00:22:22.010643 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 11 00:22:22.010715 update_engine[1563]: I20250711 00:22:22.009900 1563 main.cc:92] Flatcar Update Engine starting
Jul 11 00:22:22.044727 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1239)
Jul 11 00:22:22.021018 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 11 00:22:22.044859 update_engine[1563]: I20250711 00:22:22.011626 1563 update_check_scheduler.cc:74] Next update check in 8m31s
Jul 11 00:22:22.044888 jq[1564]: true
Jul 11 00:22:22.021899 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 11 00:22:22.042783 (ntainerd)[1582]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 00:22:22.047585 extend-filesystems[1561]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:22:22.047585 extend-filesystems[1561]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:22:22.047585 extend-filesystems[1561]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 00:22:22.055790 extend-filesystems[1538]: Resized filesystem in /dev/vda9 Jul 11 00:22:22.047886 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:22:22.054462 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 00:22:22.059287 jq[1581]: true Jul 11 00:22:22.057439 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:22:22.057902 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:22:22.081368 tar[1577]: linux-amd64/helm Jul 11 00:22:22.095954 systemd[1]: Started update-engine.service - Update Engine. Jul 11 00:22:22.107070 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:22:22.107241 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:22:22.107278 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 00:22:22.108773 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:22:22.108795 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 00:22:22.111816 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:22:22.123435 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 00:22:22.194208 sshd_keygen[1568]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:22:22.208942 systemd-logind[1554]: Watching system buttons on /dev/input/event1 (Power Button) Jul 11 00:22:22.208980 systemd-logind[1554]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 11 00:22:22.211000 systemd-logind[1554]: New seat seat0. Jul 11 00:22:22.212554 systemd[1]: Started systemd-logind.service - User Login Management. Jul 11 00:22:22.215816 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:22:22.237855 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 00:22:22.288904 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 00:22:22.310960 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:22:22.311344 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 00:22:22.355610 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 00:22:22.383471 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 00:22:22.433664 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 00:22:22.438340 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
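The extend-filesystems sequence above is a live ext4 grow: the root partition /dev/vda9 was enlarged, then resize2fs extended the mounted filesystem from 553472 to 1864699 4 KiB blocks (roughly 2.1 GiB to 7.1 GiB). A minimal sketch of the same operation done by hand, assuming the cloud-utils growpart tool is available:

  # Grow partition 9 of /dev/vda into the free space behind it
  growpart /dev/vda 9
  # Online-resize the mounted ext4 filesystem to fill the partition
  resize2fs /dev/vda9

ext4 supports growing while mounted, which is why no remount or reboot appears between the resize messages and the "Finished extend-filesystems.service" entry.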
Jul 11 00:22:22.442525 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 00:22:22.519763 bash[1618]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:22:22.525947 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 00:22:22.573339 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 00:22:22.754974 containerd[1582]: time="2025-07-11T00:22:22.754794671Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 11 00:22:22.825270 containerd[1582]: time="2025-07-11T00:22:22.825184877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:22:22.829943 containerd[1582]: time="2025-07-11T00:22:22.829822845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:22:22.829943 containerd[1582]: time="2025-07-11T00:22:22.829870024Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 11 00:22:22.829943 containerd[1582]: time="2025-07-11T00:22:22.829891544Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:22:22.830238 containerd[1582]: time="2025-07-11T00:22:22.830213408Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 11 00:22:22.830319 containerd[1582]: time="2025-07-11T00:22:22.830244125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 11 00:22:22.830418 containerd[1582]: time="2025-07-11T00:22:22.830379219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:22:22.830496 containerd[1582]: time="2025-07-11T00:22:22.830422430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:22:22.831018 containerd[1582]: time="2025-07-11T00:22:22.830964276Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:22:22.831018 containerd[1582]: time="2025-07-11T00:22:22.831005062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 11 00:22:22.831173 containerd[1582]: time="2025-07-11T00:22:22.831032915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:22:22.831173 containerd[1582]: time="2025-07-11T00:22:22.831051570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 11 00:22:22.831524 containerd[1582]: time="2025-07-11T00:22:22.831469383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 11 00:22:22.832163 containerd[1582]: time="2025-07-11T00:22:22.832127197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:22:22.832417 containerd[1582]: time="2025-07-11T00:22:22.832386453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:22:22.832417 containerd[1582]: time="2025-07-11T00:22:22.832408074Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:22:22.832614 containerd[1582]: time="2025-07-11T00:22:22.832586258Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 11 00:22:22.832683 containerd[1582]: time="2025-07-11T00:22:22.832665777Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:22:22.841612 containerd[1582]: time="2025-07-11T00:22:22.841548986Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:22:22.841612 containerd[1582]: time="2025-07-11T00:22:22.841624427Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:22:22.841612 containerd[1582]: time="2025-07-11T00:22:22.841640818Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 11 00:22:22.841918 containerd[1582]: time="2025-07-11T00:22:22.841658311Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 11 00:22:22.841918 containerd[1582]: time="2025-07-11T00:22:22.841677747Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 11 00:22:22.841918 containerd[1582]: time="2025-07-11T00:22:22.841862213Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:22:22.843239 containerd[1582]: time="2025-07-11T00:22:22.842725883Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 11 00:22:22.843282 containerd[1582]: time="2025-07-11T00:22:22.843267839Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 11 00:22:22.843314 containerd[1582]: time="2025-07-11T00:22:22.843294209Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 11 00:22:22.843350 containerd[1582]: time="2025-07-11T00:22:22.843316330Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 11 00:22:22.843350 containerd[1582]: time="2025-07-11T00:22:22.843336298Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 11 00:22:22.843403 containerd[1582]: time="2025-07-11T00:22:22.843354783Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 11 00:22:22.843403 containerd[1582]: time="2025-07-11T00:22:22.843375762Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jul 11 00:22:22.843440 containerd[1582]: time="2025-07-11T00:22:22.843406089Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 11 00:22:22.843440 containerd[1582]: time="2025-07-11T00:22:22.843425175Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:22:22.843484 containerd[1582]: time="2025-07-11T00:22:22.843443890Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 11 00:22:22.843484 containerd[1582]: time="2025-07-11T00:22:22.843471682Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 11 00:22:22.843527 containerd[1582]: time="2025-07-11T00:22:22.843484095Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 11 00:22:22.843527 containerd[1582]: time="2025-07-11T00:22:22.843507379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.843563 containerd[1582]: time="2025-07-11T00:22:22.843526755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.843563 containerd[1582]: time="2025-07-11T00:22:22.843543947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.843563 containerd[1582]: time="2025-07-11T00:22:22.843557833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.843626 containerd[1582]: time="2025-07-11T00:22:22.843587008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.843626 containerd[1582]: time="2025-07-11T00:22:22.843606485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.843626 containerd[1582]: time="2025-07-11T00:22:22.843620721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.843720 containerd[1582]: time="2025-07-11T00:22:22.843634617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.843720 containerd[1582]: time="2025-07-11T00:22:22.843683689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.843768 containerd[1582]: time="2025-07-11T00:22:22.843716571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.843768 containerd[1582]: time="2025-07-11T00:22:22.843733613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.843768 containerd[1582]: time="2025-07-11T00:22:22.843751116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.843843 containerd[1582]: time="2025-07-11T00:22:22.843767947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.843871 containerd[1582]: time="2025-07-11T00:22:22.843852706Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jul 11 00:22:22.843966 containerd[1582]: time="2025-07-11T00:22:22.843944218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.844003 containerd[1582]: time="2025-07-11T00:22:22.843973413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.844032 containerd[1582]: time="2025-07-11T00:22:22.844013718Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 11 00:22:22.845290 containerd[1582]: time="2025-07-11T00:22:22.845248213Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:22:22.845336 containerd[1582]: time="2025-07-11T00:22:22.845287948Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 11 00:22:22.845336 containerd[1582]: time="2025-07-11T00:22:22.845305170Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:22:22.845519 containerd[1582]: time="2025-07-11T00:22:22.845326640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 11 00:22:22.845519 containerd[1582]: time="2025-07-11T00:22:22.845347369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:22:22.845519 containerd[1582]: time="2025-07-11T00:22:22.845368880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 11 00:22:22.845519 containerd[1582]: time="2025-07-11T00:22:22.845387805Z" level=info msg="NRI interface is disabled by configuration." Jul 11 00:22:22.845519 containerd[1582]: time="2025-07-11T00:22:22.845402924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 11 00:22:22.845948 containerd[1582]: time="2025-07-11T00:22:22.845865180Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:22:22.845948 containerd[1582]: time="2025-07-11T00:22:22.845955109Z" level=info msg="Connect containerd service" Jul 11 00:22:22.845948 containerd[1582]: time="2025-07-11T00:22:22.846005894Z" level=info msg="using legacy CRI server" Jul 11 00:22:22.845948 containerd[1582]: time="2025-07-11T00:22:22.846014821Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:22:22.846402 containerd[1582]: time="2025-07-11T00:22:22.846189729Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:22:22.848531 containerd[1582]: time="2025-07-11T00:22:22.847141744Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 
00:22:22.848531 containerd[1582]: time="2025-07-11T00:22:22.847360324Z" level=info msg="Start subscribing containerd event" Jul 11 00:22:22.848531 containerd[1582]: time="2025-07-11T00:22:22.847433341Z" level=info msg="Start recovering state" Jul 11 00:22:22.848531 containerd[1582]: time="2025-07-11T00:22:22.847555250Z" level=info msg="Start event monitor" Jul 11 00:22:22.848531 containerd[1582]: time="2025-07-11T00:22:22.847587751Z" level=info msg="Start snapshots syncer" Jul 11 00:22:22.848531 containerd[1582]: time="2025-07-11T00:22:22.847607979Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:22:22.848531 containerd[1582]: time="2025-07-11T00:22:22.847620112Z" level=info msg="Start streaming server" Jul 11 00:22:22.848531 containerd[1582]: time="2025-07-11T00:22:22.847712906Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:22:22.848531 containerd[1582]: time="2025-07-11T00:22:22.847797023Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:22:22.848531 containerd[1582]: time="2025-07-11T00:22:22.848002268Z" level=info msg="containerd successfully booted in 0.095705s" Jul 11 00:22:22.849620 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:22:22.911804 tar[1577]: linux-amd64/LICENSE Jul 11 00:22:22.911804 tar[1577]: linux-amd64/README.md Jul 11 00:22:22.984990 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:22:24.178305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:22:24.181035 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 00:22:24.183097 systemd[1]: Startup finished in 7.550s (kernel) + 6.669s (userspace) = 14.219s. Jul 11 00:22:24.201475 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:22:24.453116 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 00:22:24.468874 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:59952.service - OpenSSH per-connection server daemon (10.0.0.1:59952). Jul 11 00:22:24.536974 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 59952 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:22:24.539549 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:24.553991 systemd-logind[1554]: New session 1 of user core. Jul 11 00:22:24.555985 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:22:24.586773 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:22:24.659927 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 00:22:24.673726 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 00:22:24.677370 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:22:24.817574 systemd[1685]: Queued start job for default target default.target. Jul 11 00:22:24.818245 systemd[1685]: Created slice app.slice - User Application Slice. Jul 11 00:22:24.818267 systemd[1685]: Reached target paths.target - Paths. Jul 11 00:22:24.818286 systemd[1685]: Reached target timers.target - Timers. Jul 11 00:22:24.857498 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:22:24.866144 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
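The "Start cri plugin with config {...}" dump a few entries back shows the CRI settings containerd is actually running with, notably Snapshotter:overlayfs and SystemdCgroup:false (consistent with the earlier "System is tainted: cgroupsv1" note, since the cgroupfs driver is the usual choice on a cgroup v1 host). For reference, a sketch of how such settings are normally changed, assuming containerd's default config path:

  # Write out the full default configuration, then edit it
  containerd config default > /etc/containerd/config.toml
  # In [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options],
  # SystemdCgroup = true would be the setting on a systemd cgroup v2 host
  systemctl restart containerd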
Jul 11 00:22:24.866336 systemd[1685]: Reached target sockets.target - Sockets. Jul 11 00:22:24.866355 systemd[1685]: Reached target basic.target - Basic System. Jul 11 00:22:24.866410 systemd[1685]: Reached target default.target - Main User Target. Jul 11 00:22:24.866448 systemd[1685]: Startup finished in 178ms. Jul 11 00:22:24.867501 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:22:24.869629 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:22:24.942697 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:59954.service - OpenSSH per-connection server daemon (10.0.0.1:59954). Jul 11 00:22:24.983612 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 59954 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:22:24.985736 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:24.992031 systemd-logind[1554]: New session 2 of user core. Jul 11 00:22:25.011471 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 00:22:25.158912 sshd[1698]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:25.168703 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:59970.service - OpenSSH per-connection server daemon (10.0.0.1:59970). Jul 11 00:22:25.169460 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:59954.service: Deactivated successfully. Jul 11 00:22:25.174404 systemd-logind[1554]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:22:25.175838 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:22:25.178132 systemd-logind[1554]: Removed session 2. Jul 11 00:22:25.208351 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 59970 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:22:25.210748 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:25.216529 systemd-logind[1554]: New session 3 of user core. Jul 11 00:22:25.246684 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:22:25.300957 kubelet[1669]: E0711 00:22:25.300853 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:22:25.304127 sshd[1703]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:25.313521 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:59972.service - OpenSSH per-connection server daemon (10.0.0.1:59972). Jul 11 00:22:25.313903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:22:25.314101 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:22:25.314994 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:59970.service: Deactivated successfully. Jul 11 00:22:25.317152 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:22:25.318701 systemd-logind[1554]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:22:25.320337 systemd-logind[1554]: Removed session 3. Jul 11 00:22:25.345826 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 59972 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:22:25.347572 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:25.353407 systemd-logind[1554]: New session 4 of user core. 
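The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is expected on a node that has not yet been joined to a cluster: that file is normally generated by kubeadm init or kubeadm join, so the unit exits with status 1 and systemd schedules a retry. Purely for illustration, a hypothetical minimal file of the kind kubeadm writes (the real file carries many more fields):

  # hypothetical content; normally produced by kubeadm, not written by hand
  cat >/var/lib/kubelet/config.yaml <<'EOF'
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: cgroupfs
  EOF

The cgroupDriver value here matches the CgroupDriver:"cgroupfs" the kubelet later reports in its container manager configuration.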
Jul 11 00:22:25.369660 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:22:25.426814 sshd[1712]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:25.435640 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:59980.service - OpenSSH per-connection server daemon (10.0.0.1:59980). Jul 11 00:22:25.436236 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:59972.service: Deactivated successfully. Jul 11 00:22:25.439587 systemd-logind[1554]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:22:25.441426 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:22:25.442486 systemd-logind[1554]: Removed session 4. Jul 11 00:22:25.472269 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 59980 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:22:25.474137 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:25.478973 systemd-logind[1554]: New session 5 of user core. Jul 11 00:22:25.492629 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:22:25.556068 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:22:25.556484 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:22:25.576880 sudo[1728]: pam_unix(sudo:session): session closed for user root Jul 11 00:22:25.579936 sshd[1721]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:25.588572 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:59984.service - OpenSSH per-connection server daemon (10.0.0.1:59984). Jul 11 00:22:25.589647 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:59980.service: Deactivated successfully. Jul 11 00:22:25.593461 systemd-logind[1554]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:22:25.594922 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:22:25.596730 systemd-logind[1554]: Removed session 5. Jul 11 00:22:25.626945 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 59984 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:22:25.629076 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:25.636821 systemd-logind[1554]: New session 6 of user core. Jul 11 00:22:25.647502 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 00:22:25.706612 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:22:25.707142 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:22:25.713476 sudo[1738]: pam_unix(sudo:session): session closed for user root Jul 11 00:22:25.723106 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 11 00:22:25.723672 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:22:25.741564 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 11 00:22:25.746423 auditctl[1741]: No rules Jul 11 00:22:25.748525 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:22:25.749023 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 11 00:22:25.752164 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:22:25.803343 augenrules[1760]: No rules Jul 11 00:22:25.805802 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
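The audit-rules cycle above is the standard auditd reload path: sudo removed the rule files from /etc/audit/rules.d/, the service was stopped (auditctl reporting "No rules"), and the restart had augenrules rebuild an empty rule set. The equivalent manual steps, using the stock audit userspace tools:

  auditctl -l          # list the rules currently loaded in the kernel
  auditctl -D          # delete all loaded rules (what the stop path does)
  augenrules --load    # compile /etc/audit/rules.d/*.rules and load the result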
Jul 11 00:22:25.807677 sudo[1737]: pam_unix(sudo:session): session closed for user root Jul 11 00:22:25.812088 sshd[1730]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:25.822840 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:59988.service - OpenSSH per-connection server daemon (10.0.0.1:59988). Jul 11 00:22:25.823743 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:59984.service: Deactivated successfully. Jul 11 00:22:25.827570 systemd-logind[1554]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:22:25.828401 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:22:25.829896 systemd-logind[1554]: Removed session 6. Jul 11 00:22:25.859012 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 59988 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:22:25.860754 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:25.865253 systemd-logind[1554]: New session 7 of user core. Jul 11 00:22:25.879484 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 00:22:25.935309 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:22:25.935808 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:22:26.539569 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 00:22:26.539849 (dockerd)[1792]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:22:27.381043 dockerd[1792]: time="2025-07-11T00:22:27.380928523Z" level=info msg="Starting up" Jul 11 00:22:28.298175 dockerd[1792]: time="2025-07-11T00:22:28.298079458Z" level=info msg="Loading containers: start." Jul 11 00:22:28.449251 kernel: Initializing XFRM netlink socket Jul 11 00:22:28.583603 systemd-networkd[1242]: docker0: Link UP Jul 11 00:22:28.699692 dockerd[1792]: time="2025-07-11T00:22:28.699612967Z" level=info msg="Loading containers: done." Jul 11 00:22:28.728784 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck966268916-merged.mount: Deactivated successfully. Jul 11 00:22:28.750940 dockerd[1792]: time="2025-07-11T00:22:28.750799099Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:22:28.751125 dockerd[1792]: time="2025-07-11T00:22:28.751017439Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 11 00:22:28.751309 dockerd[1792]: time="2025-07-11T00:22:28.751248252Z" level=info msg="Daemon has completed initialization" Jul 11 00:22:28.981191 dockerd[1792]: time="2025-07-11T00:22:28.980973338Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:22:28.981378 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:22:30.017473 containerd[1582]: time="2025-07-11T00:22:30.017414876Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 11 00:22:32.021121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3991056028.mount: Deactivated successfully. 
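The dockerd warning about "Not using native diff for overlay2" is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, docker falls back to a slower naive diff when committing image layers, but containers run normally. Verifying the active storage driver and daemon version (standard docker CLI):

  docker info --format '{{.Driver}} {{.ServerVersion}}'
  # expected output here: overlay2 26.1.0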
Jul 11 00:22:34.197874 containerd[1582]: time="2025-07-11T00:22:34.197809248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:34.198674 containerd[1582]: time="2025-07-11T00:22:34.198643933Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 11 00:22:34.199989 containerd[1582]: time="2025-07-11T00:22:34.199926558Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:34.203394 containerd[1582]: time="2025-07-11T00:22:34.203330923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:34.204650 containerd[1582]: time="2025-07-11T00:22:34.204599251Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 4.187125415s" Jul 11 00:22:34.204712 containerd[1582]: time="2025-07-11T00:22:34.204650868Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 11 00:22:34.205634 containerd[1582]: time="2025-07-11T00:22:34.205597614Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 11 00:22:35.564505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:22:35.784492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:22:36.094357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 11 00:22:36.100451 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:22:36.212283 containerd[1582]: time="2025-07-11T00:22:36.212219646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:36.213378 containerd[1582]: time="2025-07-11T00:22:36.213328486Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 11 00:22:36.214879 containerd[1582]: time="2025-07-11T00:22:36.214839008Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:36.217885 containerd[1582]: time="2025-07-11T00:22:36.217856026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:36.219230 containerd[1582]: time="2025-07-11T00:22:36.219186882Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 2.013549294s" Jul 11 00:22:36.219289 containerd[1582]: time="2025-07-11T00:22:36.219235063Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 11 00:22:36.219887 containerd[1582]: time="2025-07-11T00:22:36.219862840Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 11 00:22:36.278352 kubelet[2009]: E0711 00:22:36.278270 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:22:36.286567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:22:36.286969 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
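At this point kubelet.service is in a restart loop: each attempt exits with status 1 because the config file is still missing, and systemd schedules the next try ("restart counter is at 1", evidently with a delay of about ten seconds between attempts). The policy driving this can be read straight from the unit (a sketch using standard systemctl properties; the exact Restart= value on this image is an assumption):

  systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts
  # restart mode, delay between attempts, and the running restart counter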
Jul 11 00:22:38.214842 containerd[1582]: time="2025-07-11T00:22:38.214750807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:38.215828 containerd[1582]: time="2025-07-11T00:22:38.215745011Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 11 00:22:38.217458 containerd[1582]: time="2025-07-11T00:22:38.217392291Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:38.221799 containerd[1582]: time="2025-07-11T00:22:38.221741417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:38.222834 containerd[1582]: time="2025-07-11T00:22:38.222770206Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.002872811s" Jul 11 00:22:38.222834 containerd[1582]: time="2025-07-11T00:22:38.222810361Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 11 00:22:38.223397 containerd[1582]: time="2025-07-11T00:22:38.223348290Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 11 00:22:40.060106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285498419.mount: Deactivated successfully. 
Jul 11 00:22:41.883234 containerd[1582]: time="2025-07-11T00:22:41.883130588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:41.884547 containerd[1582]: time="2025-07-11T00:22:41.884447478Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 11 00:22:41.886139 containerd[1582]: time="2025-07-11T00:22:41.886065563Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:41.890630 containerd[1582]: time="2025-07-11T00:22:41.890565141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:41.891254 containerd[1582]: time="2025-07-11T00:22:41.891177469Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 3.667789234s" Jul 11 00:22:41.891254 containerd[1582]: time="2025-07-11T00:22:41.891254814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 11 00:22:41.891817 containerd[1582]: time="2025-07-11T00:22:41.891793044Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 00:22:42.514849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921240178.mount: Deactivated successfully. 
Jul 11 00:22:45.686265 containerd[1582]: time="2025-07-11T00:22:45.686114060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:45.689556 containerd[1582]: time="2025-07-11T00:22:45.689417856Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 11 00:22:45.692093 containerd[1582]: time="2025-07-11T00:22:45.692042768Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:45.698941 containerd[1582]: time="2025-07-11T00:22:45.698857508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:45.700253 containerd[1582]: time="2025-07-11T00:22:45.700182133Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.808356799s" Jul 11 00:22:45.700253 containerd[1582]: time="2025-07-11T00:22:45.700246032Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 11 00:22:45.700962 containerd[1582]: time="2025-07-11T00:22:45.700902233Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:22:46.443524 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 00:22:46.456577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:22:46.458574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2102940043.mount: Deactivated successfully. 
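The PullImage entries come from containerd's CRI plugin rather than docker, so images such as kube-apiserver, kube-proxy and coredns land in containerd's k8s.io namespace and are invisible to "docker images". Two ways to list them (a sketch; crictl needs the runtime endpoint if it has not been configured):

  ctr -n k8s.io images ls | grep registry.k8s.io
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock images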
Jul 11 00:22:46.459223 containerd[1582]: time="2025-07-11T00:22:46.458752002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:46.461556 containerd[1582]: time="2025-07-11T00:22:46.461493303Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 11 00:22:46.463565 containerd[1582]: time="2025-07-11T00:22:46.463347941Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:46.469447 containerd[1582]: time="2025-07-11T00:22:46.469402916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:46.470241 containerd[1582]: time="2025-07-11T00:22:46.470189591Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 769.227606ms" Jul 11 00:22:46.470307 containerd[1582]: time="2025-07-11T00:22:46.470247230Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 11 00:22:46.470843 containerd[1582]: time="2025-07-11T00:22:46.470801669Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 11 00:22:46.639115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:22:46.645432 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:22:46.825227 kubelet[2098]: E0711 00:22:46.825025 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:22:46.829746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:22:46.830035 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:22:47.733080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986345654.mount: Deactivated successfully. 
Jul 11 00:22:50.268235 containerd[1582]: time="2025-07-11T00:22:50.268132430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:50.269102 containerd[1582]: time="2025-07-11T00:22:50.268950103Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 11 00:22:50.270427 containerd[1582]: time="2025-07-11T00:22:50.270397408Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:50.274570 containerd[1582]: time="2025-07-11T00:22:50.274523035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:50.276241 containerd[1582]: time="2025-07-11T00:22:50.276141961Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.805305836s" Jul 11 00:22:50.276241 containerd[1582]: time="2025-07-11T00:22:50.276227551Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 11 00:22:52.959993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:22:52.973542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:22:53.010940 systemd[1]: Reloading requested from client PID 2191 ('systemctl') (unit session-7.scope)... Jul 11 00:22:53.010967 systemd[1]: Reloading... Jul 11 00:22:53.126234 zram_generator::config[2235]: No configuration found. Jul 11 00:22:54.149397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:22:54.238091 systemd[1]: Reloading finished in 1226 ms. Jul 11 00:22:54.287490 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 00:22:54.287626 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 00:22:54.288089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:22:54.290278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:22:54.484441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:22:54.498986 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:22:54.571425 kubelet[2290]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:22:54.571425 kubelet[2290]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 11 00:22:54.571425 kubelet[2290]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:22:54.571988 kubelet[2290]: I0711 00:22:54.571519 2290 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:22:55.126836 kubelet[2290]: I0711 00:22:55.126755 2290 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:22:55.126836 kubelet[2290]: I0711 00:22:55.126807 2290 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:22:55.127174 kubelet[2290]: I0711 00:22:55.127137 2290 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:22:55.153827 kubelet[2290]: E0711 00:22:55.153739 2290 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:55.155342 kubelet[2290]: I0711 00:22:55.155262 2290 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:22:55.167714 kubelet[2290]: E0711 00:22:55.167645 2290 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:22:55.167714 kubelet[2290]: I0711 00:22:55.167699 2290 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:22:55.175292 kubelet[2290]: I0711 00:22:55.175244 2290 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:22:55.175703 kubelet[2290]: I0711 00:22:55.175649 2290 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:22:55.175905 kubelet[2290]: I0711 00:22:55.175832 2290 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:22:55.176105 kubelet[2290]: I0711 00:22:55.175893 2290 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 11 00:22:55.176260 kubelet[2290]: I0711 00:22:55.176122 2290 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:22:55.176260 kubelet[2290]: I0711 00:22:55.176133 2290 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:22:55.176365 kubelet[2290]: I0711 00:22:55.176341 2290 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:22:55.181582 kubelet[2290]: I0711 00:22:55.181535 2290 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:22:55.181582 kubelet[2290]: I0711 00:22:55.181582 2290 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:22:55.181694 kubelet[2290]: I0711 00:22:55.181671 2290 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:22:55.181731 kubelet[2290]: I0711 00:22:55.181713 2290 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:22:55.185159 kubelet[2290]: I0711 00:22:55.185000 2290 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:22:55.185595 kubelet[2290]: I0711 00:22:55.185562 2290 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:22:55.186371 kubelet[2290]: W0711 00:22:55.185688 2290 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
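Every "dial tcp 10.0.0.132:6443: connect: connection refused" from here on is the kubelet polling an API server that is not listening yet; on a bootstrapping control-plane node this clears once the static kube-apiserver pod comes up, which is why the lease and node-status errors that follow retry rather than abort. A direct reachability check from the node (plain curl; -k because the serving certificate is not trusted yet):

  curl -sk https://10.0.0.132:6443/healthz ; echo
  # "connection refused" until kube-apiserver listens, then typically "ok"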
Jul 11 00:22:55.186371 kubelet[2290]: W0711 00:22:55.186101 2290 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jul 11 00:22:55.186371 kubelet[2290]: W0711 00:22:55.186217 2290 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jul 11 00:22:55.186371 kubelet[2290]: E0711 00:22:55.186323 2290 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:55.186371 kubelet[2290]: E0711 00:22:55.186261 2290 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:55.188278 kubelet[2290]: I0711 00:22:55.188244 2290 server.go:1274] "Started kubelet" Jul 11 00:22:55.188387 kubelet[2290]: I0711 00:22:55.188347 2290 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:22:55.188954 kubelet[2290]: I0711 00:22:55.188910 2290 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:22:55.189479 kubelet[2290]: I0711 00:22:55.189448 2290 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:22:55.189848 kubelet[2290]: I0711 00:22:55.189815 2290 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:22:55.198757 kubelet[2290]: I0711 00:22:55.198706 2290 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:22:55.201136 kubelet[2290]: I0711 00:22:55.198789 2290 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:22:55.201777 kubelet[2290]: I0711 00:22:55.201739 2290 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:22:55.202564 kubelet[2290]: E0711 00:22:55.201915 2290 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:55.203679 kubelet[2290]: I0711 00:22:55.203636 2290 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:22:55.203833 kubelet[2290]: I0711 00:22:55.203806 2290 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:22:55.204679 kubelet[2290]: E0711 00:22:55.204622 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms" Jul 11 00:22:55.205273 kubelet[2290]: W0711 00:22:55.205173 2290 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jul 11 00:22:55.205320 kubelet[2290]: E0711 00:22:55.205295 2290 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:55.205385 kubelet[2290]: I0711 00:22:55.205360 2290 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:22:55.205525 kubelet[2290]: I0711 00:22:55.205484 2290 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:22:55.207360 kubelet[2290]: E0711 00:22:55.206541 2290 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:22:55.207578 kubelet[2290]: I0711 00:22:55.207538 2290 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:22:55.208039 kubelet[2290]: E0711 00:22:55.204361 2290 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a9759e271d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:22:55.188185555 +0000 UTC m=+0.671906936,LastTimestamp:2025-07-11 00:22:55.188185555 +0000 UTC m=+0.671906936,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:22:55.235040 kubelet[2290]: I0711 00:22:55.234962 2290 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:22:55.238587 kubelet[2290]: I0711 00:22:55.237537 2290 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 00:22:55.238587 kubelet[2290]: I0711 00:22:55.237577 2290 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:22:55.238587 kubelet[2290]: I0711 00:22:55.237612 2290 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:22:55.238587 kubelet[2290]: E0711 00:22:55.237677 2290 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:22:55.238773 kubelet[2290]: W0711 00:22:55.238750 2290 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jul 11 00:22:55.238813 kubelet[2290]: E0711 00:22:55.238789 2290 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:55.240151 kubelet[2290]: I0711 00:22:55.240120 2290 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:22:55.240151 kubelet[2290]: I0711 00:22:55.240141 2290 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:22:55.240251 kubelet[2290]: I0711 00:22:55.240163 2290 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:22:55.302967 kubelet[2290]: E0711 00:22:55.302801 2290 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:55.338373 kubelet[2290]: E0711 00:22:55.338252 2290 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:22:55.403882 kubelet[2290]: E0711 00:22:55.403615 2290 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:55.405466 kubelet[2290]: E0711 00:22:55.405397 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms" Jul 11 00:22:55.504595 kubelet[2290]: E0711 00:22:55.504533 2290 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:55.538764 kubelet[2290]: E0711 00:22:55.538625 2290 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:22:55.568244 kubelet[2290]: I0711 00:22:55.568090 2290 policy_none.go:49] "None policy: Start" Jul 11 00:22:55.569490 kubelet[2290]: I0711 00:22:55.569438 2290 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:22:55.569490 kubelet[2290]: I0711 00:22:55.569494 2290 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:22:55.604976 kubelet[2290]: E0711 00:22:55.604792 2290 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:22:55.677724 kubelet[2290]: I0711 00:22:55.677515 2290 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:22:55.678266 kubelet[2290]: I0711 00:22:55.677936 2290 
eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:22:55.678266 kubelet[2290]: I0711 00:22:55.677963 2290 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:22:55.678386 kubelet[2290]: I0711 00:22:55.678329 2290 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:22:55.679874 kubelet[2290]: E0711 00:22:55.679840 2290 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:22:55.782581 kubelet[2290]: I0711 00:22:55.782513 2290 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:22:55.783239 kubelet[2290]: E0711 00:22:55.783158 2290 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Jul 11 00:22:55.807472 kubelet[2290]: E0711 00:22:55.807390 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms" Jul 11 00:22:55.985638 kubelet[2290]: I0711 00:22:55.985589 2290 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:22:55.986156 kubelet[2290]: E0711 00:22:55.986108 2290 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Jul 11 00:22:56.008775 kubelet[2290]: I0711 00:22:56.008702 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:56.008928 kubelet[2290]: I0711 00:22:56.008769 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1cfb5b794673652c30d441a0c8d4d450-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1cfb5b794673652c30d441a0c8d4d450\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:56.008928 kubelet[2290]: I0711 00:22:56.008881 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1cfb5b794673652c30d441a0c8d4d450-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1cfb5b794673652c30d441a0c8d4d450\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:56.008928 kubelet[2290]: I0711 00:22:56.008910 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:56.009070 kubelet[2290]: I0711 00:22:56.008937 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:56.009070 kubelet[2290]: I0711 00:22:56.008962 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1cfb5b794673652c30d441a0c8d4d450-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1cfb5b794673652c30d441a0c8d4d450\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:22:56.009126 kubelet[2290]: I0711 00:22:56.009061 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:56.009154 kubelet[2290]: I0711 00:22:56.009120 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:22:56.009154 kubelet[2290]: I0711 00:22:56.009149 2290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:22:56.198502 kubelet[2290]: W0711 00:22:56.198382 2290 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jul 11 00:22:56.198502 kubelet[2290]: E0711 00:22:56.198488 2290 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:56.247468 kubelet[2290]: E0711 00:22:56.247318 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:56.248352 containerd[1582]: time="2025-07-11T00:22:56.248288970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1cfb5b794673652c30d441a0c8d4d450,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:56.248855 kubelet[2290]: E0711 00:22:56.248770 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:56.249173 containerd[1582]: time="2025-07-11T00:22:56.249138386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:56.257703 kubelet[2290]: E0711 00:22:56.257652 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:56.258120 containerd[1582]: time="2025-07-11T00:22:56.258088202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 11 00:22:56.267964 kubelet[2290]: W0711 00:22:56.267872 2290 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jul 11 00:22:56.267964 kubelet[2290]: E0711 00:22:56.267943 2290 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:56.342799 kubelet[2290]: W0711 00:22:56.342690 2290 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jul 11 00:22:56.342799 kubelet[2290]: E0711 00:22:56.342782 2290 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:56.385001 kubelet[2290]: W0711 00:22:56.384880 2290 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Jul 11 00:22:56.385001 kubelet[2290]: E0711 00:22:56.384990 2290 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:56.387701 kubelet[2290]: I0711 00:22:56.387640 2290 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:22:56.388072 kubelet[2290]: E0711 00:22:56.388023 2290 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Jul 11 00:22:56.609076 kubelet[2290]: E0711 00:22:56.608931 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="1.6s" Jul 11 00:22:56.754373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476358618.mount: Deactivated successfully. 
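
The "Failed to ensure lease exists, will retry" errors above climb from interval="200ms" through "400ms" and "800ms" to "1.6s": a doubling backoff while the API server at 10.0.0.132:6443 still refuses connections. A stand-alone Go sketch of that capped doubling; the cap value is an assumption, since the log only shows the first four intervals:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        const maxInterval = 7 * time.Second // assumed cap, not taken from the log

        for attempt := 1; attempt <= 5; attempt++ {
            // In the kubelet this is a failed lease request against the
            // API server; here we just report and wait.
            fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
            time.Sleep(interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }
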
Jul 11 00:22:56.764234 containerd[1582]: time="2025-07-11T00:22:56.764162131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:22:56.765294 containerd[1582]: time="2025-07-11T00:22:56.765251969Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:22:56.766325 containerd[1582]: time="2025-07-11T00:22:56.766280368Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:22:56.767299 containerd[1582]: time="2025-07-11T00:22:56.767260516Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:22:56.768794 containerd[1582]: time="2025-07-11T00:22:56.768701617Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:22:56.769784 containerd[1582]: time="2025-07-11T00:22:56.769734185Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:22:56.770607 containerd[1582]: time="2025-07-11T00:22:56.770506775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 11 00:22:56.773883 containerd[1582]: time="2025-07-11T00:22:56.773846903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:22:56.775492 containerd[1582]: time="2025-07-11T00:22:56.775461266Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 517.309132ms" Jul 11 00:22:56.776474 containerd[1582]: time="2025-07-11T00:22:56.776389353Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 527.163308ms" Jul 11 00:22:56.777658 containerd[1582]: time="2025-07-11T00:22:56.777606144Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 529.208427ms" Jul 11 00:22:57.028616 containerd[1582]: time="2025-07-11T00:22:57.028424861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:57.028821 containerd[1582]: time="2025-07-11T00:22:57.028655573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:57.029937 containerd[1582]: time="2025-07-11T00:22:57.029616230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:57.029937 containerd[1582]: time="2025-07-11T00:22:57.029821693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:57.031778 containerd[1582]: time="2025-07-11T00:22:57.030936195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:57.031778 containerd[1582]: time="2025-07-11T00:22:57.030988696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:57.031778 containerd[1582]: time="2025-07-11T00:22:57.031007442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:57.031778 containerd[1582]: time="2025-07-11T00:22:57.031028713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:57.031778 containerd[1582]: time="2025-07-11T00:22:57.031106983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:57.031778 containerd[1582]: time="2025-07-11T00:22:57.031125938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:57.031778 containerd[1582]: time="2025-07-11T00:22:57.031275164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:57.031778 containerd[1582]: time="2025-07-11T00:22:57.031682292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:57.164444 kubelet[2290]: E0711 00:22:57.164355 2290 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:22:57.189046 kubelet[2290]: I0711 00:22:57.189001 2290 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:22:57.189711 kubelet[2290]: E0711 00:22:57.189260 2290 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Jul 11 00:22:57.199756 containerd[1582]: time="2025-07-11T00:22:57.199683554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"72333d200be68c894edea01a67169c64a52f8fc837539f81a4df4f315f6fe17c\"" Jul 11 00:22:57.202290 kubelet[2290]: E0711 00:22:57.202247 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:57.205306 containerd[1582]: time="2025-07-11T00:22:57.204929320Z" level=info msg="CreateContainer within sandbox \"72333d200be68c894edea01a67169c64a52f8fc837539f81a4df4f315f6fe17c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:22:57.207912 containerd[1582]: time="2025-07-11T00:22:57.207877199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1cfb5b794673652c30d441a0c8d4d450,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8bdfa50167e2c39442749188525818d8bae71abedda71215d6528baa29b4644\"" Jul 11 00:22:57.208553 kubelet[2290]: E0711 00:22:57.208524 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:57.210851 containerd[1582]: time="2025-07-11T00:22:57.210826723Z" level=info msg="CreateContainer within sandbox \"f8bdfa50167e2c39442749188525818d8bae71abedda71215d6528baa29b4644\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:22:57.211985 containerd[1582]: time="2025-07-11T00:22:57.211947016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5e7f3bb9530af239994a15fe9f2556fd8f6ca1cdab7c9dc1b509c28bd66faab\"" Jul 11 00:22:57.212505 kubelet[2290]: E0711 00:22:57.212472 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:57.214299 containerd[1582]: time="2025-07-11T00:22:57.214248720Z" level=info msg="CreateContainer within sandbox \"c5e7f3bb9530af239994a15fe9f2556fd8f6ca1cdab7c9dc1b509c28bd66faab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:22:57.512235 containerd[1582]: time="2025-07-11T00:22:57.512130722Z" level=info msg="CreateContainer within sandbox 
\"72333d200be68c894edea01a67169c64a52f8fc837539f81a4df4f315f6fe17c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"71c3e9beb751b17e368b56667ff61fa43e0e1b8823ec5fdd00c08f25b5ed80d5\"" Jul 11 00:22:57.513144 containerd[1582]: time="2025-07-11T00:22:57.513101609Z" level=info msg="StartContainer for \"71c3e9beb751b17e368b56667ff61fa43e0e1b8823ec5fdd00c08f25b5ed80d5\"" Jul 11 00:22:57.520480 containerd[1582]: time="2025-07-11T00:22:57.520405653Z" level=info msg="CreateContainer within sandbox \"f8bdfa50167e2c39442749188525818d8bae71abedda71215d6528baa29b4644\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2166a82355cc5868262323b39239d54d1e63868b492758805bd7a80a2d046cce\"" Jul 11 00:22:57.521494 containerd[1582]: time="2025-07-11T00:22:57.521426025Z" level=info msg="StartContainer for \"2166a82355cc5868262323b39239d54d1e63868b492758805bd7a80a2d046cce\"" Jul 11 00:22:57.523369 containerd[1582]: time="2025-07-11T00:22:57.523324688Z" level=info msg="CreateContainer within sandbox \"c5e7f3bb9530af239994a15fe9f2556fd8f6ca1cdab7c9dc1b509c28bd66faab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d7c5a5b6b695079c7d0c99da16fcf8e65b006f4d99adf33264d4e9132430f8fa\"" Jul 11 00:22:57.523846 containerd[1582]: time="2025-07-11T00:22:57.523820505Z" level=info msg="StartContainer for \"d7c5a5b6b695079c7d0c99da16fcf8e65b006f4d99adf33264d4e9132430f8fa\"" Jul 11 00:22:57.639746 containerd[1582]: time="2025-07-11T00:22:57.639546787Z" level=info msg="StartContainer for \"2166a82355cc5868262323b39239d54d1e63868b492758805bd7a80a2d046cce\" returns successfully" Jul 11 00:22:57.713328 containerd[1582]: time="2025-07-11T00:22:57.712952262Z" level=info msg="StartContainer for \"d7c5a5b6b695079c7d0c99da16fcf8e65b006f4d99adf33264d4e9132430f8fa\" returns successfully" Jul 11 00:22:57.719222 containerd[1582]: time="2025-07-11T00:22:57.718642760Z" level=info msg="StartContainer for \"71c3e9beb751b17e368b56667ff61fa43e0e1b8823ec5fdd00c08f25b5ed80d5\" returns successfully" Jul 11 00:22:58.259816 kubelet[2290]: E0711 00:22:58.259763 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:58.263002 kubelet[2290]: E0711 00:22:58.262755 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:58.268021 kubelet[2290]: E0711 00:22:58.267979 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:58.801934 kubelet[2290]: I0711 00:22:58.801386 2290 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:22:59.273342 kubelet[2290]: E0711 00:22:59.273301 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:59.273958 kubelet[2290]: E0711 00:22:59.273934 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:59.274257 kubelet[2290]: E0711 00:22:59.274238 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:59.523109 kubelet[2290]: E0711 00:22:59.523027 2290 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:22:59.600977 kubelet[2290]: I0711 00:22:59.599178 2290 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:23:00.185828 kubelet[2290]: I0711 00:23:00.185758 2290 apiserver.go:52] "Watching apiserver" Jul 11 00:23:00.204253 kubelet[2290]: I0711 00:23:00.204151 2290 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:23:00.275890 kubelet[2290]: E0711 00:23:00.275830 2290 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 11 00:23:00.276447 kubelet[2290]: E0711 00:23:00.276038 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:03.200145 kubelet[2290]: E0711 00:23:03.200092 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:03.274765 kubelet[2290]: E0711 00:23:03.274700 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:03.422596 kubelet[2290]: E0711 00:23:03.422529 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:04.277887 kubelet[2290]: E0711 00:23:04.277314 2290 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:05.277799 kubelet[2290]: I0711 00:23:05.277477 2290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.277442308 podStartE2EDuration="2.277442308s" podCreationTimestamp="2025-07-11 00:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:23:05.263357276 +0000 UTC m=+10.747078657" watchObservedRunningTime="2025-07-11 00:23:05.277442308 +0000 UTC m=+10.761163679" Jul 11 00:23:05.652360 systemd[1]: Reloading requested from client PID 2568 ('systemctl') (unit session-7.scope)... Jul 11 00:23:05.652384 systemd[1]: Reloading... Jul 11 00:23:05.759247 zram_generator::config[2607]: No configuration found. Jul 11 00:23:05.897306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:23:06.004529 systemd[1]: Reloading finished in 351 ms. Jul 11 00:23:06.047066 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:23:06.072501 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:23:06.073096 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
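
The recurring dns.go:153 warnings are the kubelet capping a pod's resolv.conf at three nameservers, the classic glibc resolver limit, and dropping the rest; every instance applies the same line "1.1.1.1 1.0.0.1 8.8.8.8". A stand-alone sketch of that trimming, assuming a plain slice of server addresses rather than the kubelet's resolv.conf parser:

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // glibc's historical resolver limit

    // applyNameserverLimit keeps the first three servers and reports
    // whether any were omitted, matching the warning in the log.
    func applyNameserverLimit(servers []string) ([]string, bool) {
        if len(servers) <= maxNameservers {
            return servers, false
        }
        return servers[:maxNameservers], true
    }

    func main() {
        all := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        applied, omitted := applyNameserverLimit(all)
        fmt.Printf("applied nameserver line: %s (omitted: %v)\n",
            strings.Join(applied, " "), omitted)
    }
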
Jul 11 00:23:06.087908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:23:06.302527 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:23:06.311532 (kubelet)[2662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:23:06.367219 kubelet[2662]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:23:06.367219 kubelet[2662]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 00:23:06.367219 kubelet[2662]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:23:06.367720 kubelet[2662]: I0711 00:23:06.367288 2662 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:23:06.375620 kubelet[2662]: I0711 00:23:06.375569 2662 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:23:06.375620 kubelet[2662]: I0711 00:23:06.375604 2662 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:23:06.375926 kubelet[2662]: I0711 00:23:06.375881 2662 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:23:06.377489 kubelet[2662]: I0711 00:23:06.377463 2662 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 00:23:06.379713 kubelet[2662]: I0711 00:23:06.379689 2662 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:23:06.385469 kubelet[2662]: E0711 00:23:06.385418 2662 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:23:06.385469 kubelet[2662]: I0711 00:23:06.385463 2662 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:23:06.393250 kubelet[2662]: I0711 00:23:06.393143 2662 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:23:06.393998 kubelet[2662]: I0711 00:23:06.393964 2662 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:23:06.394205 kubelet[2662]: I0711 00:23:06.394126 2662 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:23:06.394432 kubelet[2662]: I0711 00:23:06.394176 2662 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 11 00:23:06.394566 kubelet[2662]: I0711 00:23:06.394440 2662 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:23:06.394566 kubelet[2662]: I0711 00:23:06.394461 2662 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:23:06.394566 kubelet[2662]: I0711 00:23:06.394506 2662 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:23:06.394685 kubelet[2662]: I0711 00:23:06.394657 2662 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:23:06.394685 kubelet[2662]: I0711 00:23:06.394682 2662 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:23:06.394792 kubelet[2662]: I0711 00:23:06.394751 2662 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:23:06.394792 kubelet[2662]: I0711 00:23:06.394768 2662 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:23:06.401378 kubelet[2662]: I0711 00:23:06.399350 2662 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:23:06.401378 kubelet[2662]: I0711 00:23:06.399807 2662 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:23:06.401378 kubelet[2662]: I0711 00:23:06.400339 2662 server.go:1274] "Started kubelet" Jul 11 00:23:06.401378 kubelet[2662]: I0711 00:23:06.400676 2662 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 
00:23:06.402486 kubelet[2662]: I0711 00:23:06.401787 2662 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:23:06.402486 kubelet[2662]: I0711 00:23:06.402107 2662 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:23:06.409794 kubelet[2662]: I0711 00:23:06.409511 2662 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:23:06.442616 kubelet[2662]: I0711 00:23:06.442068 2662 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:23:06.444945 kubelet[2662]: I0711 00:23:06.444438 2662 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:23:06.444945 kubelet[2662]: E0711 00:23:06.444888 2662 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:23:06.445760 kubelet[2662]: I0711 00:23:06.445722 2662 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:23:06.447468 kubelet[2662]: I0711 00:23:06.447415 2662 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:23:06.448318 kubelet[2662]: I0711 00:23:06.448292 2662 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:23:06.449144 kubelet[2662]: I0711 00:23:06.449097 2662 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:23:06.449370 kubelet[2662]: I0711 00:23:06.449327 2662 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:23:06.456341 kubelet[2662]: E0711 00:23:06.456274 2662 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:23:06.456824 kubelet[2662]: I0711 00:23:06.456800 2662 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:23:06.460964 kubelet[2662]: I0711 00:23:06.460709 2662 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:23:06.466277 kubelet[2662]: I0711 00:23:06.465933 2662 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 00:23:06.466277 kubelet[2662]: I0711 00:23:06.465976 2662 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:23:06.466277 kubelet[2662]: I0711 00:23:06.466003 2662 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:23:06.466277 kubelet[2662]: E0711 00:23:06.466076 2662 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:23:06.861437 kubelet[2662]: I0711 00:23:06.554395 2662 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:23:06.861437 kubelet[2662]: I0711 00:23:06.554443 2662 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:23:06.861437 kubelet[2662]: I0711 00:23:06.554496 2662 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:23:06.861437 kubelet[2662]: I0711 00:23:06.554871 2662 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:23:06.861437 kubelet[2662]: I0711 00:23:06.554904 2662 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:23:06.861437 kubelet[2662]: I0711 00:23:06.554954 2662 policy_none.go:49] "None policy: Start" Jul 11 00:23:06.861437 kubelet[2662]: I0711 00:23:06.556568 2662 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:23:06.861437 kubelet[2662]: I0711 00:23:06.556664 2662 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:23:06.861437 kubelet[2662]: I0711 00:23:06.557116 2662 state_mem.go:75] "Updated machine memory state" Jul 11 00:23:06.861437 kubelet[2662]: I0711 00:23:06.564499 2662 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:23:06.861437 kubelet[2662]: I0711 00:23:06.565116 2662 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:23:06.861437 kubelet[2662]: I0711 00:23:06.565156 2662 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:23:06.861437 kubelet[2662]: I0711 00:23:06.565809 2662 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:23:06.963947 update_engine[1563]: I20250711 00:23:06.963814 1563 update_attempter.cc:509] Updating boot flags... 
Jul 11 00:23:06.965535 kubelet[2662]: I0711 00:23:06.964742 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:23:06.965535 kubelet[2662]: I0711 00:23:06.964804 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1cfb5b794673652c30d441a0c8d4d450-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1cfb5b794673652c30d441a0c8d4d450\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:23:06.965535 kubelet[2662]: I0711 00:23:06.964863 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1cfb5b794673652c30d441a0c8d4d450-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1cfb5b794673652c30d441a0c8d4d450\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:23:06.965535 kubelet[2662]: I0711 00:23:06.964919 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:23:06.965535 kubelet[2662]: I0711 00:23:06.964949 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:23:06.965692 kubelet[2662]: I0711 00:23:06.964971 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1cfb5b794673652c30d441a0c8d4d450-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1cfb5b794673652c30d441a0c8d4d450\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:23:06.965692 kubelet[2662]: I0711 00:23:06.964991 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:23:06.965692 kubelet[2662]: I0711 00:23:06.965029 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:23:06.965692 kubelet[2662]: I0711 00:23:06.965052 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " 
pod="kube-system/kube-scheduler-localhost" Jul 11 00:23:06.980361 kubelet[2662]: I0711 00:23:06.980315 2662 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:23:07.074471 kubelet[2662]: E0711 00:23:07.074416 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:07.381947 kubelet[2662]: E0711 00:23:07.381884 2662 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:23:07.382590 kubelet[2662]: E0711 00:23:07.382054 2662 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:23:07.382590 kubelet[2662]: E0711 00:23:07.382107 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:07.382590 kubelet[2662]: E0711 00:23:07.382242 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:07.396503 kubelet[2662]: I0711 00:23:07.396415 2662 apiserver.go:52] "Watching apiserver" Jul 11 00:23:07.446552 kubelet[2662]: I0711 00:23:07.446490 2662 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:23:07.483606 kubelet[2662]: E0711 00:23:07.483535 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:07.483782 kubelet[2662]: E0711 00:23:07.483611 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:07.726750 kubelet[2662]: I0711 00:23:07.725664 2662 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 11 00:23:07.726750 kubelet[2662]: I0711 00:23:07.725831 2662 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:23:07.941449 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2698) Jul 11 00:23:08.058224 kubelet[2662]: E0711 00:23:08.058026 2662 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:23:08.058364 kubelet[2662]: E0711 00:23:08.058239 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:08.170307 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2701) Jul 11 00:23:08.260344 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2701) Jul 11 00:23:08.485208 kubelet[2662]: E0711 00:23:08.485175 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:09.157506 kubelet[2662]: I0711 00:23:09.157416 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.157388191 podStartE2EDuration="3.157388191s" podCreationTimestamp="2025-07-11 00:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:23:09.102309119 +0000 UTC m=+2.784796632" watchObservedRunningTime="2025-07-11 00:23:09.157388191 +0000 UTC m=+2.839875704" Jul 11 00:23:10.219947 kubelet[2662]: I0711 00:23:10.219873 2662 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:23:10.220781 containerd[1582]: time="2025-07-11T00:23:10.220644972Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:23:10.221176 kubelet[2662]: I0711 00:23:10.220894 2662 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:23:11.096190 kubelet[2662]: I0711 00:23:11.096106 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5e134eb-950b-485b-ad3e-f0d96fd645e4-xtables-lock\") pod \"kube-proxy-tzrvj\" (UID: \"b5e134eb-950b-485b-ad3e-f0d96fd645e4\") " pod="kube-system/kube-proxy-tzrvj" Jul 11 00:23:11.096190 kubelet[2662]: I0711 00:23:11.096173 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkvqg\" (UniqueName: \"kubernetes.io/projected/b5e134eb-950b-485b-ad3e-f0d96fd645e4-kube-api-access-hkvqg\") pod \"kube-proxy-tzrvj\" (UID: \"b5e134eb-950b-485b-ad3e-f0d96fd645e4\") " pod="kube-system/kube-proxy-tzrvj" Jul 11 00:23:11.096190 kubelet[2662]: I0711 00:23:11.096236 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b5e134eb-950b-485b-ad3e-f0d96fd645e4-kube-proxy\") pod \"kube-proxy-tzrvj\" (UID: \"b5e134eb-950b-485b-ad3e-f0d96fd645e4\") " pod="kube-system/kube-proxy-tzrvj" Jul 11 00:23:11.096517 kubelet[2662]: I0711 00:23:11.096261 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5e134eb-950b-485b-ad3e-f0d96fd645e4-lib-modules\") pod \"kube-proxy-tzrvj\" (UID: \"b5e134eb-950b-485b-ad3e-f0d96fd645e4\") " pod="kube-system/kube-proxy-tzrvj" Jul 11 00:23:11.398555 kubelet[2662]: I0711 00:23:11.398363 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fkqw\" (UniqueName: \"kubernetes.io/projected/067bcdf3-889c-45e1-b34d-9b6618bafa34-kube-api-access-6fkqw\") pod \"tigera-operator-5bf8dfcb4-zrq77\" (UID: \"067bcdf3-889c-45e1-b34d-9b6618bafa34\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-zrq77" Jul 11 00:23:11.398555 kubelet[2662]: I0711 00:23:11.398419 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/067bcdf3-889c-45e1-b34d-9b6618bafa34-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-zrq77\" (UID: \"067bcdf3-889c-45e1-b34d-9b6618bafa34\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-zrq77" Jul 11 00:23:11.580003 kubelet[2662]: E0711 00:23:11.579948 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jul 11 00:23:11.580740 containerd[1582]: time="2025-07-11T00:23:11.580664131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tzrvj,Uid:b5e134eb-950b-485b-ad3e-f0d96fd645e4,Namespace:kube-system,Attempt:0,}" Jul 11 00:23:11.635161 containerd[1582]: time="2025-07-11T00:23:11.635079974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-zrq77,Uid:067bcdf3-889c-45e1-b34d-9b6618bafa34,Namespace:tigera-operator,Attempt:0,}" Jul 11 00:23:12.065287 containerd[1582]: time="2025-07-11T00:23:12.064986463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:23:12.065610 containerd[1582]: time="2025-07-11T00:23:12.065113733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:23:12.065610 containerd[1582]: time="2025-07-11T00:23:12.065242105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:12.065610 containerd[1582]: time="2025-07-11T00:23:12.065473714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:12.079801 containerd[1582]: time="2025-07-11T00:23:12.079138355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:23:12.079801 containerd[1582]: time="2025-07-11T00:23:12.079262159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:23:12.079801 containerd[1582]: time="2025-07-11T00:23:12.079284431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:12.079801 containerd[1582]: time="2025-07-11T00:23:12.079410259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:12.127867 containerd[1582]: time="2025-07-11T00:23:12.127795984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tzrvj,Uid:b5e134eb-950b-485b-ad3e-f0d96fd645e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a1bd88ed3aab13d26cd13e62d0eb2cc480e6d8390a168b961a96364720bf00e\"" Jul 11 00:23:12.129775 kubelet[2662]: E0711 00:23:12.129716 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:12.132370 containerd[1582]: time="2025-07-11T00:23:12.132325524Z" level=info msg="CreateContainer within sandbox \"9a1bd88ed3aab13d26cd13e62d0eb2cc480e6d8390a168b961a96364720bf00e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:23:12.150118 containerd[1582]: time="2025-07-11T00:23:12.149994002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-zrq77,Uid:067bcdf3-889c-45e1-b34d-9b6618bafa34,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7ff63c7e9781ed8b41328f073d12c7f9b03ebf43997977ca125813ae975f7b09\"" Jul 11 00:23:12.151857 containerd[1582]: time="2025-07-11T00:23:12.151827107Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 11 00:23:12.665923 containerd[1582]: time="2025-07-11T00:23:12.665795438Z" level=info msg="CreateContainer within sandbox \"9a1bd88ed3aab13d26cd13e62d0eb2cc480e6d8390a168b961a96364720bf00e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"026ab372ba309f6b5079def2b5c707d8ed45aaa354b2a9f54d0873e5d51969d8\"" Jul 11 00:23:12.667027 containerd[1582]: time="2025-07-11T00:23:12.666775289Z" level=info msg="StartContainer for \"026ab372ba309f6b5079def2b5c707d8ed45aaa354b2a9f54d0873e5d51969d8\"" Jul 11 00:23:12.754802 containerd[1582]: time="2025-07-11T00:23:12.754727474Z" level=info msg="StartContainer for \"026ab372ba309f6b5079def2b5c707d8ed45aaa354b2a9f54d0873e5d51969d8\" returns successfully" Jul 11 00:23:13.499357 kubelet[2662]: E0711 00:23:13.499301 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:13.636579 kubelet[2662]: I0711 00:23:13.636501 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tzrvj" podStartSLOduration=3.636475982 podStartE2EDuration="3.636475982s" podCreationTimestamp="2025-07-11 00:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:23:13.636260986 +0000 UTC m=+7.318748519" watchObservedRunningTime="2025-07-11 00:23:13.636475982 +0000 UTC m=+7.318963515" Jul 11 00:23:13.865063 kubelet[2662]: E0711 00:23:13.864843 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:14.260342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1242349595.mount: Deactivated successfully. 
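
The containerd lines above trace the CRI call sequence for kube-proxy: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer reports success. A trimmed Go sketch of those three calls against containerd's socket, using the real k8s.io/cri-api types but only the metadata fields visible in the log; a working CreateContainer would also need an image spec, mounts, and log paths:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's CRI endpoint; socket path assumed from the stock layout.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox: create the pause sandbox (metadata from the log).
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-tzrvj",
                Namespace: "kube-system",
                Uid:       "b5e134eb-950b-485b-ad3e-f0d96fd645e4",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // 2. CreateContainer inside that sandbox (config trimmed for brevity).
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            Config:        &runtimeapi.ContainerConfig{Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"}},
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. StartContainer: the "returns successfully" lines above.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            log.Fatal(err)
        }
        fmt.Println("started", ctr.ContainerId, "in sandbox", sb.PodSandboxId)
    }
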
Jul 11 00:23:14.501185 kubelet[2662]: E0711 00:23:14.501123 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:14.501850 kubelet[2662]: E0711 00:23:14.501806 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:14.827797 containerd[1582]: time="2025-07-11T00:23:14.827685991Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:14.829613 containerd[1582]: time="2025-07-11T00:23:14.828447659Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 11 00:23:14.830374 containerd[1582]: time="2025-07-11T00:23:14.830318080Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:14.833879 containerd[1582]: time="2025-07-11T00:23:14.833834900Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:14.835537 containerd[1582]: time="2025-07-11T00:23:14.835454047Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.683559173s" Jul 11 00:23:14.835595 containerd[1582]: time="2025-07-11T00:23:14.835535180Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 11 00:23:14.840130 containerd[1582]: time="2025-07-11T00:23:14.840064251Z" level=info msg="CreateContainer within sandbox \"7ff63c7e9781ed8b41328f073d12c7f9b03ebf43997977ca125813ae975f7b09\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 11 00:23:14.866414 containerd[1582]: time="2025-07-11T00:23:14.866348204Z" level=info msg="CreateContainer within sandbox \"7ff63c7e9781ed8b41328f073d12c7f9b03ebf43997977ca125813ae975f7b09\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f5689a7d94c5957b65aecdf71ed73d411872fb78b27b1c05605299a6f7b2d348\"" Jul 11 00:23:14.866886 containerd[1582]: time="2025-07-11T00:23:14.866854390Z" level=info msg="StartContainer for \"f5689a7d94c5957b65aecdf71ed73d411872fb78b27b1c05605299a6f7b2d348\"" Jul 11 00:23:14.939319 containerd[1582]: time="2025-07-11T00:23:14.939250502Z" level=info msg="StartContainer for \"f5689a7d94c5957b65aecdf71ed73d411872fb78b27b1c05605299a6f7b2d348\" returns successfully" Jul 11 00:23:15.502996 kubelet[2662]: E0711 00:23:15.502939 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:15.509251 kubelet[2662]: E0711 00:23:15.509185 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 
00:23:16.510697 kubelet[2662]: E0711 00:23:16.510653 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:16.774977 kubelet[2662]: I0711 00:23:16.773777 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-zrq77" podStartSLOduration=3.088257312 podStartE2EDuration="5.773758893s" podCreationTimestamp="2025-07-11 00:23:11 +0000 UTC" firstStartedPulling="2025-07-11 00:23:12.151229798 +0000 UTC m=+5.833717311" lastFinishedPulling="2025-07-11 00:23:14.836731379 +0000 UTC m=+8.519218892" observedRunningTime="2025-07-11 00:23:15.528965119 +0000 UTC m=+9.211452632" watchObservedRunningTime="2025-07-11 00:23:16.773758893 +0000 UTC m=+10.456246396" Jul 11 00:23:16.796580 kubelet[2662]: E0711 00:23:16.796156 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:17.515020 kubelet[2662]: E0711 00:23:17.514980 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:20.440108 sudo[1773]: pam_unix(sudo:session): session closed for user root Jul 11 00:23:20.442896 sshd[1766]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:20.448376 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:59988.service: Deactivated successfully. Jul 11 00:23:20.451692 systemd-logind[1554]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:23:20.451924 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:23:20.454018 systemd-logind[1554]: Removed session 7. 
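The pod_startup_latency_tracker entries are internally consistent. For tigera-operator-5bf8dfcb4-zrq77 above: podStartE2EDuration (5.773758893s) equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (3.088257312s) equals that figure minus the image-pull window from firstStartedPulling to lastFinishedPulling (2.685501581s), i.e. the SLO metric excludes time spent pulling the image. For kube-proxy-tzrvj earlier, both pull timestamps are the zero value, so the two durations coincide at 3.636475982s. A quick check, with the timestamps copied verbatim from the log:

```go
// Sanity check of podStartSLOduration for tigera-operator-5bf8dfcb4-zrq77:
// SLO duration = end-to-end startup time minus the image-pull window.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-07-11 00:23:11 +0000 UTC")             // podCreationTimestamp
	running := parse("2025-07-11 00:23:16.773758893 +0000 UTC")   // watchObservedRunningTime
	pullStart := parse("2025-07-11 00:23:12.151229798 +0000 UTC") // firstStartedPulling
	pullDone := parse("2025-07-11 00:23:14.836731379 +0000 UTC")  // lastFinishedPulling

	e2e := running.Sub(created)            // 5.773758893s
	pull := pullDone.Sub(pullStart)        // 2.685501581s
	fmt.Println("SLO duration:", e2e-pull) // 3.088257312s, matching the log
}
```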
Jul 11 00:23:23.992218 kubelet[2662]: I0711 00:23:23.990241 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d71470c-306d-4dee-8960-e8f4ad70f78c-tigera-ca-bundle\") pod \"calico-typha-644c4bdbd9-q5slx\" (UID: \"9d71470c-306d-4dee-8960-e8f4ad70f78c\") " pod="calico-system/calico-typha-644c4bdbd9-q5slx" Jul 11 00:23:23.993293 kubelet[2662]: I0711 00:23:23.993125 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9d71470c-306d-4dee-8960-e8f4ad70f78c-typha-certs\") pod \"calico-typha-644c4bdbd9-q5slx\" (UID: \"9d71470c-306d-4dee-8960-e8f4ad70f78c\") " pod="calico-system/calico-typha-644c4bdbd9-q5slx" Jul 11 00:23:23.993293 kubelet[2662]: I0711 00:23:23.993156 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f785h\" (UniqueName: \"kubernetes.io/projected/9d71470c-306d-4dee-8960-e8f4ad70f78c-kube-api-access-f785h\") pod \"calico-typha-644c4bdbd9-q5slx\" (UID: \"9d71470c-306d-4dee-8960-e8f4ad70f78c\") " pod="calico-system/calico-typha-644c4bdbd9-q5slx" Jul 11 00:23:24.093972 kubelet[2662]: I0711 00:23:24.093862 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e5420d56-3504-42a1-9763-5dad129e7a10-cni-log-dir\") pod \"calico-node-x9q57\" (UID: \"e5420d56-3504-42a1-9763-5dad129e7a10\") " pod="calico-system/calico-node-x9q57" Jul 11 00:23:24.093972 kubelet[2662]: I0711 00:23:24.093926 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e5420d56-3504-42a1-9763-5dad129e7a10-cni-bin-dir\") pod \"calico-node-x9q57\" (UID: \"e5420d56-3504-42a1-9763-5dad129e7a10\") " pod="calico-system/calico-node-x9q57" Jul 11 00:23:24.093972 kubelet[2662]: I0711 00:23:24.093949 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5420d56-3504-42a1-9763-5dad129e7a10-tigera-ca-bundle\") pod \"calico-node-x9q57\" (UID: \"e5420d56-3504-42a1-9763-5dad129e7a10\") " pod="calico-system/calico-node-x9q57" Jul 11 00:23:24.093972 kubelet[2662]: I0711 00:23:24.093965 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5420d56-3504-42a1-9763-5dad129e7a10-lib-modules\") pod \"calico-node-x9q57\" (UID: \"e5420d56-3504-42a1-9763-5dad129e7a10\") " pod="calico-system/calico-node-x9q57" Jul 11 00:23:24.093972 kubelet[2662]: I0711 00:23:24.093979 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e5420d56-3504-42a1-9763-5dad129e7a10-var-lib-calico\") pod \"calico-node-x9q57\" (UID: \"e5420d56-3504-42a1-9763-5dad129e7a10\") " pod="calico-system/calico-node-x9q57" Jul 11 00:23:24.094333 kubelet[2662]: I0711 00:23:24.094015 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e5420d56-3504-42a1-9763-5dad129e7a10-node-certs\") pod \"calico-node-x9q57\" (UID: \"e5420d56-3504-42a1-9763-5dad129e7a10\") " pod="calico-system/calico-node-x9q57" Jul 11 
00:23:24.094333 kubelet[2662]: I0711 00:23:24.094042 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e5420d56-3504-42a1-9763-5dad129e7a10-cni-net-dir\") pod \"calico-node-x9q57\" (UID: \"e5420d56-3504-42a1-9763-5dad129e7a10\") " pod="calico-system/calico-node-x9q57" Jul 11 00:23:24.094333 kubelet[2662]: I0711 00:23:24.094152 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5420d56-3504-42a1-9763-5dad129e7a10-xtables-lock\") pod \"calico-node-x9q57\" (UID: \"e5420d56-3504-42a1-9763-5dad129e7a10\") " pod="calico-system/calico-node-x9q57" Jul 11 00:23:24.094333 kubelet[2662]: I0711 00:23:24.094322 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e5420d56-3504-42a1-9763-5dad129e7a10-flexvol-driver-host\") pod \"calico-node-x9q57\" (UID: \"e5420d56-3504-42a1-9763-5dad129e7a10\") " pod="calico-system/calico-node-x9q57" Jul 11 00:23:24.094425 kubelet[2662]: I0711 00:23:24.094351 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e5420d56-3504-42a1-9763-5dad129e7a10-policysync\") pod \"calico-node-x9q57\" (UID: \"e5420d56-3504-42a1-9763-5dad129e7a10\") " pod="calico-system/calico-node-x9q57" Jul 11 00:23:24.094425 kubelet[2662]: I0711 00:23:24.094379 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e5420d56-3504-42a1-9763-5dad129e7a10-var-run-calico\") pod \"calico-node-x9q57\" (UID: \"e5420d56-3504-42a1-9763-5dad129e7a10\") " pod="calico-system/calico-node-x9q57" Jul 11 00:23:24.094425 kubelet[2662]: I0711 00:23:24.094404 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn7s7\" (UniqueName: \"kubernetes.io/projected/e5420d56-3504-42a1-9763-5dad129e7a10-kube-api-access-cn7s7\") pod \"calico-node-x9q57\" (UID: \"e5420d56-3504-42a1-9763-5dad129e7a10\") " pod="calico-system/calico-node-x9q57" Jul 11 00:23:24.197740 kubelet[2662]: E0711 00:23:24.197676 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.197740 kubelet[2662]: W0711 00:23:24.197715 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.197740 kubelet[2662]: E0711 00:23:24.197753 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.203263 kubelet[2662]: E0711 00:23:24.203223 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.203487 kubelet[2662]: W0711 00:23:24.203413 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.203487 kubelet[2662]: E0711 00:23:24.203444 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.206280 kubelet[2662]: E0711 00:23:24.206248 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.206280 kubelet[2662]: W0711 00:23:24.206270 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.206393 kubelet[2662]: E0711 00:23:24.206294 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.265870 kubelet[2662]: E0711 00:23:24.265658 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8gdbn" podUID="b0fe6896-2f2f-4a85-81a1-6d288dfe16c3" Jul 11 00:23:24.268728 kubelet[2662]: E0711 00:23:24.268387 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.268728 kubelet[2662]: W0711 00:23:24.268415 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.268728 kubelet[2662]: E0711 00:23:24.268441 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.269081 kubelet[2662]: E0711 00:23:24.268767 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.269081 kubelet[2662]: W0711 00:23:24.268780 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.269081 kubelet[2662]: E0711 00:23:24.268793 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.269270 kubelet[2662]: E0711 00:23:24.269186 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.269270 kubelet[2662]: W0711 00:23:24.269234 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.269270 kubelet[2662]: E0711 00:23:24.269250 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.270264 kubelet[2662]: E0711 00:23:24.269496 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.270264 kubelet[2662]: W0711 00:23:24.269512 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.270264 kubelet[2662]: E0711 00:23:24.269536 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.270264 kubelet[2662]: E0711 00:23:24.269965 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.270264 kubelet[2662]: W0711 00:23:24.269988 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.270264 kubelet[2662]: E0711 00:23:24.270005 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.270675 kubelet[2662]: E0711 00:23:24.270394 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.270675 kubelet[2662]: W0711 00:23:24.270407 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.270675 kubelet[2662]: E0711 00:23:24.270419 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.270675 kubelet[2662]: E0711 00:23:24.270663 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.270675 kubelet[2662]: W0711 00:23:24.270675 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.270881 kubelet[2662]: E0711 00:23:24.270687 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.270942 kubelet[2662]: E0711 00:23:24.270928 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.270942 kubelet[2662]: W0711 00:23:24.270940 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.271037 kubelet[2662]: E0711 00:23:24.270953 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.272073 kubelet[2662]: E0711 00:23:24.271152 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:24.272073 kubelet[2662]: E0711 00:23:24.271761 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.272073 kubelet[2662]: W0711 00:23:24.271786 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.272073 kubelet[2662]: E0711 00:23:24.271808 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.273607 kubelet[2662]: E0711 00:23:24.272730 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.273607 kubelet[2662]: W0711 00:23:24.272741 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.273607 kubelet[2662]: E0711 00:23:24.272752 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.274407 kubelet[2662]: E0711 00:23:24.273916 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.274407 kubelet[2662]: W0711 00:23:24.273932 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.274407 kubelet[2662]: E0711 00:23:24.274163 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.275133 kubelet[2662]: E0711 00:23:24.274740 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.275133 kubelet[2662]: W0711 00:23:24.274755 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.275133 kubelet[2662]: E0711 00:23:24.274768 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.275133 kubelet[2662]: E0711 00:23:24.275016 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.275133 kubelet[2662]: W0711 00:23:24.275026 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.275133 kubelet[2662]: E0711 00:23:24.275065 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.275426 containerd[1582]: time="2025-07-11T00:23:24.274950715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-644c4bdbd9-q5slx,Uid:9d71470c-306d-4dee-8960-e8f4ad70f78c,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:24.275987 kubelet[2662]: E0711 00:23:24.275321 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.275987 kubelet[2662]: W0711 00:23:24.275331 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.275987 kubelet[2662]: E0711 00:23:24.275340 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.275987 kubelet[2662]: E0711 00:23:24.275590 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.275987 kubelet[2662]: W0711 00:23:24.275602 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.275987 kubelet[2662]: E0711 00:23:24.275649 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.275987 kubelet[2662]: E0711 00:23:24.275989 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.276264 kubelet[2662]: W0711 00:23:24.276011 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.276264 kubelet[2662]: E0711 00:23:24.276036 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.276498 kubelet[2662]: E0711 00:23:24.276450 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.276498 kubelet[2662]: W0711 00:23:24.276468 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.276498 kubelet[2662]: E0711 00:23:24.276484 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.276922 kubelet[2662]: E0711 00:23:24.276908 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.276953 kubelet[2662]: W0711 00:23:24.276923 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.276953 kubelet[2662]: E0711 00:23:24.276937 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.277983 kubelet[2662]: E0711 00:23:24.277940 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.278048 kubelet[2662]: W0711 00:23:24.277978 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.278048 kubelet[2662]: E0711 00:23:24.278017 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.278382 kubelet[2662]: E0711 00:23:24.278364 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.278382 kubelet[2662]: W0711 00:23:24.278380 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.278451 kubelet[2662]: E0711 00:23:24.278393 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.297092 kubelet[2662]: E0711 00:23:24.297008 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.297092 kubelet[2662]: W0711 00:23:24.297033 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.297092 kubelet[2662]: E0711 00:23:24.297061 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.297988 kubelet[2662]: I0711 00:23:24.297106 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b0fe6896-2f2f-4a85-81a1-6d288dfe16c3-varrun\") pod \"csi-node-driver-8gdbn\" (UID: \"b0fe6896-2f2f-4a85-81a1-6d288dfe16c3\") " pod="calico-system/csi-node-driver-8gdbn" Jul 11 00:23:24.297988 kubelet[2662]: E0711 00:23:24.297616 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.297988 kubelet[2662]: W0711 00:23:24.297650 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.297988 kubelet[2662]: E0711 00:23:24.297672 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.297988 kubelet[2662]: I0711 00:23:24.297693 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4vf8\" (UniqueName: \"kubernetes.io/projected/b0fe6896-2f2f-4a85-81a1-6d288dfe16c3-kube-api-access-l4vf8\") pod \"csi-node-driver-8gdbn\" (UID: \"b0fe6896-2f2f-4a85-81a1-6d288dfe16c3\") " pod="calico-system/csi-node-driver-8gdbn" Jul 11 00:23:24.298179 kubelet[2662]: E0711 00:23:24.298025 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.298179 kubelet[2662]: W0711 00:23:24.298061 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.298179 kubelet[2662]: E0711 00:23:24.298083 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.298179 kubelet[2662]: I0711 00:23:24.298101 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b0fe6896-2f2f-4a85-81a1-6d288dfe16c3-socket-dir\") pod \"csi-node-driver-8gdbn\" (UID: \"b0fe6896-2f2f-4a85-81a1-6d288dfe16c3\") " pod="calico-system/csi-node-driver-8gdbn" Jul 11 00:23:24.298502 kubelet[2662]: E0711 00:23:24.298483 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.298502 kubelet[2662]: W0711 00:23:24.298500 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.298589 kubelet[2662]: E0711 00:23:24.298532 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.298803 kubelet[2662]: E0711 00:23:24.298785 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.298803 kubelet[2662]: W0711 00:23:24.298799 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.298871 kubelet[2662]: E0711 00:23:24.298816 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.299083 kubelet[2662]: E0711 00:23:24.299067 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.299083 kubelet[2662]: W0711 00:23:24.299081 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.299144 kubelet[2662]: E0711 00:23:24.299097 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.299364 kubelet[2662]: E0711 00:23:24.299347 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.299364 kubelet[2662]: W0711 00:23:24.299362 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.299436 kubelet[2662]: E0711 00:23:24.299379 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.299666 kubelet[2662]: E0711 00:23:24.299626 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.299706 kubelet[2662]: W0711 00:23:24.299665 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.299706 kubelet[2662]: E0711 00:23:24.299685 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.300034 kubelet[2662]: E0711 00:23:24.299969 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.300034 kubelet[2662]: W0711 00:23:24.299986 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.300457 kubelet[2662]: E0711 00:23:24.300166 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.300457 kubelet[2662]: E0711 00:23:24.300384 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.300457 kubelet[2662]: W0711 00:23:24.300396 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.300457 kubelet[2662]: E0711 00:23:24.300408 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.300457 kubelet[2662]: I0711 00:23:24.300433 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0fe6896-2f2f-4a85-81a1-6d288dfe16c3-kubelet-dir\") pod \"csi-node-driver-8gdbn\" (UID: \"b0fe6896-2f2f-4a85-81a1-6d288dfe16c3\") " pod="calico-system/csi-node-driver-8gdbn" Jul 11 00:23:24.300789 kubelet[2662]: E0711 00:23:24.300770 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.300789 kubelet[2662]: W0711 00:23:24.300787 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.300992 kubelet[2662]: E0711 00:23:24.300974 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.301036 kubelet[2662]: I0711 00:23:24.301012 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b0fe6896-2f2f-4a85-81a1-6d288dfe16c3-registration-dir\") pod \"csi-node-driver-8gdbn\" (UID: \"b0fe6896-2f2f-4a85-81a1-6d288dfe16c3\") " pod="calico-system/csi-node-driver-8gdbn" Jul 11 00:23:24.301299 kubelet[2662]: E0711 00:23:24.301236 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.301299 kubelet[2662]: W0711 00:23:24.301288 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.301424 kubelet[2662]: E0711 00:23:24.301332 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.301707 kubelet[2662]: E0711 00:23:24.301676 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.301707 kubelet[2662]: W0711 00:23:24.301691 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.301801 kubelet[2662]: E0711 00:23:24.301713 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.302033 kubelet[2662]: E0711 00:23:24.302012 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.302033 kubelet[2662]: W0711 00:23:24.302027 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.302148 kubelet[2662]: E0711 00:23:24.302040 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.302743 kubelet[2662]: E0711 00:23:24.302314 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.302743 kubelet[2662]: W0711 00:23:24.302330 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.302743 kubelet[2662]: E0711 00:23:24.302341 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.316048 containerd[1582]: time="2025-07-11T00:23:24.315860195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:23:24.316048 containerd[1582]: time="2025-07-11T00:23:24.315964161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:23:24.316048 containerd[1582]: time="2025-07-11T00:23:24.315979780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:24.316414 containerd[1582]: time="2025-07-11T00:23:24.316242885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:24.350894 containerd[1582]: time="2025-07-11T00:23:24.350622597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x9q57,Uid:e5420d56-3504-42a1-9763-5dad129e7a10,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:24.395916 containerd[1582]: time="2025-07-11T00:23:24.395124478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:23:24.395916 containerd[1582]: time="2025-07-11T00:23:24.395264852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:23:24.395916 containerd[1582]: time="2025-07-11T00:23:24.395282786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:24.395916 containerd[1582]: time="2025-07-11T00:23:24.395426927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:24.405027 kubelet[2662]: E0711 00:23:24.404545 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.405027 kubelet[2662]: W0711 00:23:24.405022 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.405258 kubelet[2662]: E0711 00:23:24.405059 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.406353 kubelet[2662]: E0711 00:23:24.406302 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.406353 kubelet[2662]: W0711 00:23:24.406342 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.406496 kubelet[2662]: E0711 00:23:24.406392 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.406966 kubelet[2662]: E0711 00:23:24.406942 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.407875 kubelet[2662]: W0711 00:23:24.406960 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.408062 kubelet[2662]: E0711 00:23:24.408034 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.408480 kubelet[2662]: E0711 00:23:24.408455 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.408560 kubelet[2662]: W0711 00:23:24.408478 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.408621 kubelet[2662]: E0711 00:23:24.408601 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.409406 kubelet[2662]: E0711 00:23:24.409355 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.409406 kubelet[2662]: W0711 00:23:24.409373 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.409659 kubelet[2662]: E0711 00:23:24.409548 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.409724 kubelet[2662]: E0711 00:23:24.409691 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.409724 kubelet[2662]: W0711 00:23:24.409706 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.409876 kubelet[2662]: E0711 00:23:24.409814 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.410069 kubelet[2662]: E0711 00:23:24.410048 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.410069 kubelet[2662]: W0711 00:23:24.410062 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.410260 kubelet[2662]: E0711 00:23:24.410161 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.410306 kubelet[2662]: E0711 00:23:24.410285 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.410306 kubelet[2662]: W0711 00:23:24.410294 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.410402 kubelet[2662]: E0711 00:23:24.410386 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.410551 kubelet[2662]: E0711 00:23:24.410527 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.410551 kubelet[2662]: W0711 00:23:24.410540 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.410596 kubelet[2662]: E0711 00:23:24.410552 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.410821 kubelet[2662]: E0711 00:23:24.410798 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.410821 kubelet[2662]: W0711 00:23:24.410813 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.410920 kubelet[2662]: E0711 00:23:24.410841 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.411363 kubelet[2662]: E0711 00:23:24.411337 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.411363 kubelet[2662]: W0711 00:23:24.411350 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.411678 kubelet[2662]: E0711 00:23:24.411497 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.412114 containerd[1582]: time="2025-07-11T00:23:24.412075771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-644c4bdbd9-q5slx,Uid:9d71470c-306d-4dee-8960-e8f4ad70f78c,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fbec13ea78aebc8235b8683da9b35397f37b101a882c6eab9746db22be91bcc\"" Jul 11 00:23:24.412309 kubelet[2662]: E0711 00:23:24.412262 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.412309 kubelet[2662]: W0711 00:23:24.412284 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.412487 kubelet[2662]: E0711 00:23:24.412467 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.413100 kubelet[2662]: E0711 00:23:24.413072 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.413100 kubelet[2662]: W0711 00:23:24.413090 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.413289 kubelet[2662]: E0711 00:23:24.413268 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.413598 kubelet[2662]: E0711 00:23:24.413578 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.413598 kubelet[2662]: W0711 00:23:24.413593 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.413780 kubelet[2662]: E0711 00:23:24.413763 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.414356 kubelet[2662]: E0711 00:23:24.414330 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.414356 kubelet[2662]: W0711 00:23:24.414345 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.414506 kubelet[2662]: E0711 00:23:24.414488 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:24.415876 kubelet[2662]: E0711 00:23:24.415268 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.415876 kubelet[2662]: E0711 00:23:24.415682 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.415876 kubelet[2662]: W0711 00:23:24.415695 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.415876 kubelet[2662]: E0711 00:23:24.415846 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.416276 kubelet[2662]: E0711 00:23:24.416254 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.416276 kubelet[2662]: W0711 00:23:24.416271 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.416539 kubelet[2662]: E0711 00:23:24.416401 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.418241 containerd[1582]: time="2025-07-11T00:23:24.416695133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 00:23:24.418302 kubelet[2662]: E0711 00:23:24.416840 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.418302 kubelet[2662]: W0711 00:23:24.416870 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.418302 kubelet[2662]: E0711 00:23:24.417013 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.418302 kubelet[2662]: E0711 00:23:24.417552 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.418302 kubelet[2662]: W0711 00:23:24.417567 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.418302 kubelet[2662]: E0711 00:23:24.417647 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.418302 kubelet[2662]: E0711 00:23:24.418027 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.418302 kubelet[2662]: W0711 00:23:24.418073 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.418302 kubelet[2662]: E0711 00:23:24.418182 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.418507 kubelet[2662]: E0711 00:23:24.418485 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.418507 kubelet[2662]: W0711 00:23:24.418497 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.419177 kubelet[2662]: E0711 00:23:24.418608 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.419177 kubelet[2662]: E0711 00:23:24.418994 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.419177 kubelet[2662]: W0711 00:23:24.419034 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.419177 kubelet[2662]: E0711 00:23:24.419138 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.419804 kubelet[2662]: E0711 00:23:24.419569 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.419804 kubelet[2662]: W0711 00:23:24.419617 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.419804 kubelet[2662]: E0711 00:23:24.419713 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.420617 kubelet[2662]: E0711 00:23:24.420592 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.420900 kubelet[2662]: W0711 00:23:24.420683 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.420900 kubelet[2662]: E0711 00:23:24.420724 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.421079 kubelet[2662]: E0711 00:23:24.421035 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.421079 kubelet[2662]: W0711 00:23:24.421048 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.421079 kubelet[2662]: E0711 00:23:24.421058 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:24.423473 kubelet[2662]: E0711 00:23:24.423435 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:24.423473 kubelet[2662]: W0711 00:23:24.423459 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:24.423593 kubelet[2662]: E0711 00:23:24.423480 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:24.452315 containerd[1582]: time="2025-07-11T00:23:24.452264705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x9q57,Uid:e5420d56-3504-42a1-9763-5dad129e7a10,Namespace:calico-system,Attempt:0,} returns sandbox id \"e066fa0ec53a1ead63b2e99a4153fe37ae22dda2eea8efb6db2b87f210fbb22d\"" Jul 11 00:23:25.467157 kubelet[2662]: E0711 00:23:25.467078 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8gdbn" podUID="b0fe6896-2f2f-4a85-81a1-6d288dfe16c3" Jul 11 00:23:26.053362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount421931748.mount: Deactivated successfully. Jul 11 00:23:26.490954 containerd[1582]: time="2025-07-11T00:23:26.490888645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:26.491767 containerd[1582]: time="2025-07-11T00:23:26.491704881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 11 00:23:26.493081 containerd[1582]: time="2025-07-11T00:23:26.493055993Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:26.495585 containerd[1582]: time="2025-07-11T00:23:26.495528454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:26.496210 containerd[1582]: time="2025-07-11T00:23:26.496148300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.079399405s" Jul 11 00:23:26.496296 containerd[1582]: time="2025-07-11T00:23:26.496211599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 11 00:23:26.498729 containerd[1582]: time="2025-07-11T00:23:26.498684321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 11 00:23:26.527320 containerd[1582]: time="2025-07-11T00:23:26.526866086Z" level=info msg="CreateContainer within sandbox \"8fbec13ea78aebc8235b8683da9b35397f37b101a882c6eab9746db22be91bcc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 11 00:23:26.839339 containerd[1582]: time="2025-07-11T00:23:26.839145707Z" level=info msg="CreateContainer within sandbox \"8fbec13ea78aebc8235b8683da9b35397f37b101a882c6eab9746db22be91bcc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9456a8b821c4a096ab5b208d95865100a46a44b86c1a719efd82e387075de839\"" Jul 11 00:23:26.840943 containerd[1582]: time="2025-07-11T00:23:26.840906108Z" level=info msg="StartContainer for \"9456a8b821c4a096ab5b208d95865100a46a44b86c1a719efd82e387075de839\"" Jul 11 00:23:27.201665 containerd[1582]: time="2025-07-11T00:23:27.201588783Z" level=info 
msg="StartContainer for \"9456a8b821c4a096ab5b208d95865100a46a44b86c1a719efd82e387075de839\" returns successfully" Jul 11 00:23:27.469049 kubelet[2662]: E0711 00:23:27.468786 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8gdbn" podUID="b0fe6896-2f2f-4a85-81a1-6d288dfe16c3" Jul 11 00:23:27.552013 kubelet[2662]: E0711 00:23:27.551950 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:27.571801 kubelet[2662]: I0711 00:23:27.571726 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-644c4bdbd9-q5slx" podStartSLOduration=2.488638797 podStartE2EDuration="4.571705983s" podCreationTimestamp="2025-07-11 00:23:23 +0000 UTC" firstStartedPulling="2025-07-11 00:23:24.415436305 +0000 UTC m=+18.097923818" lastFinishedPulling="2025-07-11 00:23:26.498503491 +0000 UTC m=+20.180991004" observedRunningTime="2025-07-11 00:23:27.571330267 +0000 UTC m=+21.253817780" watchObservedRunningTime="2025-07-11 00:23:27.571705983 +0000 UTC m=+21.254193486" Jul 11 00:23:27.606236 kubelet[2662]: E0711 00:23:27.606141 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.606464 kubelet[2662]: W0711 00:23:27.606190 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.606464 kubelet[2662]: E0711 00:23:27.606405 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.608300 kubelet[2662]: E0711 00:23:27.608276 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.608300 kubelet[2662]: W0711 00:23:27.608295 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.608400 kubelet[2662]: E0711 00:23:27.608310 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.608876 kubelet[2662]: E0711 00:23:27.608807 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.608876 kubelet[2662]: W0711 00:23:27.608848 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.608876 kubelet[2662]: E0711 00:23:27.608881 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:27.609328 kubelet[2662]: E0711 00:23:27.609305 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.609328 kubelet[2662]: W0711 00:23:27.609325 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.609432 kubelet[2662]: E0711 00:23:27.609339 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.609678 kubelet[2662]: E0711 00:23:27.609647 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.609678 kubelet[2662]: W0711 00:23:27.609668 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.609678 kubelet[2662]: E0711 00:23:27.609681 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.609925 kubelet[2662]: E0711 00:23:27.609900 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.609925 kubelet[2662]: W0711 00:23:27.609916 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.609925 kubelet[2662]: E0711 00:23:27.609927 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.610179 kubelet[2662]: E0711 00:23:27.610158 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.610273 kubelet[2662]: W0711 00:23:27.610174 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.610273 kubelet[2662]: E0711 00:23:27.610224 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.610533 kubelet[2662]: E0711 00:23:27.610501 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.610533 kubelet[2662]: W0711 00:23:27.610516 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.610533 kubelet[2662]: E0711 00:23:27.610526 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:27.610773 kubelet[2662]: E0711 00:23:27.610746 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.610773 kubelet[2662]: W0711 00:23:27.610758 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.610773 kubelet[2662]: E0711 00:23:27.610767 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.611027 kubelet[2662]: E0711 00:23:27.610956 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.611027 kubelet[2662]: W0711 00:23:27.610965 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.611027 kubelet[2662]: E0711 00:23:27.610973 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.611379 kubelet[2662]: E0711 00:23:27.611351 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.611379 kubelet[2662]: W0711 00:23:27.611368 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.611514 kubelet[2662]: E0711 00:23:27.611382 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.611650 kubelet[2662]: E0711 00:23:27.611622 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.611650 kubelet[2662]: W0711 00:23:27.611636 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.611650 kubelet[2662]: E0711 00:23:27.611648 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.611949 kubelet[2662]: E0711 00:23:27.611920 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.611949 kubelet[2662]: W0711 00:23:27.611933 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.611949 kubelet[2662]: E0711 00:23:27.611947 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:27.612249 kubelet[2662]: E0711 00:23:27.612231 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.612249 kubelet[2662]: W0711 00:23:27.612244 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.612334 kubelet[2662]: E0711 00:23:27.612256 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.612526 kubelet[2662]: E0711 00:23:27.612507 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.612526 kubelet[2662]: W0711 00:23:27.612520 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.612606 kubelet[2662]: E0711 00:23:27.612531 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.633105 kubelet[2662]: E0711 00:23:27.633042 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.633105 kubelet[2662]: W0711 00:23:27.633073 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.633105 kubelet[2662]: E0711 00:23:27.633106 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.633579 kubelet[2662]: E0711 00:23:27.633539 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.633579 kubelet[2662]: W0711 00:23:27.633555 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.633579 kubelet[2662]: E0711 00:23:27.633573 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.633877 kubelet[2662]: E0711 00:23:27.633847 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.633877 kubelet[2662]: W0711 00:23:27.633860 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.633877 kubelet[2662]: E0711 00:23:27.633875 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:27.634169 kubelet[2662]: E0711 00:23:27.634133 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.634169 kubelet[2662]: W0711 00:23:27.634151 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.634169 kubelet[2662]: E0711 00:23:27.634170 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.634437 kubelet[2662]: E0711 00:23:27.634415 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.634437 kubelet[2662]: W0711 00:23:27.634427 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.634529 kubelet[2662]: E0711 00:23:27.634444 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.634694 kubelet[2662]: E0711 00:23:27.634672 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.634694 kubelet[2662]: W0711 00:23:27.634684 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.634780 kubelet[2662]: E0711 00:23:27.634701 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.634994 kubelet[2662]: E0711 00:23:27.634975 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.634994 kubelet[2662]: W0711 00:23:27.634987 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.635097 kubelet[2662]: E0711 00:23:27.635031 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.635252 kubelet[2662]: E0711 00:23:27.635236 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.635252 kubelet[2662]: W0711 00:23:27.635248 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.635338 kubelet[2662]: E0711 00:23:27.635280 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:27.635514 kubelet[2662]: E0711 00:23:27.635494 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.635514 kubelet[2662]: W0711 00:23:27.635507 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.635590 kubelet[2662]: E0711 00:23:27.635525 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.635772 kubelet[2662]: E0711 00:23:27.635754 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.635772 kubelet[2662]: W0711 00:23:27.635766 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.635839 kubelet[2662]: E0711 00:23:27.635782 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.636042 kubelet[2662]: E0711 00:23:27.636021 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.636042 kubelet[2662]: W0711 00:23:27.636039 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.636123 kubelet[2662]: E0711 00:23:27.636056 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.636311 kubelet[2662]: E0711 00:23:27.636293 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.636311 kubelet[2662]: W0711 00:23:27.636306 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.636395 kubelet[2662]: E0711 00:23:27.636322 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.636586 kubelet[2662]: E0711 00:23:27.636568 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.636586 kubelet[2662]: W0711 00:23:27.636580 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.636671 kubelet[2662]: E0711 00:23:27.636596 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:23:27.636849 kubelet[2662]: E0711 00:23:27.636831 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.636849 kubelet[2662]: W0711 00:23:27.636843 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.636934 kubelet[2662]: E0711 00:23:27.636859 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.637171 kubelet[2662]: E0711 00:23:27.637152 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.637171 kubelet[2662]: W0711 00:23:27.637168 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.637425 kubelet[2662]: E0711 00:23:27.637186 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.637490 kubelet[2662]: E0711 00:23:27.637468 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.637490 kubelet[2662]: W0711 00:23:27.637479 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.637545 kubelet[2662]: E0711 00:23:27.637495 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.637826 kubelet[2662]: E0711 00:23:27.637797 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.637826 kubelet[2662]: W0711 00:23:27.637814 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.637826 kubelet[2662]: E0711 00:23:27.637827 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:23:27.638098 kubelet[2662]: E0711 00:23:27.638068 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:23:27.638098 kubelet[2662]: W0711 00:23:27.638087 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:23:27.638098 kubelet[2662]: E0711 00:23:27.638099 2662 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
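The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is the gap from podCreationTimestamp to the observed running time, and podStartSLOduration is that end-to-end figure minus the image-pull window (lastFinishedPulling - firstStartedPulling). A short Go sketch that re-derives both numbers from the logged timestamps (using the watchObservedRunningTime value and dropping the m=+... monotonic annotations; illustrative, not kubelet's code):

    package main

    import (
        "fmt"
        "time"
    )

    // layout matches the wall-clock portion of the kubelet timestamps above.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-07-11 00:23:23 +0000 UTC")
        firstPull := mustParse("2025-07-11 00:23:24.415436305 +0000 UTC")
        lastPull := mustParse("2025-07-11 00:23:26.498503491 +0000 UTC")
        running := mustParse("2025-07-11 00:23:27.571705983 +0000 UTC")

        e2e := running.Sub(created)     // 4.571705983s, the logged podStartE2EDuration
        pull := lastPull.Sub(firstPull) // 2.083067186s spent pulling images
        fmt.Println(e2e, e2e-pull)      // 4.571705983s 2.488638797s (the logged SLO duration)
    }
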
Jul 11 00:23:28.370486 containerd[1582]: time="2025-07-11T00:23:28.370392613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:28.371830 containerd[1582]: time="2025-07-11T00:23:28.371793087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 11 00:23:28.373113 containerd[1582]: time="2025-07-11T00:23:28.372999045Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:28.375698 containerd[1582]: time="2025-07-11T00:23:28.375654798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:28.376423 containerd[1582]: time="2025-07-11T00:23:28.376388809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.877666216s" Jul 11 00:23:28.376502 containerd[1582]: time="2025-07-11T00:23:28.376428994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 11 00:23:28.378601 containerd[1582]: time="2025-07-11T00:23:28.378560873Z" level=info msg="CreateContainer within sandbox \"e066fa0ec53a1ead63b2e99a4153fe37ae22dda2eea8efb6db2b87f210fbb22d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 00:23:28.400844 containerd[1582]: time="2025-07-11T00:23:28.400778693Z" level=info msg="CreateContainer within sandbox \"e066fa0ec53a1ead63b2e99a4153fe37ae22dda2eea8efb6db2b87f210fbb22d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d126083665dd10f46109a389b778722fe5b18e1ce299b9f560f43a415d2b8474\"" Jul 11 00:23:28.401634 containerd[1582]: time="2025-07-11T00:23:28.401565974Z" level=info msg="StartContainer for \"d126083665dd10f46109a389b778722fe5b18e1ce299b9f560f43a415d2b8474\"" Jul 11 00:23:28.487546 containerd[1582]: time="2025-07-11T00:23:28.487483038Z" level=info msg="StartContainer for \"d126083665dd10f46109a389b778722fe5b18e1ce299b9f560f43a415d2b8474\" returns successfully" Jul 11 00:23:28.526241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d126083665dd10f46109a389b778722fe5b18e1ce299b9f560f43a415d2b8474-rootfs.mount: Deactivated successfully. 
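The flexvol-driver container created here from the pod2daemon-flexvol image is, in stock Calico manifests, the init container that copies a uds driver binary into the nodeagent~uds plugin directory, which is exactly the executable kubelet's FlexVolume prober was failing to find in the errors above. The contract behind those errors: kubelet execs the driver with a command such as init and unmarshals whatever the binary prints to stdout as JSON, so a missing binary means empty output and "unexpected end of JSON input". A minimal conforming driver, sketched in Go (illustrative only, not Calico's actual uds implementation):

    package main

    import (
        "encoding/json"
        "os"
    )

    // result is the status object every FlexVolume call must print to
    // stdout; kubelet's driver-call.go unmarshals it.
    type result struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func reply(r result) {
        json.NewEncoder(os.Stdout).Encode(r)
    }

    func main() {
        if len(os.Args) < 2 {
            reply(result{Status: "Failure", Message: "usage: uds <init|mount|unmount|...>"})
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // Advertise that this driver does not implement attach/detach.
            reply(result{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        default:
            // Unimplemented calls must still return valid JSON.
            reply(result{Status: "Not supported"})
        }
    }

Once a binary like this is in place, each probe returns a JSON status and the driver-call.go/plugins.go error triplets stop. The "shim disconnected"/"cleaning up dead shim" messages just below are containerd tearing down this short-lived container after it exits normally.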
Jul 11 00:23:28.544402 containerd[1582]: time="2025-07-11T00:23:28.542329445Z" level=info msg="shim disconnected" id=d126083665dd10f46109a389b778722fe5b18e1ce299b9f560f43a415d2b8474 namespace=k8s.io Jul 11 00:23:28.544711 containerd[1582]: time="2025-07-11T00:23:28.544409798Z" level=warning msg="cleaning up after shim disconnected" id=d126083665dd10f46109a389b778722fe5b18e1ce299b9f560f43a415d2b8474 namespace=k8s.io Jul 11 00:23:28.544711 containerd[1582]: time="2025-07-11T00:23:28.544445184Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:23:28.556993 kubelet[2662]: I0711 00:23:28.555639 2662 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:23:28.556993 kubelet[2662]: E0711 00:23:28.556110 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:29.466613 kubelet[2662]: E0711 00:23:29.466507 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8gdbn" podUID="b0fe6896-2f2f-4a85-81a1-6d288dfe16c3" Jul 11 00:23:29.560874 containerd[1582]: time="2025-07-11T00:23:29.560406638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 00:23:31.466662 kubelet[2662]: E0711 00:23:31.466581 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8gdbn" podUID="b0fe6896-2f2f-4a85-81a1-6d288dfe16c3" Jul 11 00:23:32.780691 kubelet[2662]: I0711 00:23:32.780633 2662 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:23:32.781343 kubelet[2662]: E0711 00:23:32.781126 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:33.048749 containerd[1582]: time="2025-07-11T00:23:33.048320991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:33.050374 containerd[1582]: time="2025-07-11T00:23:33.050311893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 11 00:23:33.050732 containerd[1582]: time="2025-07-11T00:23:33.050701715Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:33.054270 containerd[1582]: time="2025-07-11T00:23:33.054161126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:33.055185 containerd[1582]: time="2025-07-11T00:23:33.055139675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.494673475s" Jul 11 00:23:33.055185 containerd[1582]: time="2025-07-11T00:23:33.055176504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 11 00:23:33.058078 containerd[1582]: time="2025-07-11T00:23:33.058048500Z" level=info msg="CreateContainer within sandbox \"e066fa0ec53a1ead63b2e99a4153fe37ae22dda2eea8efb6db2b87f210fbb22d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 00:23:33.077184 containerd[1582]: time="2025-07-11T00:23:33.077107102Z" level=info msg="CreateContainer within sandbox \"e066fa0ec53a1ead63b2e99a4153fe37ae22dda2eea8efb6db2b87f210fbb22d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d5eda148bb8a510ff65fdfb6b2db00c1afa621d597e03cf5d4ecea7f17b0d6ee\"" Jul 11 00:23:33.077738 containerd[1582]: time="2025-07-11T00:23:33.077705907Z" level=info msg="StartContainer for \"d5eda148bb8a510ff65fdfb6b2db00c1afa621d597e03cf5d4ecea7f17b0d6ee\"" Jul 11 00:23:33.317504 containerd[1582]: time="2025-07-11T00:23:33.317296663Z" level=info msg="StartContainer for \"d5eda148bb8a510ff65fdfb6b2db00c1afa621d597e03cf5d4ecea7f17b0d6ee\" returns successfully" Jul 11 00:23:33.467291 kubelet[2662]: E0711 00:23:33.467186 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8gdbn" podUID="b0fe6896-2f2f-4a85-81a1-6d288dfe16c3" Jul 11 00:23:33.568679 kubelet[2662]: E0711 00:23:33.568532 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:35.466785 kubelet[2662]: E0711 00:23:35.466646 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8gdbn" podUID="b0fe6896-2f2f-4a85-81a1-6d288dfe16c3" Jul 11 00:23:35.744942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5eda148bb8a510ff65fdfb6b2db00c1afa621d597e03cf5d4ecea7f17b0d6ee-rootfs.mount: Deactivated successfully. 
Jul 11 00:23:35.751104 containerd[1582]: time="2025-07-11T00:23:35.751035081Z" level=info msg="shim disconnected" id=d5eda148bb8a510ff65fdfb6b2db00c1afa621d597e03cf5d4ecea7f17b0d6ee namespace=k8s.io Jul 11 00:23:35.751104 containerd[1582]: time="2025-07-11T00:23:35.751099843Z" level=warning msg="cleaning up after shim disconnected" id=d5eda148bb8a510ff65fdfb6b2db00c1afa621d597e03cf5d4ecea7f17b0d6ee namespace=k8s.io Jul 11 00:23:35.751104 containerd[1582]: time="2025-07-11T00:23:35.751110132Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:23:35.811238 kubelet[2662]: I0711 00:23:35.811177 2662 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 11 00:23:35.893257 kubelet[2662]: I0711 00:23:35.889989 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00f725d3-9859-45cd-81ce-7316e621f780-config\") pod \"goldmane-58fd7646b9-vvl26\" (UID: \"00f725d3-9859-45cd-81ce-7316e621f780\") " pod="calico-system/goldmane-58fd7646b9-vvl26" Jul 11 00:23:35.893257 kubelet[2662]: I0711 00:23:35.890053 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28fd8700-a2a2-4ca6-a552-c1abce22725b-tigera-ca-bundle\") pod \"calico-kube-controllers-85764bc598-vpq2r\" (UID: \"28fd8700-a2a2-4ca6-a552-c1abce22725b\") " pod="calico-system/calico-kube-controllers-85764bc598-vpq2r" Jul 11 00:23:35.893257 kubelet[2662]: I0711 00:23:35.890083 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/415c8f48-d718-4200-be2e-9918b83dc600-config-volume\") pod \"coredns-7c65d6cfc9-f94tf\" (UID: \"415c8f48-d718-4200-be2e-9918b83dc600\") " pod="kube-system/coredns-7c65d6cfc9-f94tf" Jul 11 00:23:35.893257 kubelet[2662]: I0711 00:23:35.890109 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/00f725d3-9859-45cd-81ce-7316e621f780-goldmane-key-pair\") pod \"goldmane-58fd7646b9-vvl26\" (UID: \"00f725d3-9859-45cd-81ce-7316e621f780\") " pod="calico-system/goldmane-58fd7646b9-vvl26" Jul 11 00:23:35.893257 kubelet[2662]: I0711 00:23:35.890135 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djkz4\" (UniqueName: \"kubernetes.io/projected/00f725d3-9859-45cd-81ce-7316e621f780-kube-api-access-djkz4\") pod \"goldmane-58fd7646b9-vvl26\" (UID: \"00f725d3-9859-45cd-81ce-7316e621f780\") " pod="calico-system/goldmane-58fd7646b9-vvl26" Jul 11 00:23:35.893599 kubelet[2662]: I0711 00:23:35.890162 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/534af8e4-6518-491a-9219-a3b30c552e4b-config-volume\") pod \"coredns-7c65d6cfc9-vqw25\" (UID: \"534af8e4-6518-491a-9219-a3b30c552e4b\") " pod="kube-system/coredns-7c65d6cfc9-vqw25" Jul 11 00:23:35.893599 kubelet[2662]: I0711 00:23:35.890184 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5xql\" (UniqueName: \"kubernetes.io/projected/415c8f48-d718-4200-be2e-9918b83dc600-kube-api-access-s5xql\") pod \"coredns-7c65d6cfc9-f94tf\" (UID: \"415c8f48-d718-4200-be2e-9918b83dc600\") " pod="kube-system/coredns-7c65d6cfc9-f94tf" 
Jul 11 00:23:35.893599 kubelet[2662]: I0711 00:23:35.891963 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7-calico-apiserver-certs\") pod \"calico-apiserver-59dcd5c4d5-t5svc\" (UID: \"a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7\") " pod="calico-apiserver/calico-apiserver-59dcd5c4d5-t5svc" Jul 11 00:23:35.893599 kubelet[2662]: I0711 00:23:35.891995 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nls6\" (UniqueName: \"kubernetes.io/projected/534af8e4-6518-491a-9219-a3b30c552e4b-kube-api-access-5nls6\") pod \"coredns-7c65d6cfc9-vqw25\" (UID: \"534af8e4-6518-491a-9219-a3b30c552e4b\") " pod="kube-system/coredns-7c65d6cfc9-vqw25" Jul 11 00:23:35.893599 kubelet[2662]: I0711 00:23:35.892021 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkv8r\" (UniqueName: \"kubernetes.io/projected/b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad-kube-api-access-bkv8r\") pod \"calico-apiserver-59dcd5c4d5-jfmh4\" (UID: \"b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad\") " pod="calico-apiserver/calico-apiserver-59dcd5c4d5-jfmh4" Jul 11 00:23:35.893750 kubelet[2662]: I0711 00:23:35.892047 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00f725d3-9859-45cd-81ce-7316e621f780-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-vvl26\" (UID: \"00f725d3-9859-45cd-81ce-7316e621f780\") " pod="calico-system/goldmane-58fd7646b9-vvl26" Jul 11 00:23:35.893750 kubelet[2662]: I0711 00:23:35.892080 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6445237e-39b4-491d-b1a4-8b536523e561-whisker-backend-key-pair\") pod \"whisker-645896967f-scc2m\" (UID: \"6445237e-39b4-491d-b1a4-8b536523e561\") " pod="calico-system/whisker-645896967f-scc2m" Jul 11 00:23:35.893750 kubelet[2662]: I0711 00:23:35.892102 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6445237e-39b4-491d-b1a4-8b536523e561-whisker-ca-bundle\") pod \"whisker-645896967f-scc2m\" (UID: \"6445237e-39b4-491d-b1a4-8b536523e561\") " pod="calico-system/whisker-645896967f-scc2m" Jul 11 00:23:35.893750 kubelet[2662]: I0711 00:23:35.892161 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8tww\" (UniqueName: \"kubernetes.io/projected/28fd8700-a2a2-4ca6-a552-c1abce22725b-kube-api-access-v8tww\") pod \"calico-kube-controllers-85764bc598-vpq2r\" (UID: \"28fd8700-a2a2-4ca6-a552-c1abce22725b\") " pod="calico-system/calico-kube-controllers-85764bc598-vpq2r" Jul 11 00:23:35.893750 kubelet[2662]: I0711 00:23:35.892192 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad-calico-apiserver-certs\") pod \"calico-apiserver-59dcd5c4d5-jfmh4\" (UID: \"b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad\") " pod="calico-apiserver/calico-apiserver-59dcd5c4d5-jfmh4" Jul 11 00:23:35.894001 kubelet[2662]: I0711 00:23:35.892240 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-dx97l\" (UniqueName: \"kubernetes.io/projected/6445237e-39b4-491d-b1a4-8b536523e561-kube-api-access-dx97l\") pod \"whisker-645896967f-scc2m\" (UID: \"6445237e-39b4-491d-b1a4-8b536523e561\") " pod="calico-system/whisker-645896967f-scc2m" Jul 11 00:23:35.894001 kubelet[2662]: I0711 00:23:35.892259 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr7r2\" (UniqueName: \"kubernetes.io/projected/a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7-kube-api-access-cr7r2\") pod \"calico-apiserver-59dcd5c4d5-t5svc\" (UID: \"a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7\") " pod="calico-apiserver/calico-apiserver-59dcd5c4d5-t5svc" Jul 11 00:23:36.169741 kubelet[2662]: E0711 00:23:36.169518 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:36.170201 containerd[1582]: time="2025-07-11T00:23:36.170159989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f94tf,Uid:415c8f48-d718-4200-be2e-9918b83dc600,Namespace:kube-system,Attempt:0,}" Jul 11 00:23:36.176689 containerd[1582]: time="2025-07-11T00:23:36.176631635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-vvl26,Uid:00f725d3-9859-45cd-81ce-7316e621f780,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:36.179747 containerd[1582]: time="2025-07-11T00:23:36.179718634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dcd5c4d5-jfmh4,Uid:b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:23:36.182373 containerd[1582]: time="2025-07-11T00:23:36.182334899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85764bc598-vpq2r,Uid:28fd8700-a2a2-4ca6-a552-c1abce22725b,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:36.184835 containerd[1582]: time="2025-07-11T00:23:36.184810869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dcd5c4d5-t5svc,Uid:a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:23:36.186162 kubelet[2662]: E0711 00:23:36.186133 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:36.186488 containerd[1582]: time="2025-07-11T00:23:36.186463393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vqw25,Uid:534af8e4-6518-491a-9219-a3b30c552e4b,Namespace:kube-system,Attempt:0,}" Jul 11 00:23:36.189110 containerd[1582]: time="2025-07-11T00:23:36.189072334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-645896967f-scc2m,Uid:6445237e-39b4-491d-b1a4-8b536523e561,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:36.585228 containerd[1582]: time="2025-07-11T00:23:36.584574298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 00:23:36.629944 containerd[1582]: time="2025-07-11T00:23:36.629873127Z" level=error msg="Failed to destroy network for sandbox \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.634677 containerd[1582]: time="2025-07-11T00:23:36.634628781Z" level=error msg="Failed to destroy network 
for sandbox \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.635901 containerd[1582]: time="2025-07-11T00:23:36.634782629Z" level=error msg="Failed to destroy network for sandbox \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.638239 containerd[1582]: time="2025-07-11T00:23:36.638183859Z" level=error msg="encountered an error cleaning up failed sandbox \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.638327 containerd[1582]: time="2025-07-11T00:23:36.638280540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f94tf,Uid:415c8f48-d718-4200-be2e-9918b83dc600,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.638429 containerd[1582]: time="2025-07-11T00:23:36.638405895Z" level=error msg="encountered an error cleaning up failed sandbox \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.638564 containerd[1582]: time="2025-07-11T00:23:36.638216089Z" level=error msg="encountered an error cleaning up failed sandbox \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.638632 containerd[1582]: time="2025-07-11T00:23:36.634892416Z" level=error msg="Failed to destroy network for sandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.639317 containerd[1582]: time="2025-07-11T00:23:36.639263667Z" level=error msg="encountered an error cleaning up failed sandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.639666 containerd[1582]: time="2025-07-11T00:23:36.639422215Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-59dcd5c4d5-jfmh4,Uid:b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.639666 containerd[1582]: time="2025-07-11T00:23:36.638517024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dcd5c4d5-t5svc,Uid:a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.639666 containerd[1582]: time="2025-07-11T00:23:36.638614577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-645896967f-scc2m,Uid:6445237e-39b4-491d-b1a4-8b536523e561,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.639666 containerd[1582]: time="2025-07-11T00:23:36.634793169Z" level=error msg="Failed to destroy network for sandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.640363 containerd[1582]: time="2025-07-11T00:23:36.640252644Z" level=error msg="encountered an error cleaning up failed sandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.640363 containerd[1582]: time="2025-07-11T00:23:36.640314130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-vvl26,Uid:00f725d3-9859-45cd-81ce-7316e621f780,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.650753 containerd[1582]: time="2025-07-11T00:23:36.650700770Z" level=error msg="Failed to destroy network for sandbox \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.651915 containerd[1582]: time="2025-07-11T00:23:36.651834289Z" level=error msg="encountered an error cleaning up failed sandbox \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.651972 containerd[1582]: time="2025-07-11T00:23:36.651952191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85764bc598-vpq2r,Uid:28fd8700-a2a2-4ca6-a552-c1abce22725b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.654153 kubelet[2662]: E0711 00:23:36.653764 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.654153 kubelet[2662]: E0711 00:23:36.653815 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.654153 kubelet[2662]: E0711 00:23:36.653765 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.654153 kubelet[2662]: E0711 00:23:36.653873 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85764bc598-vpq2r" Jul 11 00:23:36.654848 kubelet[2662]: E0711 00:23:36.653885 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-vvl26" Jul 11 00:23:36.654848 kubelet[2662]: E0711 00:23:36.653896 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-59dcd5c4d5-jfmh4" Jul 11 00:23:36.654848 kubelet[2662]: E0711 00:23:36.653906 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-vvl26" Jul 11 00:23:36.654848 kubelet[2662]: E0711 00:23:36.653927 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59dcd5c4d5-jfmh4" Jul 11 00:23:36.655011 containerd[1582]: time="2025-07-11T00:23:36.654361146Z" level=error msg="Failed to destroy network for sandbox \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.655011 containerd[1582]: time="2025-07-11T00:23:36.654887615Z" level=error msg="encountered an error cleaning up failed sandbox \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.655011 containerd[1582]: time="2025-07-11T00:23:36.654939723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vqw25,Uid:534af8e4-6518-491a-9219-a3b30c552e4b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.655134 kubelet[2662]: E0711 00:23:36.653934 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.655134 kubelet[2662]: E0711 00:23:36.653958 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-645896967f-scc2m" Jul 11 00:23:36.655134 kubelet[2662]: E0711 00:23:36.653977 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-645896967f-scc2m" Jul 11 00:23:36.655313 kubelet[2662]: E0711 00:23:36.653986 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59dcd5c4d5-jfmh4_calico-apiserver(b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59dcd5c4d5-jfmh4_calico-apiserver(b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59dcd5c4d5-jfmh4" podUID="b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad" Jul 11 00:23:36.655313 kubelet[2662]: E0711 00:23:36.653898 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85764bc598-vpq2r" Jul 11 00:23:36.655313 kubelet[2662]: E0711 00:23:36.654013 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-645896967f-scc2m_calico-system(6445237e-39b4-491d-b1a4-8b536523e561)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-645896967f-scc2m_calico-system(6445237e-39b4-491d-b1a4-8b536523e561)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-645896967f-scc2m" podUID="6445237e-39b4-491d-b1a4-8b536523e561" Jul 11 00:23:36.655486 kubelet[2662]: E0711 00:23:36.653968 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-vvl26_calico-system(00f725d3-9859-45cd-81ce-7316e621f780)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-vvl26_calico-system(00f725d3-9859-45cd-81ce-7316e621f780)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-vvl26" podUID="00f725d3-9859-45cd-81ce-7316e621f780" Jul 11 00:23:36.655486 kubelet[2662]: E0711 00:23:36.653762 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.655486 kubelet[2662]: E0711 00:23:36.654048 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85764bc598-vpq2r_calico-system(28fd8700-a2a2-4ca6-a552-c1abce22725b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85764bc598-vpq2r_calico-system(28fd8700-a2a2-4ca6-a552-c1abce22725b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85764bc598-vpq2r" podUID="28fd8700-a2a2-4ca6-a552-c1abce22725b" Jul 11 00:23:36.655643 kubelet[2662]: E0711 00:23:36.654071 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59dcd5c4d5-t5svc" Jul 11 00:23:36.655643 kubelet[2662]: E0711 00:23:36.654089 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59dcd5c4d5-t5svc" Jul 11 00:23:36.655643 kubelet[2662]: E0711 00:23:36.654121 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59dcd5c4d5-t5svc_calico-apiserver(a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59dcd5c4d5-t5svc_calico-apiserver(a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59dcd5c4d5-t5svc" podUID="a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7" Jul 11 00:23:36.655876 kubelet[2662]: E0711 00:23:36.654108 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.655876 kubelet[2662]: E0711 00:23:36.654190 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-f94tf" Jul 11 00:23:36.655876 kubelet[2662]: E0711 00:23:36.654239 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-f94tf" Jul 11 00:23:36.655983 kubelet[2662]: E0711 00:23:36.654319 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-f94tf_kube-system(415c8f48-d718-4200-be2e-9918b83dc600)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-f94tf_kube-system(415c8f48-d718-4200-be2e-9918b83dc600)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-f94tf" podUID="415c8f48-d718-4200-be2e-9918b83dc600" Jul 11 00:23:36.655983 kubelet[2662]: E0711 00:23:36.655168 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:36.655983 kubelet[2662]: E0711 00:23:36.655242 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vqw25" Jul 11 00:23:36.656113 kubelet[2662]: E0711 00:23:36.655282 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vqw25" Jul 11 00:23:36.656113 kubelet[2662]: E0711 00:23:36.655316 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-vqw25_kube-system(534af8e4-6518-491a-9219-a3b30c552e4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-vqw25_kube-system(534af8e4-6518-491a-9219-a3b30c552e4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vqw25" podUID="534af8e4-6518-491a-9219-a3b30c552e4b" Jul 11 00:23:36.747412 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f-shm.mount: Deactivated successfully. Jul 11 00:23:37.471654 containerd[1582]: time="2025-07-11T00:23:37.471602147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8gdbn,Uid:b0fe6896-2f2f-4a85-81a1-6d288dfe16c3,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:37.584000 kubelet[2662]: I0711 00:23:37.583937 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:23:37.588306 kubelet[2662]: I0711 00:23:37.585977 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:23:37.588794 kubelet[2662]: I0711 00:23:37.588766 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:23:37.599938 kubelet[2662]: I0711 00:23:37.599885 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:23:37.610030 containerd[1582]: time="2025-07-11T00:23:37.609947991Z" level=info msg="StopPodSandbox for \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\"" Jul 11 00:23:37.610336 containerd[1582]: time="2025-07-11T00:23:37.610291827Z" level=info msg="StopPodSandbox for \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\"" Jul 11 00:23:37.611919 containerd[1582]: time="2025-07-11T00:23:37.611886471Z" level=info msg="StopPodSandbox for \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\"" Jul 11 00:23:37.614118 containerd[1582]: time="2025-07-11T00:23:37.614066186Z" level=info msg="StopPodSandbox for \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\"" Jul 11 00:23:37.616293 containerd[1582]: time="2025-07-11T00:23:37.615681820Z" level=info msg="Ensure that sandbox 57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9 in task-service has been cleanup successfully" Jul 11 00:23:37.616293 containerd[1582]: time="2025-07-11T00:23:37.615734829Z" level=info msg="Ensure that sandbox 794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f in task-service has been cleanup successfully" Jul 11 00:23:37.616293 containerd[1582]: time="2025-07-11T00:23:37.615839697Z" level=info msg="Ensure that sandbox 67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1 in task-service has been cleanup successfully" Jul 11 00:23:37.616293 containerd[1582]: time="2025-07-11T00:23:37.616139158Z" level=info msg="Ensure that sandbox 4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee in task-service has been cleanup successfully" Jul 11 00:23:37.617518 kubelet[2662]: I0711 00:23:37.617446 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Jul 11 00:23:37.623908 containerd[1582]: time="2025-07-11T00:23:37.609970253Z" level=error msg="Failed to destroy network for sandbox 
\"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:37.623908 containerd[1582]: time="2025-07-11T00:23:37.619343488Z" level=error msg="encountered an error cleaning up failed sandbox \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:37.623908 containerd[1582]: time="2025-07-11T00:23:37.619517795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8gdbn,Uid:b0fe6896-2f2f-4a85-81a1-6d288dfe16c3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:37.625511 kubelet[2662]: E0711 00:23:37.624560 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:37.625511 kubelet[2662]: E0711 00:23:37.624639 2662 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8gdbn" Jul 11 00:23:37.625511 kubelet[2662]: E0711 00:23:37.624670 2662 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8gdbn" Jul 11 00:23:37.625666 containerd[1582]: time="2025-07-11T00:23:37.625383732Z" level=info msg="StopPodSandbox for \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\"" Jul 11 00:23:37.624998 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6-shm.mount: Deactivated successfully. 
Jul 11 00:23:37.625800 kubelet[2662]: E0711 00:23:37.624719 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8gdbn_calico-system(b0fe6896-2f2f-4a85-81a1-6d288dfe16c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8gdbn_calico-system(b0fe6896-2f2f-4a85-81a1-6d288dfe16c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8gdbn" podUID="b0fe6896-2f2f-4a85-81a1-6d288dfe16c3" Jul 11 00:23:37.626983 containerd[1582]: time="2025-07-11T00:23:37.626436770Z" level=info msg="Ensure that sandbox ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896 in task-service has been cleanup successfully" Jul 11 00:23:37.632982 kubelet[2662]: I0711 00:23:37.632923 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Jul 11 00:23:37.633997 containerd[1582]: time="2025-07-11T00:23:37.633951434Z" level=info msg="StopPodSandbox for \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\"" Jul 11 00:23:37.644401 containerd[1582]: time="2025-07-11T00:23:37.644344424Z" level=info msg="Ensure that sandbox 0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5 in task-service has been cleanup successfully" Jul 11 00:23:37.657863 kubelet[2662]: I0711 00:23:37.656933 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Jul 11 00:23:37.659669 containerd[1582]: time="2025-07-11T00:23:37.658975354Z" level=info msg="StopPodSandbox for \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\"" Jul 11 00:23:37.659669 containerd[1582]: time="2025-07-11T00:23:37.659284525Z" level=info msg="Ensure that sandbox c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1 in task-service has been cleanup successfully" Jul 11 00:23:37.700081 containerd[1582]: time="2025-07-11T00:23:37.699909187Z" level=error msg="StopPodSandbox for \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\" failed" error="failed to destroy network for sandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:37.700720 kubelet[2662]: E0711 00:23:37.700676 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:23:37.700948 kubelet[2662]: E0711 00:23:37.700874 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1"} Jul 11 00:23:37.701080 kubelet[2662]: 
E0711 00:23:37.701055 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00f725d3-9859-45cd-81ce-7316e621f780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:23:37.701333 kubelet[2662]: E0711 00:23:37.701303 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00f725d3-9859-45cd-81ce-7316e621f780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-vvl26" podUID="00f725d3-9859-45cd-81ce-7316e621f780" Jul 11 00:23:37.720405 containerd[1582]: time="2025-07-11T00:23:37.720338237Z" level=error msg="StopPodSandbox for \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\" failed" error="failed to destroy network for sandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:37.722344 kubelet[2662]: E0711 00:23:37.721557 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:23:37.722344 kubelet[2662]: E0711 00:23:37.721632 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9"} Jul 11 00:23:37.722344 kubelet[2662]: E0711 00:23:37.721685 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:23:37.722344 kubelet[2662]: E0711 00:23:37.721718 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-59dcd5c4d5-jfmh4" podUID="b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad" Jul 11 00:23:37.722651 containerd[1582]: time="2025-07-11T00:23:37.721658297Z" level=error msg="StopPodSandbox for \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\" failed" error="failed to destroy network for sandbox \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:37.722710 kubelet[2662]: E0711 00:23:37.721914 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Jul 11 00:23:37.722710 kubelet[2662]: E0711 00:23:37.721934 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896"} Jul 11 00:23:37.722710 kubelet[2662]: E0711 00:23:37.721956 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"534af8e4-6518-491a-9219-a3b30c552e4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:23:37.722710 kubelet[2662]: E0711 00:23:37.721972 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"534af8e4-6518-491a-9219-a3b30c552e4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vqw25" podUID="534af8e4-6518-491a-9219-a3b30c552e4b" Jul 11 00:23:37.723924 containerd[1582]: time="2025-07-11T00:23:37.723844523Z" level=error msg="StopPodSandbox for \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\" failed" error="failed to destroy network for sandbox \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:37.724327 kubelet[2662]: E0711 00:23:37.724278 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Jul 11 00:23:37.724404 kubelet[2662]: E0711 00:23:37.724349 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5"} Jul 11 00:23:37.724430 kubelet[2662]: E0711 00:23:37.724403 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28fd8700-a2a2-4ca6-a552-c1abce22725b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:23:37.724494 kubelet[2662]: E0711 00:23:37.724437 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28fd8700-a2a2-4ca6-a552-c1abce22725b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85764bc598-vpq2r" podUID="28fd8700-a2a2-4ca6-a552-c1abce22725b" Jul 11 00:23:37.733131 containerd[1582]: time="2025-07-11T00:23:37.733020457Z" level=error msg="StopPodSandbox for \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\" failed" error="failed to destroy network for sandbox \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:37.733855 kubelet[2662]: E0711 00:23:37.733694 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:23:37.733855 kubelet[2662]: E0711 00:23:37.733754 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee"} Jul 11 00:23:37.733855 kubelet[2662]: E0711 00:23:37.733795 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"415c8f48-d718-4200-be2e-9918b83dc600\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:23:37.733855 kubelet[2662]: E0711 00:23:37.733818 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"415c8f48-d718-4200-be2e-9918b83dc600\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-f94tf" podUID="415c8f48-d718-4200-be2e-9918b83dc600" Jul 11 00:23:37.736541 containerd[1582]: time="2025-07-11T00:23:37.736458074Z" level=error msg="StopPodSandbox for \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\" failed" error="failed to destroy network for sandbox \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:37.736792 kubelet[2662]: E0711 00:23:37.736752 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Jul 11 00:23:37.736792 kubelet[2662]: E0711 00:23:37.736790 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1"} Jul 11 00:23:37.736906 kubelet[2662]: E0711 00:23:37.736820 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6445237e-39b4-491d-b1a4-8b536523e561\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:23:37.736906 kubelet[2662]: E0711 00:23:37.736841 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6445237e-39b4-491d-b1a4-8b536523e561\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-645896967f-scc2m" podUID="6445237e-39b4-491d-b1a4-8b536523e561" Jul 11 00:23:37.741339 containerd[1582]: time="2025-07-11T00:23:37.741288557Z" level=error msg="StopPodSandbox for \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\" failed" error="failed to destroy network for sandbox \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:37.741508 kubelet[2662]: E0711 00:23:37.741462 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:23:37.741573 kubelet[2662]: E0711 00:23:37.741525 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f"} Jul 11 00:23:37.741573 kubelet[2662]: E0711 00:23:37.741566 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:23:37.741681 kubelet[2662]: E0711 00:23:37.741592 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59dcd5c4d5-t5svc" podUID="a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7" Jul 11 00:23:38.664916 kubelet[2662]: I0711 00:23:38.663188 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:23:38.665409 containerd[1582]: time="2025-07-11T00:23:38.664399260Z" level=info msg="StopPodSandbox for \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\"" Jul 11 00:23:38.665409 containerd[1582]: time="2025-07-11T00:23:38.664663115Z" level=info msg="Ensure that sandbox e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6 in task-service has been cleanup successfully" Jul 11 00:23:38.828686 containerd[1582]: time="2025-07-11T00:23:38.828586729Z" level=error msg="StopPodSandbox for \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\" failed" error="failed to destroy network for sandbox \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:38.830959 kubelet[2662]: E0711 00:23:38.829013 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:23:38.830959 kubelet[2662]: E0711 00:23:38.829085 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6"} Jul 11 00:23:38.830959 kubelet[2662]: E0711 00:23:38.829135 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0fe6896-2f2f-4a85-81a1-6d288dfe16c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:23:38.830959 kubelet[2662]: E0711 00:23:38.829173 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0fe6896-2f2f-4a85-81a1-6d288dfe16c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8gdbn" podUID="b0fe6896-2f2f-4a85-81a1-6d288dfe16c3" Jul 11 00:23:42.214810 kernel: hrtimer: interrupt took 46618612 ns Jul 11 00:23:42.987393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975954138.mount: Deactivated successfully. Jul 11 00:23:46.209380 containerd[1582]: time="2025-07-11T00:23:46.209290115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:46.218420 containerd[1582]: time="2025-07-11T00:23:46.218334872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 11 00:23:46.222977 containerd[1582]: time="2025-07-11T00:23:46.222889090Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:46.228946 containerd[1582]: time="2025-07-11T00:23:46.228851221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:46.230482 containerd[1582]: time="2025-07-11T00:23:46.230392864Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 9.645744697s" Jul 11 00:23:46.230482 containerd[1582]: time="2025-07-11T00:23:46.230487030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 11 00:23:46.249641 containerd[1582]: time="2025-07-11T00:23:46.249516649Z" level=info msg="CreateContainer within sandbox \"e066fa0ec53a1ead63b2e99a4153fe37ae22dda2eea8efb6db2b87f210fbb22d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 00:23:47.192749 containerd[1582]: time="2025-07-11T00:23:47.192668614Z" level=info msg="CreateContainer within sandbox 
\"e066fa0ec53a1ead63b2e99a4153fe37ae22dda2eea8efb6db2b87f210fbb22d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"693010dfed5074b49d14605184c8be37da7dfcd900ac0fad15e045f08e9f2fb1\"" Jul 11 00:23:47.193643 containerd[1582]: time="2025-07-11T00:23:47.193592829Z" level=info msg="StartContainer for \"693010dfed5074b49d14605184c8be37da7dfcd900ac0fad15e045f08e9f2fb1\"" Jul 11 00:23:47.406229 containerd[1582]: time="2025-07-11T00:23:47.405449102Z" level=info msg="StartContainer for \"693010dfed5074b49d14605184c8be37da7dfcd900ac0fad15e045f08e9f2fb1\" returns successfully" Jul 11 00:23:47.427991 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 00:23:47.428145 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 11 00:23:48.467949 containerd[1582]: time="2025-07-11T00:23:48.467454992Z" level=info msg="StopPodSandbox for \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\"" Jul 11 00:23:48.491630 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:47138.service - OpenSSH per-connection server daemon (10.0.0.1:47138). Jul 11 00:23:48.514826 containerd[1582]: time="2025-07-11T00:23:48.514764942Z" level=error msg="StopPodSandbox for \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\" failed" error="failed to destroy network for sandbox \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:48.515255 kubelet[2662]: E0711 00:23:48.515143 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:23:48.515753 kubelet[2662]: E0711 00:23:48.515284 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f"} Jul 11 00:23:48.515753 kubelet[2662]: E0711 00:23:48.515341 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:23:48.515753 kubelet[2662]: E0711 00:23:48.515374 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-59dcd5c4d5-t5svc" podUID="a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7" Jul 11 00:23:48.553584 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 47138 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:23:48.555452 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:48.564947 systemd-logind[1554]: New session 8 of user core. Jul 11 00:23:48.577711 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 00:23:49.146388 kubelet[2662]: I0711 00:23:49.145755 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x9q57" podStartSLOduration=3.368023873 podStartE2EDuration="25.145729s" podCreationTimestamp="2025-07-11 00:23:24 +0000 UTC" firstStartedPulling="2025-07-11 00:23:24.45393087 +0000 UTC m=+18.136418383" lastFinishedPulling="2025-07-11 00:23:46.231635997 +0000 UTC m=+39.914123510" observedRunningTime="2025-07-11 00:23:49.142701949 +0000 UTC m=+42.825189482" watchObservedRunningTime="2025-07-11 00:23:49.145729 +0000 UTC m=+42.828216513" Jul 11 00:23:49.227733 sshd[3964]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:49.233817 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:47138.service: Deactivated successfully. Jul 11 00:23:49.236912 systemd-logind[1554]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:23:49.237018 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:23:49.238744 systemd-logind[1554]: Removed session 8. Jul 11 00:23:49.467956 containerd[1582]: time="2025-07-11T00:23:49.467450546Z" level=info msg="StopPodSandbox for \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\"" Jul 11 00:23:49.467956 containerd[1582]: time="2025-07-11T00:23:49.467509407Z" level=info msg="StopPodSandbox for \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\"" Jul 11 00:23:49.467956 containerd[1582]: time="2025-07-11T00:23:49.467471305Z" level=info msg="StopPodSandbox for \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\"" Jul 11 00:23:49.498437 containerd[1582]: time="2025-07-11T00:23:49.497696599Z" level=error msg="StopPodSandbox for \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\" failed" error="failed to destroy network for sandbox \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:49.498620 kubelet[2662]: E0711 00:23:49.497941 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:23:49.498620 kubelet[2662]: E0711 00:23:49.498008 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee"} Jul 11 00:23:49.498620 kubelet[2662]: E0711 00:23:49.498043 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"415c8f48-d718-4200-be2e-9918b83dc600\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:23:49.498620 kubelet[2662]: E0711 00:23:49.498069 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"415c8f48-d718-4200-be2e-9918b83dc600\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-f94tf" podUID="415c8f48-d718-4200-be2e-9918b83dc600" Jul 11 00:23:49.499107 containerd[1582]: time="2025-07-11T00:23:49.499057682Z" level=error msg="StopPodSandbox for \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\" failed" error="failed to destroy network for sandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:49.499404 kubelet[2662]: E0711 00:23:49.499376 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:23:49.499404 kubelet[2662]: E0711 00:23:49.499405 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9"} Jul 11 00:23:49.499488 kubelet[2662]: E0711 00:23:49.499424 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:23:49.499488 kubelet[2662]: E0711 00:23:49.499444 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59dcd5c4d5-jfmh4" podUID="b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad" Jul 11 00:23:49.503706 containerd[1582]: time="2025-07-11T00:23:49.503640713Z" level=error 
msg="StopPodSandbox for \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\" failed" error="failed to destroy network for sandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:49.503877 kubelet[2662]: E0711 00:23:49.503851 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:23:49.503919 kubelet[2662]: E0711 00:23:49.503879 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1"} Jul 11 00:23:49.503919 kubelet[2662]: E0711 00:23:49.503901 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00f725d3-9859-45cd-81ce-7316e621f780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:23:49.504011 kubelet[2662]: E0711 00:23:49.503918 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00f725d3-9859-45cd-81ce-7316e621f780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-vvl26" podUID="00f725d3-9859-45cd-81ce-7316e621f780" Jul 11 00:23:50.467499 containerd[1582]: time="2025-07-11T00:23:50.467437358Z" level=info msg="StopPodSandbox for \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\"" Jul 11 00:23:50.498563 containerd[1582]: time="2025-07-11T00:23:50.498489731Z" level=error msg="StopPodSandbox for \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\" failed" error="failed to destroy network for sandbox \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:23:50.499149 kubelet[2662]: E0711 00:23:50.498903 2662 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:23:50.499149 kubelet[2662]: E0711 00:23:50.498982 2662 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6"} Jul 11 00:23:50.499149 kubelet[2662]: E0711 00:23:50.499020 2662 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0fe6896-2f2f-4a85-81a1-6d288dfe16c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:23:50.499149 kubelet[2662]: E0711 00:23:50.499046 2662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0fe6896-2f2f-4a85-81a1-6d288dfe16c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8gdbn" podUID="b0fe6896-2f2f-4a85-81a1-6d288dfe16c3" Jul 11 00:23:50.539098 containerd[1582]: time="2025-07-11T00:23:50.538599831Z" level=info msg="StopPodSandbox for \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\"" Jul 11 00:23:50.750863 containerd[1582]: 2025-07-11 00:23:50.625 [INFO][4076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Jul 11 00:23:50.750863 containerd[1582]: 2025-07-11 00:23:50.625 [INFO][4076] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" iface="eth0" netns="/var/run/netns/cni-de9f5d1b-d384-40e9-186c-62aa103cac14" Jul 11 00:23:50.750863 containerd[1582]: 2025-07-11 00:23:50.626 [INFO][4076] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" iface="eth0" netns="/var/run/netns/cni-de9f5d1b-d384-40e9-186c-62aa103cac14" Jul 11 00:23:50.750863 containerd[1582]: 2025-07-11 00:23:50.626 [INFO][4076] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" iface="eth0" netns="/var/run/netns/cni-de9f5d1b-d384-40e9-186c-62aa103cac14" Jul 11 00:23:50.750863 containerd[1582]: 2025-07-11 00:23:50.626 [INFO][4076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Jul 11 00:23:50.750863 containerd[1582]: 2025-07-11 00:23:50.626 [INFO][4076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Jul 11 00:23:50.750863 containerd[1582]: 2025-07-11 00:23:50.707 [INFO][4085] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" HandleID="k8s-pod-network.c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Workload="localhost-k8s-whisker--645896967f--scc2m-eth0" Jul 11 00:23:50.750863 containerd[1582]: 2025-07-11 00:23:50.708 [INFO][4085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:23:50.750863 containerd[1582]: 2025-07-11 00:23:50.708 [INFO][4085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:23:50.750863 containerd[1582]: 2025-07-11 00:23:50.741 [WARNING][4085] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" HandleID="k8s-pod-network.c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Workload="localhost-k8s-whisker--645896967f--scc2m-eth0" Jul 11 00:23:50.750863 containerd[1582]: 2025-07-11 00:23:50.741 [INFO][4085] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" HandleID="k8s-pod-network.c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Workload="localhost-k8s-whisker--645896967f--scc2m-eth0" Jul 11 00:23:50.750863 containerd[1582]: 2025-07-11 00:23:50.744 [INFO][4085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:23:50.750863 containerd[1582]: 2025-07-11 00:23:50.747 [INFO][4076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Jul 11 00:23:50.759868 containerd[1582]: time="2025-07-11T00:23:50.750864987Z" level=info msg="TearDown network for sandbox \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\" successfully" Jul 11 00:23:50.759868 containerd[1582]: time="2025-07-11T00:23:50.750914640Z" level=info msg="StopPodSandbox for \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\" returns successfully" Jul 11 00:23:50.754479 systemd[1]: run-netns-cni\x2dde9f5d1b\x2dd384\x2d40e9\x2d186c\x2d62aa103cac14.mount: Deactivated successfully. 
Jul 11 00:23:50.856962 kubelet[2662]: I0711 00:23:50.856774 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6445237e-39b4-491d-b1a4-8b536523e561-whisker-ca-bundle\") pod \"6445237e-39b4-491d-b1a4-8b536523e561\" (UID: \"6445237e-39b4-491d-b1a4-8b536523e561\") " Jul 11 00:23:50.856962 kubelet[2662]: I0711 00:23:50.856838 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6445237e-39b4-491d-b1a4-8b536523e561-whisker-backend-key-pair\") pod \"6445237e-39b4-491d-b1a4-8b536523e561\" (UID: \"6445237e-39b4-491d-b1a4-8b536523e561\") " Jul 11 00:23:50.856962 kubelet[2662]: I0711 00:23:50.856860 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx97l\" (UniqueName: \"kubernetes.io/projected/6445237e-39b4-491d-b1a4-8b536523e561-kube-api-access-dx97l\") pod \"6445237e-39b4-491d-b1a4-8b536523e561\" (UID: \"6445237e-39b4-491d-b1a4-8b536523e561\") " Jul 11 00:23:50.857874 kubelet[2662]: I0711 00:23:50.857490 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6445237e-39b4-491d-b1a4-8b536523e561-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6445237e-39b4-491d-b1a4-8b536523e561" (UID: "6445237e-39b4-491d-b1a4-8b536523e561"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 11 00:23:50.861457 kubelet[2662]: I0711 00:23:50.861402 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6445237e-39b4-491d-b1a4-8b536523e561-kube-api-access-dx97l" (OuterVolumeSpecName: "kube-api-access-dx97l") pod "6445237e-39b4-491d-b1a4-8b536523e561" (UID: "6445237e-39b4-491d-b1a4-8b536523e561"). InnerVolumeSpecName "kube-api-access-dx97l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 00:23:50.865226 kubelet[2662]: I0711 00:23:50.864508 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6445237e-39b4-491d-b1a4-8b536523e561-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6445237e-39b4-491d-b1a4-8b536523e561" (UID: "6445237e-39b4-491d-b1a4-8b536523e561"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 11 00:23:50.864629 systemd[1]: var-lib-kubelet-pods-6445237e\x2d39b4\x2d491d\x2db1a4\x2d8b536523e561-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddx97l.mount: Deactivated successfully. Jul 11 00:23:50.864914 systemd[1]: var-lib-kubelet-pods-6445237e\x2d39b4\x2d491d\x2db1a4\x2d8b536523e561-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
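The two systemd mount units above are kubelet volume paths in escaped form: systemd unit names encode "/" as "-" and hex-escape other special characters, which is why every "-" inside the pod UID surfaces as \x2d. A simplified sketch of that escaping (it covers the common rules from systemd.unit(5); the special handling of a leading "." is omitted):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // escapeUnitPath applies systemd's path escaping: drop the leading "/",
    // turn remaining "/" separators into "-", and hex-escape any byte outside
    // [a-zA-Z0-9:_.] as \xNN.
    func escapeUnitPath(path string) string {
    	p := strings.TrimPrefix(path, "/")
    	var b strings.Builder
    	for i := 0; i < len(p); i++ {
    		c := p[i]
    		switch {
    		case c == '/':
    			b.WriteByte('-')
    		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
    			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
    			b.WriteByte(c)
    		default:
    			fmt.Fprintf(&b, `\x%02x`, c)
    		}
    	}
    	return b.String()
    }

    func main() {
    	p := "/var/lib/kubelet/pods/6445237e-39b4-491d-b1a4-8b536523e561/volumes"
    	fmt.Println(escapeUnitPath(p) + ".mount")
    }

Running it prints var-lib-kubelet-pods-6445237e\x2d39b4\x2d491d\x2db1a4\x2d8b536523e561-volumes.mount, matching the unit names in the log.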
Jul 11 00:23:50.958167 kubelet[2662]: I0711 00:23:50.958074 2662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx97l\" (UniqueName: \"kubernetes.io/projected/6445237e-39b4-491d-b1a4-8b536523e561-kube-api-access-dx97l\") on node \"localhost\" DevicePath \"\"" Jul 11 00:23:50.958167 kubelet[2662]: I0711 00:23:50.958124 2662 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6445237e-39b4-491d-b1a4-8b536523e561-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 00:23:50.958167 kubelet[2662]: I0711 00:23:50.958134 2662 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6445237e-39b4-491d-b1a4-8b536523e561-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 00:23:51.274319 kubelet[2662]: I0711 00:23:51.274232 2662 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:23:51.359354 kubelet[2662]: I0711 00:23:51.359262 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/99299513-6477-4a9d-8bd5-7813e9aaba71-whisker-backend-key-pair\") pod \"whisker-6f4d7498fc-gjn5s\" (UID: \"99299513-6477-4a9d-8bd5-7813e9aaba71\") " pod="calico-system/whisker-6f4d7498fc-gjn5s" Jul 11 00:23:51.359354 kubelet[2662]: I0711 00:23:51.359337 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99299513-6477-4a9d-8bd5-7813e9aaba71-whisker-ca-bundle\") pod \"whisker-6f4d7498fc-gjn5s\" (UID: \"99299513-6477-4a9d-8bd5-7813e9aaba71\") " pod="calico-system/whisker-6f4d7498fc-gjn5s" Jul 11 00:23:51.359600 kubelet[2662]: I0711 00:23:51.359446 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz5fj\" (UniqueName: \"kubernetes.io/projected/99299513-6477-4a9d-8bd5-7813e9aaba71-kube-api-access-qz5fj\") pod \"whisker-6f4d7498fc-gjn5s\" (UID: \"99299513-6477-4a9d-8bd5-7813e9aaba71\") " pod="calico-system/whisker-6f4d7498fc-gjn5s" Jul 11 00:23:51.469958 containerd[1582]: time="2025-07-11T00:23:51.469417230Z" level=info msg="StopPodSandbox for \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\"" Jul 11 00:23:51.519427 containerd[1582]: time="2025-07-11T00:23:51.518850712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f4d7498fc-gjn5s,Uid:99299513-6477-4a9d-8bd5-7813e9aaba71,Namespace:calico-system,Attempt:0,}" Jul 11 00:23:51.984560 containerd[1582]: 2025-07-11 00:23:51.848 [INFO][4141] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Jul 11 00:23:51.984560 containerd[1582]: 2025-07-11 00:23:51.851 [INFO][4141] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" iface="eth0" netns="/var/run/netns/cni-0036a822-6ed8-750f-ad8a-d3873ea91bfe" Jul 11 00:23:51.984560 containerd[1582]: 2025-07-11 00:23:51.851 [INFO][4141] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" iface="eth0" netns="/var/run/netns/cni-0036a822-6ed8-750f-ad8a-d3873ea91bfe" Jul 11 00:23:51.984560 containerd[1582]: 2025-07-11 00:23:51.851 [INFO][4141] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" iface="eth0" netns="/var/run/netns/cni-0036a822-6ed8-750f-ad8a-d3873ea91bfe" Jul 11 00:23:51.984560 containerd[1582]: 2025-07-11 00:23:51.851 [INFO][4141] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Jul 11 00:23:51.984560 containerd[1582]: 2025-07-11 00:23:51.851 [INFO][4141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Jul 11 00:23:51.984560 containerd[1582]: 2025-07-11 00:23:51.921 [INFO][4172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" HandleID="k8s-pod-network.ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Workload="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:23:51.984560 containerd[1582]: 2025-07-11 00:23:51.922 [INFO][4172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:23:51.984560 containerd[1582]: 2025-07-11 00:23:51.922 [INFO][4172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:23:51.984560 containerd[1582]: 2025-07-11 00:23:51.942 [WARNING][4172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" HandleID="k8s-pod-network.ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Workload="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:23:51.984560 containerd[1582]: 2025-07-11 00:23:51.944 [INFO][4172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" HandleID="k8s-pod-network.ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Workload="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:23:51.984560 containerd[1582]: 2025-07-11 00:23:51.949 [INFO][4172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:23:51.984560 containerd[1582]: 2025-07-11 00:23:51.973 [INFO][4141] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Jul 11 00:23:51.986976 containerd[1582]: time="2025-07-11T00:23:51.986504252Z" level=info msg="TearDown network for sandbox \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\" successfully" Jul 11 00:23:51.986976 containerd[1582]: time="2025-07-11T00:23:51.986551170Z" level=info msg="StopPodSandbox for \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\" returns successfully" Jul 11 00:23:51.990576 kubelet[2662]: E0711 00:23:51.990519 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:51.995291 containerd[1582]: time="2025-07-11T00:23:51.993466005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vqw25,Uid:534af8e4-6518-491a-9219-a3b30c552e4b,Namespace:kube-system,Attempt:1,}" Jul 11 00:23:51.994777 systemd[1]: run-netns-cni\x2d0036a822\x2d6ed8\x2d750f\x2dad8a\x2dd3873ea91bfe.mount: Deactivated successfully. Jul 11 00:23:52.275409 systemd-networkd[1242]: cali8cb6653a347: Link UP Jul 11 00:23:52.278468 systemd-networkd[1242]: cali8cb6653a347: Gained carrier Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:51.954 [INFO][4205] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:51.980 [INFO][4205] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6f4d7498fc--gjn5s-eth0 whisker-6f4d7498fc- calico-system 99299513-6477-4a9d-8bd5-7813e9aaba71 1021 0 2025-07-11 00:23:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f4d7498fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6f4d7498fc-gjn5s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8cb6653a347 [] [] }} ContainerID="eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" Namespace="calico-system" Pod="whisker-6f4d7498fc-gjn5s" WorkloadEndpoint="localhost-k8s-whisker--6f4d7498fc--gjn5s-" Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:51.991 [INFO][4205] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" Namespace="calico-system" Pod="whisker-6f4d7498fc-gjn5s" WorkloadEndpoint="localhost-k8s-whisker--6f4d7498fc--gjn5s-eth0" Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.081 [INFO][4282] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" HandleID="k8s-pod-network.eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" Workload="localhost-k8s-whisker--6f4d7498fc--gjn5s-eth0" Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.082 [INFO][4282] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" HandleID="k8s-pod-network.eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" Workload="localhost-k8s-whisker--6f4d7498fc--gjn5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000116e80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6f4d7498fc-gjn5s", "timestamp":"2025-07-11 
00:23:52.080305365 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.083 [INFO][4282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.083 [INFO][4282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.083 [INFO][4282] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.133 [INFO][4282] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" host="localhost" Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.189 [INFO][4282] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.205 [INFO][4282] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.211 [INFO][4282] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.224 [INFO][4282] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.224 [INFO][4282] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" host="localhost" Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.227 [INFO][4282] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.237 [INFO][4282] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" host="localhost" Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.253 [INFO][4282] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" host="localhost" Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.253 [INFO][4282] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" host="localhost" Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.253 [INFO][4282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
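The ipam.go lines above compress Calico's whole assignment path: confirm the host's affinity to block 192.168.88.128/26, load the block, scan it for a free address, and claim 192.168.88.129. A toy version of that scan with net/netip, with an in-memory map standing in for the datastore and the block's network address pre-marked so the toy also yields .129; the mutex plays the role of the "host-wide IPAM lock" logged around each request:

    package main

    import (
    	"fmt"
    	"net/netip"
    	"sync"
    )

    // block models one Calico IPAM affinity block: a /26 whose addresses are
    // handed out to workloads on the host holding the affinity.
    type block struct {
    	mu    sync.Mutex            // stands in for the host-wide IPAM lock
    	cidr  netip.Prefix
    	inUse map[netip.Addr]string // addr -> handleID
    }

    // assign claims the first free address in the block for the given handle,
    // mirroring "Attempting to assign 1 addresses from block".
    func (b *block) assign(handleID string) (netip.Addr, bool) {
    	b.mu.Lock()
    	defer b.mu.Unlock()
    	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
    		if _, taken := b.inUse[a]; !taken {
    			b.inUse[a] = handleID
    			return a, true
    		}
    	}
    	return netip.Addr{}, false
    }

    func main() {
    	b := &block{
    		cidr: netip.MustParsePrefix("192.168.88.128/26"),
    		// Pre-mark the network address so the first claim lands on .129,
    		// as in the log.
    		inUse: map[netip.Addr]string{netip.MustParseAddr("192.168.88.128"): "reserved"},
    	}
    	addr, _ := b.assign("k8s-pod-network.eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf")
    	fmt.Println(addr) // 192.168.88.129
    }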
Jul 11 00:23:52.313993 containerd[1582]: 2025-07-11 00:23:52.253 [INFO][4282] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" HandleID="k8s-pod-network.eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" Workload="localhost-k8s-whisker--6f4d7498fc--gjn5s-eth0" Jul 11 00:23:52.314850 containerd[1582]: 2025-07-11 00:23:52.259 [INFO][4205] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" Namespace="calico-system" Pod="whisker-6f4d7498fc-gjn5s" WorkloadEndpoint="localhost-k8s-whisker--6f4d7498fc--gjn5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f4d7498fc--gjn5s-eth0", GenerateName:"whisker-6f4d7498fc-", Namespace:"calico-system", SelfLink:"", UID:"99299513-6477-4a9d-8bd5-7813e9aaba71", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f4d7498fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6f4d7498fc-gjn5s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8cb6653a347", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:23:52.314850 containerd[1582]: 2025-07-11 00:23:52.260 [INFO][4205] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" Namespace="calico-system" Pod="whisker-6f4d7498fc-gjn5s" WorkloadEndpoint="localhost-k8s-whisker--6f4d7498fc--gjn5s-eth0" Jul 11 00:23:52.314850 containerd[1582]: 2025-07-11 00:23:52.260 [INFO][4205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cb6653a347 ContainerID="eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" Namespace="calico-system" Pod="whisker-6f4d7498fc-gjn5s" WorkloadEndpoint="localhost-k8s-whisker--6f4d7498fc--gjn5s-eth0" Jul 11 00:23:52.314850 containerd[1582]: 2025-07-11 00:23:52.280 [INFO][4205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" Namespace="calico-system" Pod="whisker-6f4d7498fc-gjn5s" WorkloadEndpoint="localhost-k8s-whisker--6f4d7498fc--gjn5s-eth0" Jul 11 00:23:52.314850 containerd[1582]: 2025-07-11 00:23:52.280 [INFO][4205] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" Namespace="calico-system" Pod="whisker-6f4d7498fc-gjn5s" WorkloadEndpoint="localhost-k8s-whisker--6f4d7498fc--gjn5s-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f4d7498fc--gjn5s-eth0", GenerateName:"whisker-6f4d7498fc-", Namespace:"calico-system", SelfLink:"", UID:"99299513-6477-4a9d-8bd5-7813e9aaba71", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f4d7498fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf", Pod:"whisker-6f4d7498fc-gjn5s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8cb6653a347", MAC:"2e:6c:eb:2c:63:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:23:52.314850 containerd[1582]: 2025-07-11 00:23:52.299 [INFO][4205] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf" Namespace="calico-system" Pod="whisker-6f4d7498fc-gjn5s" WorkloadEndpoint="localhost-k8s-whisker--6f4d7498fc--gjn5s-eth0" Jul 11 00:23:52.401751 containerd[1582]: time="2025-07-11T00:23:52.401495721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:23:52.401751 containerd[1582]: time="2025-07-11T00:23:52.401680918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:23:52.401751 containerd[1582]: time="2025-07-11T00:23:52.401711275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:52.403822 containerd[1582]: time="2025-07-11T00:23:52.402620842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:52.455222 kernel: bpftool[4392]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 11 00:23:52.461420 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:23:52.470045 containerd[1582]: time="2025-07-11T00:23:52.469990186Z" level=info msg="StopPodSandbox for \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\"" Jul 11 00:23:52.473749 kubelet[2662]: I0711 00:23:52.473691 2662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6445237e-39b4-491d-b1a4-8b536523e561" path="/var/lib/kubelet/pods/6445237e-39b4-491d-b1a4-8b536523e561/volumes" Jul 11 00:23:52.501927 systemd-networkd[1242]: cali89c53d7dc3c: Link UP Jul 11 00:23:52.506720 systemd-networkd[1242]: cali89c53d7dc3c: Gained carrier Jul 11 00:23:52.516005 containerd[1582]: time="2025-07-11T00:23:52.514393787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f4d7498fc-gjn5s,Uid:99299513-6477-4a9d-8bd5-7813e9aaba71,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf\"" Jul 11 00:23:52.530286 containerd[1582]: time="2025-07-11T00:23:52.530063030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.207 [INFO][4295] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.233 [INFO][4295] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0 coredns-7c65d6cfc9- kube-system 534af8e4-6518-491a-9219-a3b30c552e4b 1027 0 2025-07-11 00:23:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-vqw25 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali89c53d7dc3c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqw25" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqw25-" Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.233 [INFO][4295] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqw25" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.387 [INFO][4313] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" HandleID="k8s-pod-network.dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" Workload="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.388 [INFO][4313] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" HandleID="k8s-pod-network.dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" Workload="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e330), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-vqw25", "timestamp":"2025-07-11 00:23:52.387503765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.388 [INFO][4313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.388 [INFO][4313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.388 [INFO][4313] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.398 [INFO][4313] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" host="localhost" Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.407 [INFO][4313] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.418 [INFO][4313] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.420 [INFO][4313] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.424 [INFO][4313] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.424 [INFO][4313] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" host="localhost" Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.427 [INFO][4313] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8 Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.477 [INFO][4313] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" host="localhost" Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.489 [INFO][4313] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" host="localhost" Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.489 [INFO][4313] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" host="localhost" Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.489 [INFO][4313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
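Both assignments key their claims by HandleID "k8s-pod-network." plus the sandbox's container ID, and the earlier teardowns release by the same handle. That is what keeps deletes idempotent: a second release finds nothing and the plugin just logs "Asked to release address but it doesn't exist. Ignoring". A sketch of that release-by-handle contract:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // release frees every address recorded under a handle, mirroring
    // "Releasing address using handleID". An unknown handle is not an error;
    // the real plugin ignores it so repeated teardowns stay safe.
    func release(inUse map[netip.Addr]string, handleID string) int {
    	n := 0
    	for addr, h := range inUse {
    		if h == handleID {
    			delete(inUse, addr)
    			n++
    		}
    	}
    	return n
    }

    func main() {
    	handle := "k8s-pod-network.dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8"
    	inUse := map[netip.Addr]string{
    		netip.MustParseAddr("192.168.88.130"): handle,
    	}
    	fmt.Println(release(inUse, handle)) // 1: address freed
    	fmt.Println(release(inUse, handle)) // 0: already gone, ignored
    }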
Jul 11 00:23:52.532330 containerd[1582]: 2025-07-11 00:23:52.489 [INFO][4313] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" HandleID="k8s-pod-network.dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" Workload="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:23:52.532994 containerd[1582]: 2025-07-11 00:23:52.495 [INFO][4295] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqw25" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"534af8e4-6518-491a-9219-a3b30c552e4b", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-vqw25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89c53d7dc3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:23:52.532994 containerd[1582]: 2025-07-11 00:23:52.496 [INFO][4295] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqw25" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:23:52.532994 containerd[1582]: 2025-07-11 00:23:52.496 [INFO][4295] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89c53d7dc3c ContainerID="dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqw25" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:23:52.532994 containerd[1582]: 2025-07-11 00:23:52.508 [INFO][4295] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqw25" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:23:52.532994 
containerd[1582]: 2025-07-11 00:23:52.509 [INFO][4295] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqw25" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"534af8e4-6518-491a-9219-a3b30c552e4b", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8", Pod:"coredns-7c65d6cfc9-vqw25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89c53d7dc3c", MAC:"0a:f1:a7:fe:16:86", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:23:52.532994 containerd[1582]: 2025-07-11 00:23:52.524 [INFO][4295] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vqw25" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:23:52.579130 containerd[1582]: time="2025-07-11T00:23:52.577924269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:23:52.579130 containerd[1582]: time="2025-07-11T00:23:52.578907434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:23:52.579130 containerd[1582]: time="2025-07-11T00:23:52.578931509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:52.579462 containerd[1582]: time="2025-07-11T00:23:52.579233856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:52.616415 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:23:52.657943 containerd[1582]: 2025-07-11 00:23:52.593 [INFO][4410] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Jul 11 00:23:52.657943 containerd[1582]: 2025-07-11 00:23:52.593 [INFO][4410] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" iface="eth0" netns="/var/run/netns/cni-ab0a0780-060f-f784-b621-9cd7a2e83e3d" Jul 11 00:23:52.657943 containerd[1582]: 2025-07-11 00:23:52.593 [INFO][4410] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" iface="eth0" netns="/var/run/netns/cni-ab0a0780-060f-f784-b621-9cd7a2e83e3d" Jul 11 00:23:52.657943 containerd[1582]: 2025-07-11 00:23:52.596 [INFO][4410] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" iface="eth0" netns="/var/run/netns/cni-ab0a0780-060f-f784-b621-9cd7a2e83e3d" Jul 11 00:23:52.657943 containerd[1582]: 2025-07-11 00:23:52.596 [INFO][4410] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Jul 11 00:23:52.657943 containerd[1582]: 2025-07-11 00:23:52.596 [INFO][4410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Jul 11 00:23:52.657943 containerd[1582]: 2025-07-11 00:23:52.636 [INFO][4459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" HandleID="k8s-pod-network.0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Workload="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:23:52.657943 containerd[1582]: 2025-07-11 00:23:52.637 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:23:52.657943 containerd[1582]: 2025-07-11 00:23:52.637 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:23:52.657943 containerd[1582]: 2025-07-11 00:23:52.645 [WARNING][4459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" HandleID="k8s-pod-network.0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Workload="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:23:52.657943 containerd[1582]: 2025-07-11 00:23:52.645 [INFO][4459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" HandleID="k8s-pod-network.0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Workload="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:23:52.657943 containerd[1582]: 2025-07-11 00:23:52.647 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:23:52.657943 containerd[1582]: 2025-07-11 00:23:52.652 [INFO][4410] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Jul 11 00:23:52.657943 containerd[1582]: time="2025-07-11T00:23:52.657390142Z" level=info msg="TearDown network for sandbox \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\" successfully" Jul 11 00:23:52.657943 containerd[1582]: time="2025-07-11T00:23:52.657429005Z" level=info msg="StopPodSandbox for \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\" returns successfully" Jul 11 00:23:52.658551 containerd[1582]: time="2025-07-11T00:23:52.658481039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85764bc598-vpq2r,Uid:28fd8700-a2a2-4ca6-a552-c1abce22725b,Namespace:calico-system,Attempt:1,}" Jul 11 00:23:52.667693 containerd[1582]: time="2025-07-11T00:23:52.667634815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vqw25,Uid:534af8e4-6518-491a-9219-a3b30c552e4b,Namespace:kube-system,Attempt:1,} returns sandbox id \"dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8\"" Jul 11 00:23:52.668697 kubelet[2662]: E0711 00:23:52.668668 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:52.671100 containerd[1582]: time="2025-07-11T00:23:52.670937753Z" level=info msg="CreateContainer within sandbox \"dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:23:52.697929 containerd[1582]: time="2025-07-11T00:23:52.697864980Z" level=info msg="CreateContainer within sandbox \"dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f41054a04e14e93621ae76b590a176567904e588345e548fcea89bd07ecf153\"" Jul 11 00:23:52.698478 containerd[1582]: time="2025-07-11T00:23:52.698445320Z" level=info msg="StartContainer for \"9f41054a04e14e93621ae76b590a176567904e588345e548fcea89bd07ecf153\"" Jul 11 00:23:52.760695 systemd[1]: run-netns-cni\x2dab0a0780\x2d060f\x2df784\x2db621\x2d9cd7a2e83e3d.mount: Deactivated successfully. 
Jul 11 00:23:52.899615 systemd-networkd[1242]: vxlan.calico: Link UP Jul 11 00:23:52.899628 systemd-networkd[1242]: vxlan.calico: Gained carrier Jul 11 00:23:52.933268 containerd[1582]: time="2025-07-11T00:23:52.933166811Z" level=info msg="StartContainer for \"9f41054a04e14e93621ae76b590a176567904e588345e548fcea89bd07ecf153\" returns successfully" Jul 11 00:23:53.022838 systemd-networkd[1242]: cali42bc9137916: Link UP Jul 11 00:23:53.023172 systemd-networkd[1242]: cali42bc9137916: Gained carrier Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.721 [INFO][4481] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0 calico-kube-controllers-85764bc598- calico-system 28fd8700-a2a2-4ca6-a552-c1abce22725b 1042 0 2025-07-11 00:23:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85764bc598 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-85764bc598-vpq2r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali42bc9137916 [] [] }} ContainerID="335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" Namespace="calico-system" Pod="calico-kube-controllers-85764bc598-vpq2r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-" Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.722 [INFO][4481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" Namespace="calico-system" Pod="calico-kube-controllers-85764bc598-vpq2r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.765 [INFO][4512] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" HandleID="k8s-pod-network.335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" Workload="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.765 [INFO][4512] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" HandleID="k8s-pod-network.335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" Workload="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ea10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-85764bc598-vpq2r", "timestamp":"2025-07-11 00:23:52.76508228 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.765 [INFO][4512] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.765 [INFO][4512] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
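vxlan.calico coming up at the top of this burst is the VXLAN device Calico's Felix creates for pod-to-pod encapsulation between nodes. A sketch of creating a comparable device with github.com/vishvananda/netlink; VNI 4096 and UDP port 4789 are Calico's documented defaults, and eth0 as the underlying uplink is an assumption:

    package main

    import (
    	"fmt"

    	"github.com/vishvananda/netlink"
    )

    func main() {
    	// Assumption: the node's uplink is eth0; Felix picks the real one
    	// from its autodetected node IP.
    	parent, err := netlink.LinkByName("eth0")
    	if err != nil {
    		panic(err)
    	}
    	vxlan := &netlink.Vxlan{
    		LinkAttrs:    netlink.LinkAttrs{Name: "vxlan.calico"},
    		VxlanId:      4096, // Calico's default VNI
    		Port:         4789, // standard VXLAN UDP port
    		VtepDevIndex: parent.Attrs().Index,
    	}
    	if err := netlink.LinkAdd(vxlan); err != nil {
    		panic(err)
    	}
    	// Bringing the link up is what systemd-networkd reports above as
    	// "Link UP" then "Gained carrier".
    	if err := netlink.LinkSetUp(vxlan); err != nil {
    		panic(err)
    	}
    	fmt.Println("created", vxlan.Name)
    }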
Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.765 [INFO][4512] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.816 [INFO][4512] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" host="localhost" Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.823 [INFO][4512] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.829 [INFO][4512] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.831 [INFO][4512] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.834 [INFO][4512] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.834 [INFO][4512] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" host="localhost" Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.835 [INFO][4512] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:52.905 [INFO][4512] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" host="localhost" Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:53.016 [INFO][4512] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" host="localhost" Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:53.016 [INFO][4512] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" host="localhost" Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:53.016 [INFO][4512] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
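"Writing block in order to claim IPs" is the commit point: the updated block is written back to the datastore with its revision checked, so two hosts racing on one block cannot both take the same address, and the loser reloads and retries. A toy revisioned store showing that compare-and-swap shape (the types are illustrative, not Calico's):

    package main

    import (
    	"fmt"
    	"sync"
    )

    // store is a toy revisioned datastore: a write succeeds only if the caller
    // saw the latest revision, which is how writing a block back stays safe
    // under concurrent claimants.
    type store struct {
    	mu  sync.Mutex
    	rev int
    	val []string // allocated addresses in the block
    }

    func (s *store) read() (int, []string) {
    	s.mu.Lock()
    	defer s.mu.Unlock()
    	return s.rev, append([]string(nil), s.val...)
    }

    // writeIfRev is the compare-and-swap: it fails if someone else wrote first.
    func (s *store) writeIfRev(rev int, val []string) bool {
    	s.mu.Lock()
    	defer s.mu.Unlock()
    	if rev != s.rev {
    		return false
    	}
    	s.rev++
    	s.val = val
    	return true
    }

    func claim(s *store, addr string) {
    	for {
    		rev, allocated := s.read()
    		allocated = append(allocated, addr)
    		if s.writeIfRev(rev, allocated) {
    			return // claim committed
    		}
    		// Lost the race: reload the block and retry.
    	}
    }

    func main() {
    	s := &store{}
    	claim(s, "192.168.88.131")
    	_, val := s.read()
    	fmt.Println(val)
    }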
Jul 11 00:23:53.140452 containerd[1582]: 2025-07-11 00:23:53.016 [INFO][4512] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" HandleID="k8s-pod-network.335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" Workload="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:23:53.141663 containerd[1582]: 2025-07-11 00:23:53.020 [INFO][4481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" Namespace="calico-system" Pod="calico-kube-controllers-85764bc598-vpq2r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0", GenerateName:"calico-kube-controllers-85764bc598-", Namespace:"calico-system", SelfLink:"", UID:"28fd8700-a2a2-4ca6-a552-c1abce22725b", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85764bc598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-85764bc598-vpq2r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali42bc9137916", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:23:53.141663 containerd[1582]: 2025-07-11 00:23:53.020 [INFO][4481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" Namespace="calico-system" Pod="calico-kube-controllers-85764bc598-vpq2r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:23:53.141663 containerd[1582]: 2025-07-11 00:23:53.020 [INFO][4481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42bc9137916 ContainerID="335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" Namespace="calico-system" Pod="calico-kube-controllers-85764bc598-vpq2r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:23:53.141663 containerd[1582]: 2025-07-11 00:23:53.024 [INFO][4481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" Namespace="calico-system" Pod="calico-kube-controllers-85764bc598-vpq2r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:23:53.141663 containerd[1582]: 2025-07-11 00:23:53.024 [INFO][4481] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" Namespace="calico-system" Pod="calico-kube-controllers-85764bc598-vpq2r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0", GenerateName:"calico-kube-controllers-85764bc598-", Namespace:"calico-system", SelfLink:"", UID:"28fd8700-a2a2-4ca6-a552-c1abce22725b", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85764bc598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad", Pod:"calico-kube-controllers-85764bc598-vpq2r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali42bc9137916", MAC:"9e:92:97:ad:86:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:23:53.141663 containerd[1582]: 2025-07-11 00:23:53.133 [INFO][4481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad" Namespace="calico-system" Pod="calico-kube-controllers-85764bc598-vpq2r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:23:53.161758 kubelet[2662]: E0711 00:23:53.160501 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:53.200025 kubelet[2662]: I0711 00:23:53.197778 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vqw25" podStartSLOduration=42.197750815 podStartE2EDuration="42.197750815s" podCreationTimestamp="2025-07-11 00:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:23:53.191529451 +0000 UTC m=+46.874016984" watchObservedRunningTime="2025-07-11 00:23:53.197750815 +0000 UTC m=+46.880238328" Jul 11 00:23:53.200184 containerd[1582]: time="2025-07-11T00:23:53.190210486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:23:53.200184 containerd[1582]: time="2025-07-11T00:23:53.190452991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:23:53.200184 containerd[1582]: time="2025-07-11T00:23:53.190464532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:53.200184 containerd[1582]: time="2025-07-11T00:23:53.190766328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:53.243339 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:23:53.303057 containerd[1582]: time="2025-07-11T00:23:53.303002246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85764bc598-vpq2r,Uid:28fd8700-a2a2-4ca6-a552-c1abce22725b,Namespace:calico-system,Attempt:1,} returns sandbox id \"335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad\"" Jul 11 00:23:53.884522 systemd-networkd[1242]: cali89c53d7dc3c: Gained IPv6LL Jul 11 00:23:54.141685 systemd-networkd[1242]: cali42bc9137916: Gained IPv6LL Jul 11 00:23:54.180336 kubelet[2662]: E0711 00:23:54.180291 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:54.205729 systemd-networkd[1242]: cali8cb6653a347: Gained IPv6LL Jul 11 00:23:54.228493 containerd[1582]: time="2025-07-11T00:23:54.228401587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:54.229394 containerd[1582]: time="2025-07-11T00:23:54.229295264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 11 00:23:54.230745 containerd[1582]: time="2025-07-11T00:23:54.230716751Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:54.234806 containerd[1582]: time="2025-07-11T00:23:54.234740340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:54.235429 containerd[1582]: time="2025-07-11T00:23:54.235374431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.70525772s" Jul 11 00:23:54.235429 containerd[1582]: time="2025-07-11T00:23:54.235417622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 11 00:23:54.236720 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:39392.service - OpenSSH per-connection server daemon (10.0.0.1:39392). 
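The pod_startup_latency_tracker entry above for coredns-7c65d6cfc9-vqw25 can be checked by hand: both pull timestamps are the zero value (0001-01-01, nothing was pulled), so podStartSLOduration equals podStartE2EDuration, i.e. watchObservedRunningTime minus podCreationTimestamp. A minimal sketch of that arithmetic, with the timestamps copied from the entry (the SLO figure is, as I understand the tracker, the E2E duration minus the image-pull window):

    from datetime import datetime, timezone

    # Timestamps copied from the kubelet entry above (nanoseconds
    # truncated to microseconds, the most datetime can carry).
    created  = datetime(2025, 7, 11, 0, 23, 11, 0,      tzinfo=timezone.utc)
    observed = datetime(2025, 7, 11, 0, 23, 53, 197750, tzinfo=timezone.utc)
    pull_window = 0.0  # firstStartedPulling/lastFinishedPulling are zero values

    e2e = (observed - created).total_seconds()
    print(f"E2E={e2e:.6f}s  SLO={e2e - pull_window:.6f}s")
    # -> E2E=42.197750s  SLO=42.197750s, matching podStartSLOduration=42.197750815s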
Jul 11 00:23:54.240603 containerd[1582]: time="2025-07-11T00:23:54.240058469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 00:23:54.241546 containerd[1582]: time="2025-07-11T00:23:54.241493181Z" level=info msg="CreateContainer within sandbox \"eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 00:23:54.263982 containerd[1582]: time="2025-07-11T00:23:54.263913028Z" level=info msg="CreateContainer within sandbox \"eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"960bca32a53590a10c5d86452c6220d45c5cba879a75473a9c2a8b907f077868\"" Jul 11 00:23:54.265019 containerd[1582]: time="2025-07-11T00:23:54.264946296Z" level=info msg="StartContainer for \"960bca32a53590a10c5d86452c6220d45c5cba879a75473a9c2a8b907f077868\"" Jul 11 00:23:54.293889 sshd[4678]: Accepted publickey for core from 10.0.0.1 port 39392 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:23:54.297276 sshd[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:54.310575 systemd-logind[1554]: New session 9 of user core. Jul 11 00:23:54.320641 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:23:54.375000 containerd[1582]: time="2025-07-11T00:23:54.374947523Z" level=info msg="StartContainer for \"960bca32a53590a10c5d86452c6220d45c5cba879a75473a9c2a8b907f077868\" returns successfully" Jul 11 00:23:54.461486 systemd-networkd[1242]: vxlan.calico: Gained IPv6LL Jul 11 00:23:54.498756 sshd[4678]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:54.505053 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:39392.service: Deactivated successfully. Jul 11 00:23:54.508140 systemd-logind[1554]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:23:54.508270 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:23:54.511634 systemd-logind[1554]: Removed session 9. 
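The whisker pull above completed "in 1.70525772s"; the kube-controllers and whisker-backend pulls later in this log take about 3.5s and 4.3s. A throwaway extractor for ranking pulls by duration, assuming the journal has been saved to a plain-text file (node.log is a made-up name) and handling only the plain-seconds Go duration form these messages use:

    import re

    # containerd logs: msg="Pulled image \"<ref>\" with image id ... in <n>s"
    # The quotes around the image ref are backslash-escaped inside msg="...".
    PULLED = re.compile(r'Pulled image \\"([^"\\]+)\\".*? in ([0-9.]+)s')

    def pull_durations(path="node.log"):
        with open(path, errors="replace") as f:
            for line in f:
                for ref, secs in PULLED.findall(line):
                    yield float(secs), ref

    for secs, ref in sorted(pull_durations(), reverse=True):
        print(f"{secs:9.3f}s  {ref}")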
Jul 11 00:23:55.183891 kubelet[2662]: E0711 00:23:55.183855 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:57.656349 containerd[1582]: time="2025-07-11T00:23:57.656253079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:57.670085 containerd[1582]: time="2025-07-11T00:23:57.669970355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 11 00:23:57.697940 containerd[1582]: time="2025-07-11T00:23:57.697841065Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:57.763635 containerd[1582]: time="2025-07-11T00:23:57.763548922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:23:57.765350 containerd[1582]: time="2025-07-11T00:23:57.765278307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.525176536s" Jul 11 00:23:57.765350 containerd[1582]: time="2025-07-11T00:23:57.765320305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 11 00:23:57.766530 containerd[1582]: time="2025-07-11T00:23:57.766495861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 00:23:57.780046 containerd[1582]: time="2025-07-11T00:23:57.779986361Z" level=info msg="CreateContainer within sandbox \"335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 00:23:57.805394 containerd[1582]: time="2025-07-11T00:23:57.805293682Z" level=info msg="CreateContainer within sandbox \"335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"88cbb3057733338f3e56435dbadf1dff1d509725691371aaa2260bec823429ec\"" Jul 11 00:23:57.806142 containerd[1582]: time="2025-07-11T00:23:57.806094594Z" level=info msg="StartContainer for \"88cbb3057733338f3e56435dbadf1dff1d509725691371aaa2260bec823429ec\"" Jul 11 00:23:57.907983 containerd[1582]: time="2025-07-11T00:23:57.907820566Z" level=info msg="StartContainer for \"88cbb3057733338f3e56435dbadf1dff1d509725691371aaa2260bec823429ec\" returns successfully" Jul 11 00:23:58.381561 kubelet[2662]: I0711 00:23:58.381471 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85764bc598-vpq2r" podStartSLOduration=29.924847643 podStartE2EDuration="34.381446491s" podCreationTimestamp="2025-07-11 00:23:24 +0000 UTC" firstStartedPulling="2025-07-11 00:23:53.309728156 +0000 UTC m=+46.992215669" lastFinishedPulling="2025-07-11 
00:23:57.766326984 +0000 UTC m=+51.448814517" observedRunningTime="2025-07-11 00:23:58.263683834 +0000 UTC m=+51.946171377" watchObservedRunningTime="2025-07-11 00:23:58.381446491 +0000 UTC m=+52.063933994" Jul 11 00:23:59.467558 containerd[1582]: time="2025-07-11T00:23:59.467464641Z" level=info msg="StopPodSandbox for \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\"" Jul 11 00:23:59.508698 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:39398.service - OpenSSH per-connection server daemon (10.0.0.1:39398). Jul 11 00:23:59.586513 sshd[4833]: Accepted publickey for core from 10.0.0.1 port 39398 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:23:59.589857 sshd[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:59.597100 systemd-logind[1554]: New session 10 of user core. Jul 11 00:23:59.600647 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:23:59.615171 containerd[1582]: 2025-07-11 00:23:59.561 [INFO][4826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:23:59.615171 containerd[1582]: 2025-07-11 00:23:59.562 [INFO][4826] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" iface="eth0" netns="/var/run/netns/cni-668bf6c8-f6c2-3f6e-433b-a122ef0ee995" Jul 11 00:23:59.615171 containerd[1582]: 2025-07-11 00:23:59.562 [INFO][4826] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" iface="eth0" netns="/var/run/netns/cni-668bf6c8-f6c2-3f6e-433b-a122ef0ee995" Jul 11 00:23:59.615171 containerd[1582]: 2025-07-11 00:23:59.562 [INFO][4826] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" iface="eth0" netns="/var/run/netns/cni-668bf6c8-f6c2-3f6e-433b-a122ef0ee995" Jul 11 00:23:59.615171 containerd[1582]: 2025-07-11 00:23:59.562 [INFO][4826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:23:59.615171 containerd[1582]: 2025-07-11 00:23:59.562 [INFO][4826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:23:59.615171 containerd[1582]: 2025-07-11 00:23:59.594 [INFO][4837] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" HandleID="k8s-pod-network.794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:23:59.615171 containerd[1582]: 2025-07-11 00:23:59.594 [INFO][4837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:23:59.615171 containerd[1582]: 2025-07-11 00:23:59.594 [INFO][4837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:23:59.615171 containerd[1582]: 2025-07-11 00:23:59.601 [WARNING][4837] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" HandleID="k8s-pod-network.794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:23:59.615171 containerd[1582]: 2025-07-11 00:23:59.602 [INFO][4837] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" HandleID="k8s-pod-network.794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:23:59.615171 containerd[1582]: 2025-07-11 00:23:59.605 [INFO][4837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:23:59.615171 containerd[1582]: 2025-07-11 00:23:59.611 [INFO][4826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:23:59.615756 containerd[1582]: time="2025-07-11T00:23:59.615429277Z" level=info msg="TearDown network for sandbox \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\" successfully" Jul 11 00:23:59.615756 containerd[1582]: time="2025-07-11T00:23:59.615470067Z" level=info msg="StopPodSandbox for \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\" returns successfully" Jul 11 00:23:59.619043 containerd[1582]: time="2025-07-11T00:23:59.618992148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dcd5c4d5-t5svc,Uid:a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:23:59.619605 systemd[1]: run-netns-cni\x2d668bf6c8\x2df6c2\x2d3f6e\x2d433b\x2da122ef0ee995.mount: Deactivated successfully. Jul 11 00:23:59.764721 sshd[4833]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:59.769878 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:39398.service: Deactivated successfully. Jul 11 00:23:59.772844 systemd-logind[1554]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:23:59.773132 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:23:59.774858 systemd-logind[1554]: Removed session 10. 
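The kubelet dns.go:153 error that keeps appearing above means the node's resolv.conf lists more nameservers than the resolver limit of three, so kubelet truncates the list it hands to pods (here to "1.1.1.1 1.0.0.1 8.8.8.8"). A quick standalone check of the same condition, assuming a conventional /etc/resolv.conf:

    # glibc's resolver (and kubelet's DNS configuration) honours at most
    # 3 nameservers; anything beyond that is dropped, which is exactly
    # what the dns.go:153 entries above are warning about.
    MAX_NAMESERVERS = 3

    def check_resolv_conf(path="/etc/resolv.conf"):
        servers = []
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) >= 2 and fields[0] == "nameserver":
                    servers.append(fields[1])
        if len(servers) > MAX_NAMESERVERS:
            kept, dropped = servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]
            print(f"limit exceeded: applying {kept}, omitting {dropped}")
        else:
            print(f"ok: {servers}")

    check_resolv_conf()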
Jul 11 00:23:59.948569 systemd-networkd[1242]: cali8aad30be039: Link UP Jul 11 00:23:59.948825 systemd-networkd[1242]: cali8aad30be039: Gained carrier Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.689 [INFO][4846] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0 calico-apiserver-59dcd5c4d5- calico-apiserver a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7 1103 0 2025-07-11 00:23:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59dcd5c4d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59dcd5c4d5-t5svc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8aad30be039 [] [] }} ContainerID="d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-t5svc" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-" Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.689 [INFO][4846] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-t5svc" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.725 [INFO][4870] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" HandleID="k8s-pod-network.d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.725 [INFO][4870] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" HandleID="k8s-pod-network.d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b26b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59dcd5c4d5-t5svc", "timestamp":"2025-07-11 00:23:59.725109121 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.725 [INFO][4870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.725 [INFO][4870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.725 [INFO][4870] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.734 [INFO][4870] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" host="localhost" Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.741 [INFO][4870] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.745 [INFO][4870] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.748 [INFO][4870] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.750 [INFO][4870] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.750 [INFO][4870] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" host="localhost" Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.752 [INFO][4870] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.848 [INFO][4870] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" host="localhost" Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.940 [INFO][4870] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" host="localhost" Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.940 [INFO][4870] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" host="localhost" Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.940 [INFO][4870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
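The ipam.go sequence just above is the allocation path in miniature: confirm the host's affinity to block 192.168.88.128/26, load the block, pick a free address from it, then write the block back to claim the IP (.132 here; .131 went to the kube-controllers pod a minute earlier). A toy model of the free-address scan only; Calico's real allocator also records a handle and claims the block with a compare-and-swap write, both visible as separate steps in the log:

    import ipaddress

    class Block:
        """Toy stand-in for a Calico IPAM block: a CIDR plus a used-set."""
        def __init__(self, cidr):
            self.net = ipaddress.ip_network(cidr)
            self.allocated = set()

        def assign(self):
            # Scan for a free host address; the logged "Attempting to
            # assign 1 addresses from block" step is the real version.
            for ip in self.net.hosts():
                if ip not in self.allocated:
                    self.allocated.add(ip)
                    return ip
            raise RuntimeError(f"block {self.net} is exhausted")

    block = Block("192.168.88.128/26")
    print([str(block.assign()) for _ in range(4)])
    # ['192.168.88.129', '192.168.88.130', '192.168.88.131', '192.168.88.132']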
Jul 11 00:23:59.964153 containerd[1582]: 2025-07-11 00:23:59.940 [INFO][4870] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" HandleID="k8s-pod-network.d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:23:59.965340 containerd[1582]: 2025-07-11 00:23:59.944 [INFO][4846] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-t5svc" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0", GenerateName:"calico-apiserver-59dcd5c4d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59dcd5c4d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59dcd5c4d5-t5svc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8aad30be039", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:23:59.965340 containerd[1582]: 2025-07-11 00:23:59.944 [INFO][4846] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-t5svc" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:23:59.965340 containerd[1582]: 2025-07-11 00:23:59.944 [INFO][4846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8aad30be039 ContainerID="d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-t5svc" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:23:59.965340 containerd[1582]: 2025-07-11 00:23:59.946 [INFO][4846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-t5svc" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:23:59.965340 containerd[1582]: 2025-07-11 00:23:59.947 [INFO][4846] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-t5svc" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0", GenerateName:"calico-apiserver-59dcd5c4d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59dcd5c4d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd", Pod:"calico-apiserver-59dcd5c4d5-t5svc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8aad30be039", MAC:"de:e7:d8:e0:f9:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:23:59.965340 containerd[1582]: 2025-07-11 00:23:59.960 [INFO][4846] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-t5svc" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:23:59.987408 containerd[1582]: time="2025-07-11T00:23:59.986897785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:23:59.987408 containerd[1582]: time="2025-07-11T00:23:59.986969884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:23:59.987408 containerd[1582]: time="2025-07-11T00:23:59.986984793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:23:59.987662 containerd[1582]: time="2025-07-11T00:23:59.987432147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:24:00.024553 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:24:00.062535 containerd[1582]: time="2025-07-11T00:24:00.062488424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dcd5c4d5-t5svc,Uid:a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd\"" Jul 11 00:24:01.365654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1354485072.mount: Deactivated successfully. Jul 11 00:24:01.436405 systemd-networkd[1242]: cali8aad30be039: Gained IPv6LL Jul 11 00:24:01.468269 containerd[1582]: time="2025-07-11T00:24:01.468147852Z" level=info msg="StopPodSandbox for \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\"" Jul 11 00:24:01.736981 containerd[1582]: 2025-07-11 00:24:01.667 [INFO][4948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:24:01.736981 containerd[1582]: 2025-07-11 00:24:01.668 [INFO][4948] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" iface="eth0" netns="/var/run/netns/cni-3e13b518-87ff-febf-e875-cb101a635181" Jul 11 00:24:01.736981 containerd[1582]: 2025-07-11 00:24:01.668 [INFO][4948] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" iface="eth0" netns="/var/run/netns/cni-3e13b518-87ff-febf-e875-cb101a635181" Jul 11 00:24:01.736981 containerd[1582]: 2025-07-11 00:24:01.669 [INFO][4948] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" iface="eth0" netns="/var/run/netns/cni-3e13b518-87ff-febf-e875-cb101a635181" Jul 11 00:24:01.736981 containerd[1582]: 2025-07-11 00:24:01.669 [INFO][4948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:24:01.736981 containerd[1582]: 2025-07-11 00:24:01.669 [INFO][4948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:24:01.736981 containerd[1582]: 2025-07-11 00:24:01.697 [INFO][4957] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" HandleID="k8s-pod-network.e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Workload="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:01.736981 containerd[1582]: 2025-07-11 00:24:01.697 [INFO][4957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:01.736981 containerd[1582]: 2025-07-11 00:24:01.697 [INFO][4957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:01.736981 containerd[1582]: 2025-07-11 00:24:01.703 [WARNING][4957] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" HandleID="k8s-pod-network.e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Workload="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:01.736981 containerd[1582]: 2025-07-11 00:24:01.703 [INFO][4957] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" HandleID="k8s-pod-network.e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Workload="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:01.736981 containerd[1582]: 2025-07-11 00:24:01.730 [INFO][4957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:01.736981 containerd[1582]: 2025-07-11 00:24:01.734 [INFO][4948] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:24:01.737437 containerd[1582]: time="2025-07-11T00:24:01.737211733Z" level=info msg="TearDown network for sandbox \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\" successfully" Jul 11 00:24:01.737437 containerd[1582]: time="2025-07-11T00:24:01.737247322Z" level=info msg="StopPodSandbox for \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\" returns successfully" Jul 11 00:24:01.738141 containerd[1582]: time="2025-07-11T00:24:01.738107351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8gdbn,Uid:b0fe6896-2f2f-4a85-81a1-6d288dfe16c3,Namespace:calico-system,Attempt:1,}" Jul 11 00:24:01.741173 systemd[1]: run-netns-cni\x2d3e13b518\x2d87ff\x2dfebf\x2de875\x2dcb101a635181.mount: Deactivated successfully. Jul 11 00:24:02.042998 containerd[1582]: time="2025-07-11T00:24:02.041579064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:02.048787 containerd[1582]: time="2025-07-11T00:24:02.048659196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 11 00:24:02.050532 containerd[1582]: time="2025-07-11T00:24:02.050495905Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:02.053726 containerd[1582]: time="2025-07-11T00:24:02.053691994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:02.054584 containerd[1582]: time="2025-07-11T00:24:02.054538224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.288001807s" Jul 11 00:24:02.054655 containerd[1582]: time="2025-07-11T00:24:02.054586076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 11 00:24:02.056071 containerd[1582]: time="2025-07-11T00:24:02.056044516Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:24:02.058284 containerd[1582]: time="2025-07-11T00:24:02.058250858Z" level=info msg="CreateContainer within sandbox \"eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 00:24:02.085686 containerd[1582]: time="2025-07-11T00:24:02.085515543Z" level=info msg="CreateContainer within sandbox \"eb2d52020254b754243a948c25fc2b54762620353f66a0af9519a5dcc23bafcf\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"635e7f3940aeda428f8197e460d426bda681da089b443fb94e9b40a2b59dff28\"" Jul 11 00:24:02.088763 containerd[1582]: time="2025-07-11T00:24:02.088721821Z" level=info msg="StartContainer for \"635e7f3940aeda428f8197e460d426bda681da089b443fb94e9b40a2b59dff28\"" Jul 11 00:24:02.133832 systemd[1]: run-containerd-runc-k8s.io-635e7f3940aeda428f8197e460d426bda681da089b443fb94e9b40a2b59dff28-runc.gwdDm8.mount: Deactivated successfully. Jul 11 00:24:02.186820 systemd-networkd[1242]: cali4e7eee86f8e: Link UP Jul 11 00:24:02.187686 systemd-networkd[1242]: cali4e7eee86f8e: Gained carrier Jul 11 00:24:02.196740 containerd[1582]: time="2025-07-11T00:24:02.196679901Z" level=info msg="StartContainer for \"635e7f3940aeda428f8197e460d426bda681da089b443fb94e9b40a2b59dff28\" returns successfully" Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.093 [INFO][4968] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8gdbn-eth0 csi-node-driver- calico-system b0fe6896-2f2f-4a85-81a1-6d288dfe16c3 1123 0 2025-07-11 00:23:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8gdbn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4e7eee86f8e [] [] }} ContainerID="64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" Namespace="calico-system" Pod="csi-node-driver-8gdbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gdbn-" Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.093 [INFO][4968] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" Namespace="calico-system" Pod="csi-node-driver-8gdbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.131 [INFO][4989] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" HandleID="k8s-pod-network.64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" Workload="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.132 [INFO][4989] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" HandleID="k8s-pod-network.64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" Workload="localhost-k8s-csi--node--driver--8gdbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c72d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"csi-node-driver-8gdbn", "timestamp":"2025-07-11 00:24:02.131363736 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.132 [INFO][4989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.132 [INFO][4989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.132 [INFO][4989] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.142 [INFO][4989] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" host="localhost" Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.149 [INFO][4989] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.153 [INFO][4989] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.156 [INFO][4989] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.158 [INFO][4989] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.158 [INFO][4989] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" host="localhost" Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.160 [INFO][4989] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0 Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.165 [INFO][4989] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" host="localhost" Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.174 [INFO][4989] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" host="localhost" Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.175 [INFO][4989] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" host="localhost" Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.175 [INFO][4989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:24:02.214084 containerd[1582]: 2025-07-11 00:24:02.175 [INFO][4989] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" HandleID="k8s-pod-network.64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" Workload="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:02.215739 containerd[1582]: 2025-07-11 00:24:02.181 [INFO][4968] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" Namespace="calico-system" Pod="csi-node-driver-8gdbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gdbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8gdbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0fe6896-2f2f-4a85-81a1-6d288dfe16c3", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8gdbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e7eee86f8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:02.215739 containerd[1582]: 2025-07-11 00:24:02.181 [INFO][4968] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" Namespace="calico-system" Pod="csi-node-driver-8gdbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:02.215739 containerd[1582]: 2025-07-11 00:24:02.182 [INFO][4968] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e7eee86f8e ContainerID="64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" Namespace="calico-system" Pod="csi-node-driver-8gdbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:02.215739 containerd[1582]: 2025-07-11 00:24:02.187 [INFO][4968] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" Namespace="calico-system" Pod="csi-node-driver-8gdbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:02.215739 containerd[1582]: 2025-07-11 00:24:02.188 [INFO][4968] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" Namespace="calico-system" Pod="csi-node-driver-8gdbn" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--8gdbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8gdbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0fe6896-2f2f-4a85-81a1-6d288dfe16c3", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0", Pod:"csi-node-driver-8gdbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e7eee86f8e", MAC:"56:79:39:5a:22:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:02.215739 containerd[1582]: 2025-07-11 00:24:02.209 [INFO][4968] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0" Namespace="calico-system" Pod="csi-node-driver-8gdbn" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:02.234338 kubelet[2662]: I0711 00:24:02.233437 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6f4d7498fc-gjn5s" podStartSLOduration=1.7070667880000001 podStartE2EDuration="11.233344869s" podCreationTimestamp="2025-07-11 00:23:51 +0000 UTC" firstStartedPulling="2025-07-11 00:23:52.52952483 +0000 UTC m=+46.212012343" lastFinishedPulling="2025-07-11 00:24:02.055802911 +0000 UTC m=+55.738290424" observedRunningTime="2025-07-11 00:24:02.227919485 +0000 UTC m=+55.910407008" watchObservedRunningTime="2025-07-11 00:24:02.233344869 +0000 UTC m=+55.915832382" Jul 11 00:24:02.258605 containerd[1582]: time="2025-07-11T00:24:02.258410427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:24:02.258843 containerd[1582]: time="2025-07-11T00:24:02.258518455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:24:02.259446 containerd[1582]: time="2025-07-11T00:24:02.259374655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:24:02.259791 containerd[1582]: time="2025-07-11T00:24:02.259703108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:24:02.293047 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:24:02.312237 containerd[1582]: time="2025-07-11T00:24:02.312170024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8gdbn,Uid:b0fe6896-2f2f-4a85-81a1-6d288dfe16c3,Namespace:calico-system,Attempt:1,} returns sandbox id \"64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0\"" Jul 11 00:24:02.468471 containerd[1582]: time="2025-07-11T00:24:02.468302659Z" level=info msg="StopPodSandbox for \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\"" Jul 11 00:24:02.577584 containerd[1582]: 2025-07-11 00:24:02.531 [INFO][5091] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:24:02.577584 containerd[1582]: 2025-07-11 00:24:02.531 [INFO][5091] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" iface="eth0" netns="/var/run/netns/cni-af854ca5-00fa-11c3-5f63-abf88c59869b" Jul 11 00:24:02.577584 containerd[1582]: 2025-07-11 00:24:02.532 [INFO][5091] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" iface="eth0" netns="/var/run/netns/cni-af854ca5-00fa-11c3-5f63-abf88c59869b" Jul 11 00:24:02.577584 containerd[1582]: 2025-07-11 00:24:02.532 [INFO][5091] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" iface="eth0" netns="/var/run/netns/cni-af854ca5-00fa-11c3-5f63-abf88c59869b" Jul 11 00:24:02.577584 containerd[1582]: 2025-07-11 00:24:02.532 [INFO][5091] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:24:02.577584 containerd[1582]: 2025-07-11 00:24:02.532 [INFO][5091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:24:02.577584 containerd[1582]: 2025-07-11 00:24:02.558 [INFO][5100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" HandleID="k8s-pod-network.67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Workload="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:02.577584 containerd[1582]: 2025-07-11 00:24:02.558 [INFO][5100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:02.577584 containerd[1582]: 2025-07-11 00:24:02.558 [INFO][5100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:02.577584 containerd[1582]: 2025-07-11 00:24:02.566 [WARNING][5100] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" HandleID="k8s-pod-network.67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Workload="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:02.577584 containerd[1582]: 2025-07-11 00:24:02.566 [INFO][5100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" HandleID="k8s-pod-network.67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Workload="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:02.577584 containerd[1582]: 2025-07-11 00:24:02.568 [INFO][5100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:02.577584 containerd[1582]: 2025-07-11 00:24:02.574 [INFO][5091] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:24:02.578232 containerd[1582]: time="2025-07-11T00:24:02.577626611Z" level=info msg="TearDown network for sandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\" successfully" Jul 11 00:24:02.578232 containerd[1582]: time="2025-07-11T00:24:02.577660066Z" level=info msg="StopPodSandbox for \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\" returns successfully" Jul 11 00:24:02.578566 containerd[1582]: time="2025-07-11T00:24:02.578537807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-vvl26,Uid:00f725d3-9859-45cd-81ce-7316e621f780,Namespace:calico-system,Attempt:1,}" Jul 11 00:24:02.698483 systemd-networkd[1242]: cali502c1da5319: Link UP Jul 11 00:24:02.698747 systemd-networkd[1242]: cali502c1da5319: Gained carrier Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.629 [INFO][5108] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--vvl26-eth0 goldmane-58fd7646b9- calico-system 00f725d3-9859-45cd-81ce-7316e621f780 1139 0 2025-07-11 00:23:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-vvl26 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali502c1da5319 [] [] }} ContainerID="9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" Namespace="calico-system" Pod="goldmane-58fd7646b9-vvl26" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vvl26-" Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.629 [INFO][5108] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" Namespace="calico-system" Pod="goldmane-58fd7646b9-vvl26" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.656 [INFO][5123] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" HandleID="k8s-pod-network.9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" Workload="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.656 [INFO][5123] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" HandleID="k8s-pod-network.9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" Workload="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-vvl26", "timestamp":"2025-07-11 00:24:02.656092675 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.656 [INFO][5123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.656 [INFO][5123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.656 [INFO][5123] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.664 [INFO][5123] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" host="localhost" Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.670 [INFO][5123] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.675 [INFO][5123] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.677 [INFO][5123] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.679 [INFO][5123] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.680 [INFO][5123] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" host="localhost" Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.681 [INFO][5123] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.685 [INFO][5123] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" host="localhost" Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.692 [INFO][5123] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" host="localhost" Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.693 [INFO][5123] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" host="localhost" Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.693 [INFO][5123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:24:02.865084 containerd[1582]: 2025-07-11 00:24:02.693 [INFO][5123] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" HandleID="k8s-pod-network.9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" Workload="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:02.866186 containerd[1582]: 2025-07-11 00:24:02.696 [INFO][5108] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" Namespace="calico-system" Pod="goldmane-58fd7646b9-vvl26" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--vvl26-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"00f725d3-9859-45cd-81ce-7316e621f780", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-vvl26", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali502c1da5319", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:02.866186 containerd[1582]: 2025-07-11 00:24:02.696 [INFO][5108] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" Namespace="calico-system" Pod="goldmane-58fd7646b9-vvl26" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:02.866186 containerd[1582]: 2025-07-11 00:24:02.696 [INFO][5108] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali502c1da5319 ContainerID="9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" Namespace="calico-system" Pod="goldmane-58fd7646b9-vvl26" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:02.866186 containerd[1582]: 2025-07-11 00:24:02.698 [INFO][5108] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" Namespace="calico-system" Pod="goldmane-58fd7646b9-vvl26" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:02.866186 containerd[1582]: 2025-07-11 00:24:02.699 [INFO][5108] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" Namespace="calico-system" Pod="goldmane-58fd7646b9-vvl26" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--vvl26-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"00f725d3-9859-45cd-81ce-7316e621f780", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f", Pod:"goldmane-58fd7646b9-vvl26", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali502c1da5319", MAC:"36:ee:8d:6e:d6:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:02.866186 containerd[1582]: 2025-07-11 00:24:02.861 [INFO][5108] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f" Namespace="calico-system" Pod="goldmane-58fd7646b9-vvl26" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:02.925225 containerd[1582]: time="2025-07-11T00:24:02.925065907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:24:02.925225 containerd[1582]: time="2025-07-11T00:24:02.925138718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:24:02.925225 containerd[1582]: time="2025-07-11T00:24:02.925151974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:24:02.925464 containerd[1582]: time="2025-07-11T00:24:02.925306331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:24:02.965288 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:24:02.997829 containerd[1582]: time="2025-07-11T00:24:02.997703319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-vvl26,Uid:00f725d3-9859-45cd-81ce-7316e621f780,Namespace:calico-system,Attempt:1,} returns sandbox id \"9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f\"" Jul 11 00:24:03.076321 systemd[1]: run-netns-cni\x2daf854ca5\x2d00fa\x2d11c3\x2d5f63\x2dabf88c59869b.mount: Deactivated successfully. 
Jul 11 00:24:03.468021 containerd[1582]: time="2025-07-11T00:24:03.467833723Z" level=info msg="StopPodSandbox for \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\"" Jul 11 00:24:03.592287 containerd[1582]: 2025-07-11 00:24:03.523 [INFO][5198] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:24:03.592287 containerd[1582]: 2025-07-11 00:24:03.524 [INFO][5198] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" iface="eth0" netns="/var/run/netns/cni-b7c69e95-dfc8-738f-023f-f706ae68611b" Jul 11 00:24:03.592287 containerd[1582]: 2025-07-11 00:24:03.524 [INFO][5198] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" iface="eth0" netns="/var/run/netns/cni-b7c69e95-dfc8-738f-023f-f706ae68611b" Jul 11 00:24:03.592287 containerd[1582]: 2025-07-11 00:24:03.524 [INFO][5198] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" iface="eth0" netns="/var/run/netns/cni-b7c69e95-dfc8-738f-023f-f706ae68611b" Jul 11 00:24:03.592287 containerd[1582]: 2025-07-11 00:24:03.524 [INFO][5198] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:24:03.592287 containerd[1582]: 2025-07-11 00:24:03.524 [INFO][5198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:24:03.592287 containerd[1582]: 2025-07-11 00:24:03.558 [INFO][5206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" HandleID="k8s-pod-network.4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Workload="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:03.592287 containerd[1582]: 2025-07-11 00:24:03.558 [INFO][5206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:03.592287 containerd[1582]: 2025-07-11 00:24:03.558 [INFO][5206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:03.592287 containerd[1582]: 2025-07-11 00:24:03.566 [WARNING][5206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" HandleID="k8s-pod-network.4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Workload="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:03.592287 containerd[1582]: 2025-07-11 00:24:03.566 [INFO][5206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" HandleID="k8s-pod-network.4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Workload="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:03.592287 containerd[1582]: 2025-07-11 00:24:03.569 [INFO][5206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:03.592287 containerd[1582]: 2025-07-11 00:24:03.574 [INFO][5198] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:24:03.592287 containerd[1582]: time="2025-07-11T00:24:03.590875152Z" level=info msg="TearDown network for sandbox \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\" successfully" Jul 11 00:24:03.592287 containerd[1582]: time="2025-07-11T00:24:03.591037596Z" level=info msg="StopPodSandbox for \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\" returns successfully" Jul 11 00:24:03.594750 kubelet[2662]: E0711 00:24:03.593744 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:03.598146 systemd[1]: run-netns-cni\x2db7c69e95\x2ddfc8\x2d738f\x2d023f\x2df706ae68611b.mount: Deactivated successfully. Jul 11 00:24:03.600581 containerd[1582]: time="2025-07-11T00:24:03.599472938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f94tf,Uid:415c8f48-d718-4200-be2e-9918b83dc600,Namespace:kube-system,Attempt:1,}" Jul 11 00:24:03.974781 systemd-networkd[1242]: calidf7bc7637b1: Link UP Jul 11 00:24:03.975069 systemd-networkd[1242]: calidf7bc7637b1: Gained carrier Jul 11 00:24:04.188419 systemd-networkd[1242]: cali4e7eee86f8e: Gained IPv6LL Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.792 [INFO][5213] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0 coredns-7c65d6cfc9- kube-system 415c8f48-d718-4200-be2e-9918b83dc600 1145 0 2025-07-11 00:23:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-f94tf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidf7bc7637b1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f94tf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f94tf-" Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.792 [INFO][5213] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f94tf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.824 [INFO][5227] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" HandleID="k8s-pod-network.08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" Workload="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.824 [INFO][5227] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" HandleID="k8s-pod-network.08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" Workload="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b7620), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-f94tf", "timestamp":"2025-07-11 00:24:03.824399 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.824 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.824 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.824 [INFO][5227] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.831 [INFO][5227] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" host="localhost" Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.837 [INFO][5227] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.841 [INFO][5227] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.843 [INFO][5227] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.844 [INFO][5227] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.844 [INFO][5227] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" host="localhost" Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.846 [INFO][5227] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29 Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.890 [INFO][5227] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" host="localhost" Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.967 [INFO][5227] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" host="localhost" Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.967 [INFO][5227] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" host="localhost" Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.967 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:24:04.271483 containerd[1582]: 2025-07-11 00:24:03.967 [INFO][5227] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" HandleID="k8s-pod-network.08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" Workload="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:04.272349 containerd[1582]: 2025-07-11 00:24:03.971 [INFO][5213] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f94tf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"415c8f48-d718-4200-be2e-9918b83dc600", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-f94tf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf7bc7637b1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:04.272349 containerd[1582]: 2025-07-11 00:24:03.971 [INFO][5213] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f94tf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:04.272349 containerd[1582]: 2025-07-11 00:24:03.971 [INFO][5213] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf7bc7637b1 ContainerID="08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f94tf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:04.272349 containerd[1582]: 2025-07-11 00:24:03.974 [INFO][5213] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f94tf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:04.272349 
containerd[1582]: 2025-07-11 00:24:03.975 [INFO][5213] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f94tf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"415c8f48-d718-4200-be2e-9918b83dc600", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29", Pod:"coredns-7c65d6cfc9-f94tf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf7bc7637b1", MAC:"fe:cb:35:ef:c1:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:04.272349 containerd[1582]: 2025-07-11 00:24:04.266 [INFO][5213] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f94tf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:04.453835 containerd[1582]: time="2025-07-11T00:24:04.453123103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:24:04.453835 containerd[1582]: time="2025-07-11T00:24:04.453240660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:24:04.453835 containerd[1582]: time="2025-07-11T00:24:04.453254435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:24:04.453835 containerd[1582]: time="2025-07-11T00:24:04.453406398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:24:04.468745 containerd[1582]: time="2025-07-11T00:24:04.468699343Z" level=info msg="StopPodSandbox for \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\"" Jul 11 00:24:04.516629 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:24:04.560725 containerd[1582]: time="2025-07-11T00:24:04.560394041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f94tf,Uid:415c8f48-d718-4200-be2e-9918b83dc600,Namespace:kube-system,Attempt:1,} returns sandbox id \"08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29\"" Jul 11 00:24:04.562846 kubelet[2662]: E0711 00:24:04.562027 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:04.567745 containerd[1582]: time="2025-07-11T00:24:04.567693315Z" level=info msg="CreateContainer within sandbox \"08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:24:04.606124 containerd[1582]: time="2025-07-11T00:24:04.605997925Z" level=info msg="CreateContainer within sandbox \"08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8e665c159387b62d645803173b3594c2428e2a6e822827b3ae36e5e66414d1f\"" Jul 11 00:24:04.609229 containerd[1582]: 2025-07-11 00:24:04.556 [INFO][5278] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:24:04.609229 containerd[1582]: 2025-07-11 00:24:04.556 [INFO][5278] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" iface="eth0" netns="/var/run/netns/cni-b0f02f27-4225-8653-e812-ee5508131e8b" Jul 11 00:24:04.609229 containerd[1582]: 2025-07-11 00:24:04.557 [INFO][5278] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" iface="eth0" netns="/var/run/netns/cni-b0f02f27-4225-8653-e812-ee5508131e8b" Jul 11 00:24:04.609229 containerd[1582]: 2025-07-11 00:24:04.557 [INFO][5278] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" iface="eth0" netns="/var/run/netns/cni-b0f02f27-4225-8653-e812-ee5508131e8b" Jul 11 00:24:04.609229 containerd[1582]: 2025-07-11 00:24:04.557 [INFO][5278] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:24:04.609229 containerd[1582]: 2025-07-11 00:24:04.557 [INFO][5278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:24:04.609229 containerd[1582]: 2025-07-11 00:24:04.588 [INFO][5310] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" HandleID="k8s-pod-network.57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:04.609229 containerd[1582]: 2025-07-11 00:24:04.589 [INFO][5310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:04.609229 containerd[1582]: 2025-07-11 00:24:04.589 [INFO][5310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:04.609229 containerd[1582]: 2025-07-11 00:24:04.595 [WARNING][5310] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" HandleID="k8s-pod-network.57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:04.609229 containerd[1582]: 2025-07-11 00:24:04.595 [INFO][5310] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" HandleID="k8s-pod-network.57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:04.609229 containerd[1582]: 2025-07-11 00:24:04.598 [INFO][5310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:04.609229 containerd[1582]: 2025-07-11 00:24:04.601 [INFO][5278] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:24:04.609229 containerd[1582]: time="2025-07-11T00:24:04.607410994Z" level=info msg="TearDown network for sandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\" successfully" Jul 11 00:24:04.609229 containerd[1582]: time="2025-07-11T00:24:04.607458706Z" level=info msg="StopPodSandbox for \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\" returns successfully" Jul 11 00:24:04.609229 containerd[1582]: time="2025-07-11T00:24:04.608275057Z" level=info msg="StartContainer for \"a8e665c159387b62d645803173b3594c2428e2a6e822827b3ae36e5e66414d1f\"" Jul 11 00:24:04.609229 containerd[1582]: time="2025-07-11T00:24:04.608886803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dcd5c4d5-jfmh4,Uid:b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:24:04.612433 systemd[1]: run-netns-cni\x2db0f02f27\x2d4225\x2d8653\x2de812\x2dee5508131e8b.mount: Deactivated successfully. Jul 11 00:24:04.637360 systemd-networkd[1242]: cali502c1da5319: Gained IPv6LL Jul 11 00:24:04.771545 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:60450.service - OpenSSH per-connection server daemon (10.0.0.1:60450). 
Jul 11 00:24:04.830591 containerd[1582]: time="2025-07-11T00:24:04.830452091Z" level=info msg="StartContainer for \"a8e665c159387b62d645803173b3594c2428e2a6e822827b3ae36e5e66414d1f\" returns successfully" Jul 11 00:24:04.851725 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 60450 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:04.854474 sshd[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:04.863120 systemd-logind[1554]: New session 11 of user core. Jul 11 00:24:04.867994 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 00:24:05.148486 systemd-networkd[1242]: calidf7bc7637b1: Gained IPv6LL Jul 11 00:24:05.186569 sshd[5373]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:05.222686 systemd-logind[1554]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:24:05.223287 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:60450.service: Deactivated successfully. Jul 11 00:24:05.229077 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:24:05.229492 kubelet[2662]: E0711 00:24:05.229461 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:05.230823 systemd-logind[1554]: Removed session 11. Jul 11 00:24:05.928694 kubelet[2662]: I0711 00:24:05.925462 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-f94tf" podStartSLOduration=54.925438838 podStartE2EDuration="54.925438838s" podCreationTimestamp="2025-07-11 00:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:24:05.732480829 +0000 UTC m=+59.414968342" watchObservedRunningTime="2025-07-11 00:24:05.925438838 +0000 UTC m=+59.607926351" Jul 11 00:24:05.959786 systemd-networkd[1242]: cali2674893d670: Link UP Jul 11 00:24:05.962229 systemd-networkd[1242]: cali2674893d670: Gained carrier Jul 11 00:24:05.976137 containerd[1582]: time="2025-07-11T00:24:05.976078044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:05.977596 containerd[1582]: time="2025-07-11T00:24:05.977543963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 11 00:24:05.983261 containerd[1582]: time="2025-07-11T00:24:05.981614189Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.026 [INFO][5333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0 calico-apiserver-59dcd5c4d5- calico-apiserver b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad 1157 0 2025-07-11 00:23:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59dcd5c4d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59dcd5c4d5-jfmh4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2674893d670 [] [] }} 
ContainerID="8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-jfmh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-" Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.027 [INFO][5333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-jfmh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.110 [INFO][5388] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" HandleID="k8s-pod-network.8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.110 [INFO][5388] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" HandleID="k8s-pod-network.8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b0790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59dcd5c4d5-jfmh4", "timestamp":"2025-07-11 00:24:05.110173882 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.110 [INFO][5388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.110 [INFO][5388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.110 [INFO][5388] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.118 [INFO][5388] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" host="localhost" Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.122 [INFO][5388] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.317 [INFO][5388] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.831 [INFO][5388] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.913 [INFO][5388] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.913 [INFO][5388] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" host="localhost" Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.925 [INFO][5388] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.940 [INFO][5388] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" host="localhost" Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.950 [INFO][5388] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" host="localhost" Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.950 [INFO][5388] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" host="localhost" Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.950 [INFO][5388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:24:05.985987 containerd[1582]: 2025-07-11 00:24:05.950 [INFO][5388] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" HandleID="k8s-pod-network.8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:05.986770 containerd[1582]: 2025-07-11 00:24:05.955 [INFO][5333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-jfmh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0", GenerateName:"calico-apiserver-59dcd5c4d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59dcd5c4d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59dcd5c4d5-jfmh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2674893d670", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:05.986770 containerd[1582]: 2025-07-11 00:24:05.955 [INFO][5333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-jfmh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:05.986770 containerd[1582]: 2025-07-11 00:24:05.955 [INFO][5333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2674893d670 ContainerID="8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-jfmh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:05.986770 containerd[1582]: 2025-07-11 00:24:05.963 [INFO][5333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-jfmh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:05.986770 containerd[1582]: 2025-07-11 00:24:05.963 [INFO][5333] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-jfmh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0", GenerateName:"calico-apiserver-59dcd5c4d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59dcd5c4d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e", Pod:"calico-apiserver-59dcd5c4d5-jfmh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2674893d670", MAC:"66:94:22:c3:03:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:05.986770 containerd[1582]: 2025-07-11 00:24:05.979 [INFO][5333] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e" Namespace="calico-apiserver" Pod="calico-apiserver-59dcd5c4d5-jfmh4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:06.068421 containerd[1582]: time="2025-07-11T00:24:06.068356316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:06.069322 containerd[1582]: time="2025-07-11T00:24:06.069275602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.013105854s" Jul 11 00:24:06.069322 containerd[1582]: time="2025-07-11T00:24:06.069315168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:24:06.070351 containerd[1582]: time="2025-07-11T00:24:06.070323675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 00:24:06.071651 containerd[1582]: time="2025-07-11T00:24:06.071619525Z" level=info msg="CreateContainer within sandbox \"d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:24:06.181514 containerd[1582]: time="2025-07-11T00:24:06.181292237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:24:06.181514 containerd[1582]: time="2025-07-11T00:24:06.181375217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:24:06.181514 containerd[1582]: time="2025-07-11T00:24:06.181387841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:24:06.181786 containerd[1582]: time="2025-07-11T00:24:06.181498914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:24:06.211475 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:24:06.231703 kubelet[2662]: E0711 00:24:06.231659 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:06.241805 containerd[1582]: time="2025-07-11T00:24:06.241771926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dcd5c4d5-jfmh4,Uid:b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e\"" Jul 11 00:24:06.244357 containerd[1582]: time="2025-07-11T00:24:06.244326114Z" level=info msg="CreateContainer within sandbox \"8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:24:06.456004 containerd[1582]: time="2025-07-11T00:24:06.455954369Z" level=info msg="StopPodSandbox for \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\"" Jul 11 00:24:06.540633 containerd[1582]: 2025-07-11 00:24:06.505 [WARNING][5469] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8gdbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0fe6896-2f2f-4a85-81a1-6d288dfe16c3", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0", Pod:"csi-node-driver-8gdbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e7eee86f8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:06.540633 containerd[1582]: 2025-07-11 00:24:06.505 [INFO][5469] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:24:06.540633 containerd[1582]: 2025-07-11 00:24:06.505 [INFO][5469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" iface="eth0" netns="" Jul 11 00:24:06.540633 containerd[1582]: 2025-07-11 00:24:06.505 [INFO][5469] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:24:06.540633 containerd[1582]: 2025-07-11 00:24:06.505 [INFO][5469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:24:06.540633 containerd[1582]: 2025-07-11 00:24:06.527 [INFO][5479] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" HandleID="k8s-pod-network.e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Workload="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:06.540633 containerd[1582]: 2025-07-11 00:24:06.527 [INFO][5479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:06.540633 containerd[1582]: 2025-07-11 00:24:06.527 [INFO][5479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:06.540633 containerd[1582]: 2025-07-11 00:24:06.533 [WARNING][5479] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" HandleID="k8s-pod-network.e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Workload="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:06.540633 containerd[1582]: 2025-07-11 00:24:06.533 [INFO][5479] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" HandleID="k8s-pod-network.e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Workload="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:06.540633 containerd[1582]: 2025-07-11 00:24:06.534 [INFO][5479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:06.540633 containerd[1582]: 2025-07-11 00:24:06.537 [INFO][5469] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:24:06.541236 containerd[1582]: time="2025-07-11T00:24:06.540676910Z" level=info msg="TearDown network for sandbox \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\" successfully" Jul 11 00:24:06.541236 containerd[1582]: time="2025-07-11T00:24:06.540712438Z" level=info msg="StopPodSandbox for \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\" returns successfully" Jul 11 00:24:06.541455 containerd[1582]: time="2025-07-11T00:24:06.541424968Z" level=info msg="RemovePodSandbox for \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\"" Jul 11 00:24:06.543995 containerd[1582]: time="2025-07-11T00:24:06.543951062Z" level=info msg="Forcibly stopping sandbox \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\"" Jul 11 00:24:06.667474 containerd[1582]: 2025-07-11 00:24:06.579 [WARNING][5497] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8gdbn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b0fe6896-2f2f-4a85-81a1-6d288dfe16c3", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0", Pod:"csi-node-driver-8gdbn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e7eee86f8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:06.667474 containerd[1582]: 2025-07-11 00:24:06.579 [INFO][5497] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:24:06.667474 containerd[1582]: 2025-07-11 00:24:06.579 [INFO][5497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" iface="eth0" netns="" Jul 11 00:24:06.667474 containerd[1582]: 2025-07-11 00:24:06.579 [INFO][5497] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:24:06.667474 containerd[1582]: 2025-07-11 00:24:06.579 [INFO][5497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:24:06.667474 containerd[1582]: 2025-07-11 00:24:06.604 [INFO][5507] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" HandleID="k8s-pod-network.e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Workload="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:06.667474 containerd[1582]: 2025-07-11 00:24:06.605 [INFO][5507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:06.667474 containerd[1582]: 2025-07-11 00:24:06.605 [INFO][5507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:06.667474 containerd[1582]: 2025-07-11 00:24:06.658 [WARNING][5507] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" HandleID="k8s-pod-network.e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Workload="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:06.667474 containerd[1582]: 2025-07-11 00:24:06.658 [INFO][5507] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" HandleID="k8s-pod-network.e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Workload="localhost-k8s-csi--node--driver--8gdbn-eth0" Jul 11 00:24:06.667474 containerd[1582]: 2025-07-11 00:24:06.660 [INFO][5507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:06.667474 containerd[1582]: 2025-07-11 00:24:06.663 [INFO][5497] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6" Jul 11 00:24:06.668055 containerd[1582]: time="2025-07-11T00:24:06.667511518Z" level=info msg="TearDown network for sandbox \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\" successfully" Jul 11 00:24:06.737134 systemd[1]: run-containerd-runc-k8s.io-88cbb3057733338f3e56435dbadf1dff1d509725691371aaa2260bec823429ec-runc.Vq8yYF.mount: Deactivated successfully. Jul 11 00:24:06.856117 containerd[1582]: time="2025-07-11T00:24:06.855934425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:24:06.856117 containerd[1582]: time="2025-07-11T00:24:06.856073251Z" level=info msg="RemovePodSandbox \"e6fb74b3ba3887de34d8e27cc4aef3f7bdfb6020aaf5a9722b028bf9fd247fa6\" returns successfully" Jul 11 00:24:06.856748 containerd[1582]: time="2025-07-11T00:24:06.856718100Z" level=info msg="StopPodSandbox for \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\"" Jul 11 00:24:06.862824 containerd[1582]: time="2025-07-11T00:24:06.862781120Z" level=info msg="CreateContainer within sandbox \"8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"67557a922d519900e85fc6d536e8378d5fec5f3e3ca5c23fe6092b586553d53b\"" Jul 11 00:24:06.863498 containerd[1582]: time="2025-07-11T00:24:06.863475034Z" level=info msg="StartContainer for \"67557a922d519900e85fc6d536e8378d5fec5f3e3ca5c23fe6092b586553d53b\"" Jul 11 00:24:06.913235 containerd[1582]: time="2025-07-11T00:24:06.913045762Z" level=info msg="CreateContainer within sandbox \"d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0623078c25fc363f16390fdf23a212202889af6145725f48232fec3dcde9d35f\"" Jul 11 00:24:06.915589 containerd[1582]: time="2025-07-11T00:24:06.914136497Z" level=info msg="StartContainer for \"0623078c25fc363f16390fdf23a212202889af6145725f48232fec3dcde9d35f\"" Jul 11 00:24:06.959724 containerd[1582]: 2025-07-11 00:24:06.900 [WARNING][5544] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"415c8f48-d718-4200-be2e-9918b83dc600", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29", Pod:"coredns-7c65d6cfc9-f94tf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf7bc7637b1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:06.959724 containerd[1582]: 2025-07-11 00:24:06.900 [INFO][5544] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:24:06.959724 containerd[1582]: 2025-07-11 00:24:06.900 [INFO][5544] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" iface="eth0" netns="" Jul 11 00:24:06.959724 containerd[1582]: 2025-07-11 00:24:06.900 [INFO][5544] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:24:06.959724 containerd[1582]: 2025-07-11 00:24:06.900 [INFO][5544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:24:06.959724 containerd[1582]: 2025-07-11 00:24:06.933 [INFO][5568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" HandleID="k8s-pod-network.4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Workload="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:06.959724 containerd[1582]: 2025-07-11 00:24:06.933 [INFO][5568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:06.959724 containerd[1582]: 2025-07-11 00:24:06.933 [INFO][5568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:24:06.959724 containerd[1582]: 2025-07-11 00:24:06.944 [WARNING][5568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" HandleID="k8s-pod-network.4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Workload="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:06.959724 containerd[1582]: 2025-07-11 00:24:06.945 [INFO][5568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" HandleID="k8s-pod-network.4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Workload="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:06.959724 containerd[1582]: 2025-07-11 00:24:06.947 [INFO][5568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:06.959724 containerd[1582]: 2025-07-11 00:24:06.952 [INFO][5544] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:24:06.960358 containerd[1582]: time="2025-07-11T00:24:06.959835266Z" level=info msg="TearDown network for sandbox \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\" successfully" Jul 11 00:24:06.961042 containerd[1582]: time="2025-07-11T00:24:06.960001214Z" level=info msg="StopPodSandbox for \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\" returns successfully" Jul 11 00:24:06.961696 containerd[1582]: time="2025-07-11T00:24:06.961624714Z" level=info msg="RemovePodSandbox for \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\"" Jul 11 00:24:06.961696 containerd[1582]: time="2025-07-11T00:24:06.961689969Z" level=info msg="Forcibly stopping sandbox \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\"" Jul 11 00:24:07.000677 containerd[1582]: time="2025-07-11T00:24:07.000523026Z" level=info msg="StartContainer for \"67557a922d519900e85fc6d536e8378d5fec5f3e3ca5c23fe6092b586553d53b\" returns successfully" Jul 11 00:24:07.024315 containerd[1582]: time="2025-07-11T00:24:07.024249055Z" level=info msg="StartContainer for \"0623078c25fc363f16390fdf23a212202889af6145725f48232fec3dcde9d35f\" returns successfully" Jul 11 00:24:07.072234 containerd[1582]: 2025-07-11 00:24:07.018 [WARNING][5623] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"415c8f48-d718-4200-be2e-9918b83dc600", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08d98f2d6be12ae5a2847971247f622818f8f279eb0f760f4e9a5690c94ccb29", Pod:"coredns-7c65d6cfc9-f94tf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf7bc7637b1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:07.072234 containerd[1582]: 2025-07-11 00:24:07.019 [INFO][5623] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:24:07.072234 containerd[1582]: 2025-07-11 00:24:07.019 [INFO][5623] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" iface="eth0" netns="" Jul 11 00:24:07.072234 containerd[1582]: 2025-07-11 00:24:07.019 [INFO][5623] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:24:07.072234 containerd[1582]: 2025-07-11 00:24:07.019 [INFO][5623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:24:07.072234 containerd[1582]: 2025-07-11 00:24:07.052 [INFO][5648] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" HandleID="k8s-pod-network.4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Workload="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:07.072234 containerd[1582]: 2025-07-11 00:24:07.052 [INFO][5648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:07.072234 containerd[1582]: 2025-07-11 00:24:07.052 [INFO][5648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:24:07.072234 containerd[1582]: 2025-07-11 00:24:07.060 [WARNING][5648] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" HandleID="k8s-pod-network.4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Workload="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:07.072234 containerd[1582]: 2025-07-11 00:24:07.060 [INFO][5648] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" HandleID="k8s-pod-network.4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Workload="localhost-k8s-coredns--7c65d6cfc9--f94tf-eth0" Jul 11 00:24:07.072234 containerd[1582]: 2025-07-11 00:24:07.062 [INFO][5648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:07.072234 containerd[1582]: 2025-07-11 00:24:07.066 [INFO][5623] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee" Jul 11 00:24:07.072234 containerd[1582]: time="2025-07-11T00:24:07.069899975Z" level=info msg="TearDown network for sandbox \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\" successfully" Jul 11 00:24:07.080285 containerd[1582]: time="2025-07-11T00:24:07.080214371Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:24:07.080457 containerd[1582]: time="2025-07-11T00:24:07.080348488Z" level=info msg="RemovePodSandbox \"4b86339638912449872e207bc5daceae62ba6ae740d86002e6df70ca13cc11ee\" returns successfully" Jul 11 00:24:07.082913 containerd[1582]: time="2025-07-11T00:24:07.082486163Z" level=info msg="StopPodSandbox for \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\"" Jul 11 00:24:07.198342 containerd[1582]: 2025-07-11 00:24:07.130 [WARNING][5671] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0", GenerateName:"calico-apiserver-59dcd5c4d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59dcd5c4d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e", Pod:"calico-apiserver-59dcd5c4d5-jfmh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2674893d670", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:07.198342 containerd[1582]: 2025-07-11 00:24:07.130 [INFO][5671] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:24:07.198342 containerd[1582]: 2025-07-11 00:24:07.130 [INFO][5671] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" iface="eth0" netns="" Jul 11 00:24:07.198342 containerd[1582]: 2025-07-11 00:24:07.130 [INFO][5671] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:24:07.198342 containerd[1582]: 2025-07-11 00:24:07.130 [INFO][5671] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:24:07.198342 containerd[1582]: 2025-07-11 00:24:07.165 [INFO][5683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" HandleID="k8s-pod-network.57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:07.198342 containerd[1582]: 2025-07-11 00:24:07.166 [INFO][5683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:07.198342 containerd[1582]: 2025-07-11 00:24:07.166 [INFO][5683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:07.198342 containerd[1582]: 2025-07-11 00:24:07.176 [WARNING][5683] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" HandleID="k8s-pod-network.57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:07.198342 containerd[1582]: 2025-07-11 00:24:07.176 [INFO][5683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" HandleID="k8s-pod-network.57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:07.198342 containerd[1582]: 2025-07-11 00:24:07.185 [INFO][5683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:07.198342 containerd[1582]: 2025-07-11 00:24:07.193 [INFO][5671] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:24:07.199216 containerd[1582]: time="2025-07-11T00:24:07.198587693Z" level=info msg="TearDown network for sandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\" successfully" Jul 11 00:24:07.199216 containerd[1582]: time="2025-07-11T00:24:07.198623371Z" level=info msg="StopPodSandbox for \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\" returns successfully" Jul 11 00:24:07.200323 containerd[1582]: time="2025-07-11T00:24:07.199858493Z" level=info msg="RemovePodSandbox for \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\"" Jul 11 00:24:07.200323 containerd[1582]: time="2025-07-11T00:24:07.199902327Z" level=info msg="Forcibly stopping sandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\"" Jul 11 00:24:07.239516 kubelet[2662]: E0711 00:24:07.239462 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:07.359435 kubelet[2662]: I0711 00:24:07.359232 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59dcd5c4d5-t5svc" podStartSLOduration=40.353131468 podStartE2EDuration="46.359185034s" podCreationTimestamp="2025-07-11 00:23:21 +0000 UTC" firstStartedPulling="2025-07-11 00:24:00.064099613 +0000 UTC m=+53.746587126" lastFinishedPulling="2025-07-11 00:24:06.070153169 +0000 UTC m=+59.752640692" observedRunningTime="2025-07-11 00:24:07.35869352 +0000 UTC m=+61.041181063" watchObservedRunningTime="2025-07-11 00:24:07.359185034 +0000 UTC m=+61.041672547" Jul 11 00:24:07.370425 containerd[1582]: 2025-07-11 00:24:07.309 [WARNING][5702] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0", GenerateName:"calico-apiserver-59dcd5c4d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7a1c20e-8146-4d52-a1e3-bd7a4c0025ad", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59dcd5c4d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d56bf08ee14c58d6c4191a3ca6b805845138ba3c6ac4a50bb2b4742c8b60c3e", Pod:"calico-apiserver-59dcd5c4d5-jfmh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2674893d670", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:07.370425 containerd[1582]: 2025-07-11 00:24:07.310 [INFO][5702] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:24:07.370425 containerd[1582]: 2025-07-11 00:24:07.310 [INFO][5702] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" iface="eth0" netns="" Jul 11 00:24:07.370425 containerd[1582]: 2025-07-11 00:24:07.310 [INFO][5702] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:24:07.370425 containerd[1582]: 2025-07-11 00:24:07.310 [INFO][5702] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:24:07.370425 containerd[1582]: 2025-07-11 00:24:07.338 [INFO][5711] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" HandleID="k8s-pod-network.57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:07.370425 containerd[1582]: 2025-07-11 00:24:07.338 [INFO][5711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:07.370425 containerd[1582]: 2025-07-11 00:24:07.338 [INFO][5711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:07.370425 containerd[1582]: 2025-07-11 00:24:07.356 [WARNING][5711] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" HandleID="k8s-pod-network.57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:07.370425 containerd[1582]: 2025-07-11 00:24:07.356 [INFO][5711] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" HandleID="k8s-pod-network.57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--jfmh4-eth0" Jul 11 00:24:07.370425 containerd[1582]: 2025-07-11 00:24:07.361 [INFO][5711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:07.370425 containerd[1582]: 2025-07-11 00:24:07.366 [INFO][5702] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9" Jul 11 00:24:07.371040 containerd[1582]: time="2025-07-11T00:24:07.370475533Z" level=info msg="TearDown network for sandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\" successfully" Jul 11 00:24:07.388467 systemd-networkd[1242]: cali2674893d670: Gained IPv6LL Jul 11 00:24:07.717731 kubelet[2662]: I0711 00:24:07.717647 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59dcd5c4d5-jfmh4" podStartSLOduration=46.717627359 podStartE2EDuration="46.717627359s" podCreationTimestamp="2025-07-11 00:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:24:07.71760698 +0000 UTC m=+61.400094493" watchObservedRunningTime="2025-07-11 00:24:07.717627359 +0000 UTC m=+61.400114872" Jul 11 00:24:07.854531 containerd[1582]: time="2025-07-11T00:24:07.854454636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:24:07.854714 containerd[1582]: time="2025-07-11T00:24:07.854576050Z" level=info msg="RemovePodSandbox \"57c62bc295d998b47924e9bf07f3f46899f5571a178cced5204b8ab12a2289d9\" returns successfully" Jul 11 00:24:07.864278 containerd[1582]: time="2025-07-11T00:24:07.855287616Z" level=info msg="StopPodSandbox for \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\"" Jul 11 00:24:07.979264 containerd[1582]: 2025-07-11 00:24:07.917 [WARNING][5732] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0", GenerateName:"calico-kube-controllers-85764bc598-", Namespace:"calico-system", SelfLink:"", UID:"28fd8700-a2a2-4ca6-a552-c1abce22725b", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85764bc598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad", Pod:"calico-kube-controllers-85764bc598-vpq2r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali42bc9137916", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:07.979264 containerd[1582]: 2025-07-11 00:24:07.918 [INFO][5732] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Jul 11 00:24:07.979264 containerd[1582]: 2025-07-11 00:24:07.918 [INFO][5732] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" iface="eth0" netns="" Jul 11 00:24:07.979264 containerd[1582]: 2025-07-11 00:24:07.918 [INFO][5732] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Jul 11 00:24:07.979264 containerd[1582]: 2025-07-11 00:24:07.918 [INFO][5732] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Jul 11 00:24:07.979264 containerd[1582]: 2025-07-11 00:24:07.950 [INFO][5740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" HandleID="k8s-pod-network.0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Workload="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:24:07.979264 containerd[1582]: 2025-07-11 00:24:07.950 [INFO][5740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:07.979264 containerd[1582]: 2025-07-11 00:24:07.950 [INFO][5740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:07.979264 containerd[1582]: 2025-07-11 00:24:07.969 [WARNING][5740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" HandleID="k8s-pod-network.0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Workload="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:24:07.979264 containerd[1582]: 2025-07-11 00:24:07.969 [INFO][5740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" HandleID="k8s-pod-network.0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Workload="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:24:07.979264 containerd[1582]: 2025-07-11 00:24:07.971 [INFO][5740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:07.979264 containerd[1582]: 2025-07-11 00:24:07.975 [INFO][5732] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Jul 11 00:24:07.979264 containerd[1582]: time="2025-07-11T00:24:07.979231324Z" level=info msg="TearDown network for sandbox \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\" successfully" Jul 11 00:24:07.979264 containerd[1582]: time="2025-07-11T00:24:07.979266181Z" level=info msg="StopPodSandbox for \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\" returns successfully" Jul 11 00:24:07.979901 containerd[1582]: time="2025-07-11T00:24:07.979804464Z" level=info msg="RemovePodSandbox for \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\"" Jul 11 00:24:07.979901 containerd[1582]: time="2025-07-11T00:24:07.979835855Z" level=info msg="Forcibly stopping sandbox \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\"" Jul 11 00:24:08.085163 containerd[1582]: 2025-07-11 00:24:08.027 [WARNING][5758] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0", GenerateName:"calico-kube-controllers-85764bc598-", Namespace:"calico-system", SelfLink:"", UID:"28fd8700-a2a2-4ca6-a552-c1abce22725b", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85764bc598", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"335f6a749c3e427b8518479c3ebcb27ca85cffedb26d8ccf9aec5a50288ad9ad", Pod:"calico-kube-controllers-85764bc598-vpq2r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali42bc9137916", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:08.085163 containerd[1582]: 2025-07-11 00:24:08.028 [INFO][5758] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Jul 11 00:24:08.085163 containerd[1582]: 2025-07-11 00:24:08.028 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" iface="eth0" netns="" Jul 11 00:24:08.085163 containerd[1582]: 2025-07-11 00:24:08.028 [INFO][5758] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Jul 11 00:24:08.085163 containerd[1582]: 2025-07-11 00:24:08.028 [INFO][5758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Jul 11 00:24:08.085163 containerd[1582]: 2025-07-11 00:24:08.067 [INFO][5766] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" HandleID="k8s-pod-network.0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Workload="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:24:08.085163 containerd[1582]: 2025-07-11 00:24:08.068 [INFO][5766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:08.085163 containerd[1582]: 2025-07-11 00:24:08.068 [INFO][5766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:08.085163 containerd[1582]: 2025-07-11 00:24:08.075 [WARNING][5766] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" HandleID="k8s-pod-network.0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Workload="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:24:08.085163 containerd[1582]: 2025-07-11 00:24:08.075 [INFO][5766] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" HandleID="k8s-pod-network.0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Workload="localhost-k8s-calico--kube--controllers--85764bc598--vpq2r-eth0" Jul 11 00:24:08.085163 containerd[1582]: 2025-07-11 00:24:08.076 [INFO][5766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:08.085163 containerd[1582]: 2025-07-11 00:24:08.081 [INFO][5758] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5" Jul 11 00:24:08.086222 containerd[1582]: time="2025-07-11T00:24:08.085294726Z" level=info msg="TearDown network for sandbox \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\" successfully" Jul 11 00:24:08.260571 kubelet[2662]: I0711 00:24:08.260388 2662 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:24:08.265835 containerd[1582]: time="2025-07-11T00:24:08.265759319Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:24:08.265975 containerd[1582]: time="2025-07-11T00:24:08.265868158Z" level=info msg="RemovePodSandbox \"0ddb6491a9d206af59519faf53f0efe114c8c6793123d0d72da0f3f0d826f3c5\" returns successfully" Jul 11 00:24:08.266878 containerd[1582]: time="2025-07-11T00:24:08.266842799Z" level=info msg="StopPodSandbox for \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\"" Jul 11 00:24:08.374453 containerd[1582]: 2025-07-11 00:24:08.320 [WARNING][5785] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" WorkloadEndpoint="localhost-k8s-whisker--645896967f--scc2m-eth0" Jul 11 00:24:08.374453 containerd[1582]: 2025-07-11 00:24:08.320 [INFO][5785] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Jul 11 00:24:08.374453 containerd[1582]: 2025-07-11 00:24:08.320 [INFO][5785] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" iface="eth0" netns="" Jul 11 00:24:08.374453 containerd[1582]: 2025-07-11 00:24:08.320 [INFO][5785] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Jul 11 00:24:08.374453 containerd[1582]: 2025-07-11 00:24:08.320 [INFO][5785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Jul 11 00:24:08.374453 containerd[1582]: 2025-07-11 00:24:08.353 [INFO][5794] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" HandleID="k8s-pod-network.c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Workload="localhost-k8s-whisker--645896967f--scc2m-eth0" Jul 11 00:24:08.374453 containerd[1582]: 2025-07-11 00:24:08.353 [INFO][5794] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:08.374453 containerd[1582]: 2025-07-11 00:24:08.353 [INFO][5794] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:08.374453 containerd[1582]: 2025-07-11 00:24:08.363 [WARNING][5794] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" HandleID="k8s-pod-network.c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Workload="localhost-k8s-whisker--645896967f--scc2m-eth0" Jul 11 00:24:08.374453 containerd[1582]: 2025-07-11 00:24:08.363 [INFO][5794] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" HandleID="k8s-pod-network.c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Workload="localhost-k8s-whisker--645896967f--scc2m-eth0" Jul 11 00:24:08.374453 containerd[1582]: 2025-07-11 00:24:08.365 [INFO][5794] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:08.374453 containerd[1582]: 2025-07-11 00:24:08.370 [INFO][5785] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Jul 11 00:24:08.375086 containerd[1582]: time="2025-07-11T00:24:08.374502859Z" level=info msg="TearDown network for sandbox \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\" successfully" Jul 11 00:24:08.375086 containerd[1582]: time="2025-07-11T00:24:08.374531163Z" level=info msg="StopPodSandbox for \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\" returns successfully" Jul 11 00:24:08.375160 containerd[1582]: time="2025-07-11T00:24:08.375080227Z" level=info msg="RemovePodSandbox for \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\"" Jul 11 00:24:08.375160 containerd[1582]: time="2025-07-11T00:24:08.375122528Z" level=info msg="Forcibly stopping sandbox \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\"" Jul 11 00:24:08.515141 containerd[1582]: 2025-07-11 00:24:08.449 [WARNING][5812] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" WorkloadEndpoint="localhost-k8s-whisker--645896967f--scc2m-eth0" Jul 11 00:24:08.515141 containerd[1582]: 2025-07-11 00:24:08.450 [INFO][5812] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Jul 11 00:24:08.515141 containerd[1582]: 2025-07-11 00:24:08.450 [INFO][5812] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" iface="eth0" netns="" Jul 11 00:24:08.515141 containerd[1582]: 2025-07-11 00:24:08.450 [INFO][5812] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Jul 11 00:24:08.515141 containerd[1582]: 2025-07-11 00:24:08.450 [INFO][5812] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Jul 11 00:24:08.515141 containerd[1582]: 2025-07-11 00:24:08.489 [INFO][5821] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" HandleID="k8s-pod-network.c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Workload="localhost-k8s-whisker--645896967f--scc2m-eth0" Jul 11 00:24:08.515141 containerd[1582]: 2025-07-11 00:24:08.490 [INFO][5821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:08.515141 containerd[1582]: 2025-07-11 00:24:08.490 [INFO][5821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:08.515141 containerd[1582]: 2025-07-11 00:24:08.499 [WARNING][5821] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" HandleID="k8s-pod-network.c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Workload="localhost-k8s-whisker--645896967f--scc2m-eth0" Jul 11 00:24:08.515141 containerd[1582]: 2025-07-11 00:24:08.499 [INFO][5821] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" HandleID="k8s-pod-network.c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Workload="localhost-k8s-whisker--645896967f--scc2m-eth0" Jul 11 00:24:08.515141 containerd[1582]: 2025-07-11 00:24:08.502 [INFO][5821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:08.515141 containerd[1582]: 2025-07-11 00:24:08.508 [INFO][5812] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1" Jul 11 00:24:08.515141 containerd[1582]: time="2025-07-11T00:24:08.515087182Z" level=info msg="TearDown network for sandbox \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\" successfully" Jul 11 00:24:08.524189 containerd[1582]: time="2025-07-11T00:24:08.524080690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:24:08.524353 containerd[1582]: time="2025-07-11T00:24:08.524270814Z" level=info msg="RemovePodSandbox \"c32601573f63d2e1b1436a6fda94e9c12b19bddf8e30e8af59d13c709d5b02c1\" returns successfully" Jul 11 00:24:08.526725 containerd[1582]: time="2025-07-11T00:24:08.526029028Z" level=info msg="StopPodSandbox for \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\"" Jul 11 00:24:08.661730 containerd[1582]: 2025-07-11 00:24:08.611 [WARNING][5842] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--vvl26-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"00f725d3-9859-45cd-81ce-7316e621f780", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f", Pod:"goldmane-58fd7646b9-vvl26", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali502c1da5319", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:08.661730 containerd[1582]: 2025-07-11 00:24:08.611 [INFO][5842] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:24:08.661730 containerd[1582]: 2025-07-11 00:24:08.611 [INFO][5842] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" iface="eth0" netns="" Jul 11 00:24:08.661730 containerd[1582]: 2025-07-11 00:24:08.611 [INFO][5842] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:24:08.661730 containerd[1582]: 2025-07-11 00:24:08.611 [INFO][5842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:24:08.661730 containerd[1582]: 2025-07-11 00:24:08.643 [INFO][5851] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" HandleID="k8s-pod-network.67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Workload="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:08.661730 containerd[1582]: 2025-07-11 00:24:08.644 [INFO][5851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:08.661730 containerd[1582]: 2025-07-11 00:24:08.644 [INFO][5851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:08.661730 containerd[1582]: 2025-07-11 00:24:08.651 [WARNING][5851] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" HandleID="k8s-pod-network.67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Workload="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:08.661730 containerd[1582]: 2025-07-11 00:24:08.651 [INFO][5851] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" HandleID="k8s-pod-network.67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Workload="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:08.661730 containerd[1582]: 2025-07-11 00:24:08.653 [INFO][5851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:08.661730 containerd[1582]: 2025-07-11 00:24:08.657 [INFO][5842] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:24:08.662683 containerd[1582]: time="2025-07-11T00:24:08.662325261Z" level=info msg="TearDown network for sandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\" successfully" Jul 11 00:24:08.662683 containerd[1582]: time="2025-07-11T00:24:08.662364046Z" level=info msg="StopPodSandbox for \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\" returns successfully" Jul 11 00:24:08.663394 containerd[1582]: time="2025-07-11T00:24:08.663349367Z" level=info msg="RemovePodSandbox for \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\"" Jul 11 00:24:08.663394 containerd[1582]: time="2025-07-11T00:24:08.663391818Z" level=info msg="Forcibly stopping sandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\"" Jul 11 00:24:08.677650 containerd[1582]: time="2025-07-11T00:24:08.677571319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:08.680243 containerd[1582]: time="2025-07-11T00:24:08.678662133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 11 00:24:08.680243 containerd[1582]: time="2025-07-11T00:24:08.679639758Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:08.686547 containerd[1582]: time="2025-07-11T00:24:08.686487990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:08.691274 containerd[1582]: time="2025-07-11T00:24:08.691226163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.620868793s" Jul 11 00:24:08.691574 containerd[1582]: time="2025-07-11T00:24:08.691273744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 11 00:24:08.696255 containerd[1582]: time="2025-07-11T00:24:08.696181833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 00:24:08.699649 containerd[1582]: 
time="2025-07-11T00:24:08.699595974Z" level=info msg="CreateContainer within sandbox \"64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 00:24:08.720384 containerd[1582]: time="2025-07-11T00:24:08.720339581Z" level=info msg="CreateContainer within sandbox \"64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"24488cad487aada72667124a304ac349be1bffb656c2d4c8af93259c5939130d\"" Jul 11 00:24:08.721306 containerd[1582]: time="2025-07-11T00:24:08.721286168Z" level=info msg="StartContainer for \"24488cad487aada72667124a304ac349be1bffb656c2d4c8af93259c5939130d\"" Jul 11 00:24:08.797548 containerd[1582]: 2025-07-11 00:24:08.728 [WARNING][5868] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--vvl26-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"00f725d3-9859-45cd-81ce-7316e621f780", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f", Pod:"goldmane-58fd7646b9-vvl26", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali502c1da5319", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:08.797548 containerd[1582]: 2025-07-11 00:24:08.729 [INFO][5868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:24:08.797548 containerd[1582]: 2025-07-11 00:24:08.729 [INFO][5868] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" iface="eth0" netns="" Jul 11 00:24:08.797548 containerd[1582]: 2025-07-11 00:24:08.729 [INFO][5868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:24:08.797548 containerd[1582]: 2025-07-11 00:24:08.729 [INFO][5868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:24:08.797548 containerd[1582]: 2025-07-11 00:24:08.774 [INFO][5883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" HandleID="k8s-pod-network.67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Workload="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:08.797548 containerd[1582]: 2025-07-11 00:24:08.775 [INFO][5883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:08.797548 containerd[1582]: 2025-07-11 00:24:08.776 [INFO][5883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:08.797548 containerd[1582]: 2025-07-11 00:24:08.783 [WARNING][5883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" HandleID="k8s-pod-network.67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Workload="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:08.797548 containerd[1582]: 2025-07-11 00:24:08.783 [INFO][5883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" HandleID="k8s-pod-network.67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Workload="localhost-k8s-goldmane--58fd7646b9--vvl26-eth0" Jul 11 00:24:08.797548 containerd[1582]: 2025-07-11 00:24:08.785 [INFO][5883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:08.797548 containerd[1582]: 2025-07-11 00:24:08.791 [INFO][5868] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1" Jul 11 00:24:08.797548 containerd[1582]: time="2025-07-11T00:24:08.796242440Z" level=info msg="TearDown network for sandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\" successfully" Jul 11 00:24:08.802603 containerd[1582]: time="2025-07-11T00:24:08.802512191Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 11 00:24:08.802932 containerd[1582]: time="2025-07-11T00:24:08.802684181Z" level=info msg="RemovePodSandbox \"67479d809c9ded34467d999565d02dff5973708e6d15232d25f8aa48ba8269a1\" returns successfully" Jul 11 00:24:08.804785 containerd[1582]: time="2025-07-11T00:24:08.803848776Z" level=info msg="StopPodSandbox for \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\"" Jul 11 00:24:08.838051 containerd[1582]: time="2025-07-11T00:24:08.837353041Z" level=info msg="StartContainer for \"24488cad487aada72667124a304ac349be1bffb656c2d4c8af93259c5939130d\" returns successfully" Jul 11 00:24:08.925042 containerd[1582]: 2025-07-11 00:24:08.868 [WARNING][5920] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0", GenerateName:"calico-apiserver-59dcd5c4d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59dcd5c4d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd", Pod:"calico-apiserver-59dcd5c4d5-t5svc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8aad30be039", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:08.925042 containerd[1582]: 2025-07-11 00:24:08.869 [INFO][5920] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:24:08.925042 containerd[1582]: 2025-07-11 00:24:08.869 [INFO][5920] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" iface="eth0" netns="" Jul 11 00:24:08.925042 containerd[1582]: 2025-07-11 00:24:08.869 [INFO][5920] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:24:08.925042 containerd[1582]: 2025-07-11 00:24:08.869 [INFO][5920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:24:08.925042 containerd[1582]: 2025-07-11 00:24:08.905 [INFO][5939] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" HandleID="k8s-pod-network.794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:24:08.925042 containerd[1582]: 2025-07-11 00:24:08.907 [INFO][5939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:08.925042 containerd[1582]: 2025-07-11 00:24:08.907 [INFO][5939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:08.925042 containerd[1582]: 2025-07-11 00:24:08.915 [WARNING][5939] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" HandleID="k8s-pod-network.794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:24:08.925042 containerd[1582]: 2025-07-11 00:24:08.915 [INFO][5939] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" HandleID="k8s-pod-network.794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:24:08.925042 containerd[1582]: 2025-07-11 00:24:08.916 [INFO][5939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:08.925042 containerd[1582]: 2025-07-11 00:24:08.920 [INFO][5920] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:24:08.925594 containerd[1582]: time="2025-07-11T00:24:08.925110077Z" level=info msg="TearDown network for sandbox \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\" successfully" Jul 11 00:24:08.925594 containerd[1582]: time="2025-07-11T00:24:08.925153641Z" level=info msg="StopPodSandbox for \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\" returns successfully" Jul 11 00:24:08.925869 containerd[1582]: time="2025-07-11T00:24:08.925831843Z" level=info msg="RemovePodSandbox for \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\"" Jul 11 00:24:08.925869 containerd[1582]: time="2025-07-11T00:24:08.925865357Z" level=info msg="Forcibly stopping sandbox \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\"" Jul 11 00:24:09.099860 containerd[1582]: 2025-07-11 00:24:09.028 [WARNING][5957] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0", GenerateName:"calico-apiserver-59dcd5c4d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a6481fdf-2fc3-43a0-ad5f-0da6fdc9cbf7", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59dcd5c4d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7ea0056d034c1f241af7d3d03b4931d7c38e860e907a0e71ee35506f5335afd", Pod:"calico-apiserver-59dcd5c4d5-t5svc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8aad30be039", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:09.099860 containerd[1582]: 2025-07-11 00:24:09.028 [INFO][5957] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:24:09.099860 containerd[1582]: 2025-07-11 00:24:09.028 [INFO][5957] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" iface="eth0" netns="" Jul 11 00:24:09.099860 containerd[1582]: 2025-07-11 00:24:09.028 [INFO][5957] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:24:09.099860 containerd[1582]: 2025-07-11 00:24:09.028 [INFO][5957] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:24:09.099860 containerd[1582]: 2025-07-11 00:24:09.067 [INFO][5968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" HandleID="k8s-pod-network.794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:24:09.099860 containerd[1582]: 2025-07-11 00:24:09.067 [INFO][5968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:09.099860 containerd[1582]: 2025-07-11 00:24:09.067 [INFO][5968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:24:09.099860 containerd[1582]: 2025-07-11 00:24:09.083 [WARNING][5968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" HandleID="k8s-pod-network.794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:24:09.099860 containerd[1582]: 2025-07-11 00:24:09.083 [INFO][5968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" HandleID="k8s-pod-network.794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Workload="localhost-k8s-calico--apiserver--59dcd5c4d5--t5svc-eth0" Jul 11 00:24:09.099860 containerd[1582]: 2025-07-11 00:24:09.089 [INFO][5968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:09.099860 containerd[1582]: 2025-07-11 00:24:09.093 [INFO][5957] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f" Jul 11 00:24:09.099860 containerd[1582]: time="2025-07-11T00:24:09.099771489Z" level=info msg="TearDown network for sandbox \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\" successfully" Jul 11 00:24:09.111591 containerd[1582]: time="2025-07-11T00:24:09.111489822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:24:09.111796 containerd[1582]: time="2025-07-11T00:24:09.111663215Z" level=info msg="RemovePodSandbox \"794ff4f135e1f61af8785f51d1b67bd2986e7001865945b73ceee1fca5c3522f\" returns successfully" Jul 11 00:24:09.112828 containerd[1582]: time="2025-07-11T00:24:09.112754097Z" level=info msg="StopPodSandbox for \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\"" Jul 11 00:24:09.211574 containerd[1582]: 2025-07-11 00:24:09.172 [WARNING][5986] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"534af8e4-6518-491a-9219-a3b30c552e4b", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8", Pod:"coredns-7c65d6cfc9-vqw25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89c53d7dc3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:09.211574 containerd[1582]: 2025-07-11 00:24:09.172 [INFO][5986] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Jul 11 00:24:09.211574 containerd[1582]: 2025-07-11 00:24:09.172 [INFO][5986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" iface="eth0" netns="" Jul 11 00:24:09.211574 containerd[1582]: 2025-07-11 00:24:09.172 [INFO][5986] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Jul 11 00:24:09.211574 containerd[1582]: 2025-07-11 00:24:09.172 [INFO][5986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Jul 11 00:24:09.211574 containerd[1582]: 2025-07-11 00:24:09.195 [INFO][5995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" HandleID="k8s-pod-network.ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Workload="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:24:09.211574 containerd[1582]: 2025-07-11 00:24:09.195 [INFO][5995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:09.211574 containerd[1582]: 2025-07-11 00:24:09.195 [INFO][5995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:24:09.211574 containerd[1582]: 2025-07-11 00:24:09.202 [WARNING][5995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" HandleID="k8s-pod-network.ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Workload="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:24:09.211574 containerd[1582]: 2025-07-11 00:24:09.202 [INFO][5995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" HandleID="k8s-pod-network.ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Workload="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:24:09.211574 containerd[1582]: 2025-07-11 00:24:09.203 [INFO][5995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:09.211574 containerd[1582]: 2025-07-11 00:24:09.207 [INFO][5986] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Jul 11 00:24:09.212396 containerd[1582]: time="2025-07-11T00:24:09.211642659Z" level=info msg="TearDown network for sandbox \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\" successfully" Jul 11 00:24:09.212396 containerd[1582]: time="2025-07-11T00:24:09.211682725Z" level=info msg="StopPodSandbox for \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\" returns successfully" Jul 11 00:24:09.212396 containerd[1582]: time="2025-07-11T00:24:09.212385263Z" level=info msg="RemovePodSandbox for \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\"" Jul 11 00:24:09.212498 containerd[1582]: time="2025-07-11T00:24:09.212419679Z" level=info msg="Forcibly stopping sandbox \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\"" Jul 11 00:24:09.310317 containerd[1582]: 2025-07-11 00:24:09.258 [WARNING][6013] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"534af8e4-6518-491a-9219-a3b30c552e4b", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd2249e224bdd57d45c96a582570cdde440c50b47e72337032bdb83b884481c8", Pod:"coredns-7c65d6cfc9-vqw25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89c53d7dc3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:24:09.310317 containerd[1582]: 2025-07-11 00:24:09.258 [INFO][6013] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Jul 11 00:24:09.310317 containerd[1582]: 2025-07-11 00:24:09.258 [INFO][6013] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" iface="eth0" netns="" Jul 11 00:24:09.310317 containerd[1582]: 2025-07-11 00:24:09.258 [INFO][6013] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Jul 11 00:24:09.310317 containerd[1582]: 2025-07-11 00:24:09.258 [INFO][6013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Jul 11 00:24:09.310317 containerd[1582]: 2025-07-11 00:24:09.288 [INFO][6021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" HandleID="k8s-pod-network.ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Workload="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:24:09.310317 containerd[1582]: 2025-07-11 00:24:09.288 [INFO][6021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:24:09.310317 containerd[1582]: 2025-07-11 00:24:09.288 [INFO][6021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:24:09.310317 containerd[1582]: 2025-07-11 00:24:09.295 [WARNING][6021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" HandleID="k8s-pod-network.ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Workload="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:24:09.310317 containerd[1582]: 2025-07-11 00:24:09.295 [INFO][6021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" HandleID="k8s-pod-network.ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Workload="localhost-k8s-coredns--7c65d6cfc9--vqw25-eth0" Jul 11 00:24:09.310317 containerd[1582]: 2025-07-11 00:24:09.300 [INFO][6021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:24:09.310317 containerd[1582]: 2025-07-11 00:24:09.306 [INFO][6013] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896" Jul 11 00:24:09.310934 containerd[1582]: time="2025-07-11T00:24:09.310360964Z" level=info msg="TearDown network for sandbox \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\" successfully" Jul 11 00:24:09.315226 containerd[1582]: time="2025-07-11T00:24:09.315144358Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:24:09.315335 containerd[1582]: time="2025-07-11T00:24:09.315258106Z" level=info msg="RemovePodSandbox \"ef9b03c10ea319af7f5a8fe7f01489620b6fa14573079e74bef9f28872dc6896\" returns successfully" Jul 11 00:24:10.201714 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:43772.service - OpenSSH per-connection server daemon (10.0.0.1:43772). Jul 11 00:24:10.677750 sshd[6029]: Accepted publickey for core from 10.0.0.1 port 43772 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:10.679752 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:10.685375 systemd-logind[1554]: New session 12 of user core. Jul 11 00:24:10.692786 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 00:24:11.207211 sshd[6029]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:11.214499 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:43786.service - OpenSSH per-connection server daemon (10.0.0.1:43786). Jul 11 00:24:11.215332 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:43772.service: Deactivated successfully. Jul 11 00:24:11.217835 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:24:11.220067 systemd-logind[1554]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:24:11.221445 systemd-logind[1554]: Removed session 12. Jul 11 00:24:11.254515 sshd[6045]: Accepted publickey for core from 10.0.0.1 port 43786 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:11.256648 sshd[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:11.261559 systemd-logind[1554]: New session 13 of user core. Jul 11 00:24:11.268581 systemd[1]: Started session-13.scope - Session 13 of User core. 
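The sshd "Accepted publickey" entries above identify the client key only by its SHA256 fingerprint. A sketch of how such a fingerprint is derived from an authorized_keys entry using golang.org/x/crypto/ssh; the key below is a placeholder, not the key from this log:

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder ed25519 public key; substitute a real authorized_keys line.
	authorizedKey := []byte("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPl2fOMSc1ihzvxZ3PUhDPPcNGl5IBpeCeWENrq2bJAJ core@example")

	pub, _, _, _, err := ssh.ParseAuthorizedKey(authorizedKey)
	if err != nil {
		log.Fatal(err)
	}
	// Prints "SHA256:..." in the same format sshd logs on "Accepted publickey".
	fmt.Println(ssh.FingerprintSHA256(pub))
}
```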
Jul 11 00:24:11.742218 sshd[6045]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:11.754787 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:43800.service - OpenSSH per-connection server daemon (10.0.0.1:43800). Jul 11 00:24:11.756089 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:43786.service: Deactivated successfully. Jul 11 00:24:11.767455 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:24:11.769030 systemd-logind[1554]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:24:11.771724 systemd-logind[1554]: Removed session 13. Jul 11 00:24:11.800942 sshd[6061]: Accepted publickey for core from 10.0.0.1 port 43800 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:11.803590 sshd[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:11.809760 systemd-logind[1554]: New session 14 of user core. Jul 11 00:24:11.818638 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 11 00:24:11.920802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2226508195.mount: Deactivated successfully. Jul 11 00:24:11.957007 systemd-journald[1157]: Under memory pressure, flushing caches. Jul 11 00:24:11.956531 systemd-resolved[1466]: Under memory pressure, flushing caches. Jul 11 00:24:11.956612 systemd-resolved[1466]: Flushed all caches. Jul 11 00:24:12.117868 sshd[6061]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:12.126393 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:43800.service: Deactivated successfully. Jul 11 00:24:12.130352 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:24:12.131143 systemd-logind[1554]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:24:12.132669 systemd-logind[1554]: Removed session 14. 
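Sessions 12 through 14 above each emit a "New session N" line and, shortly after, a "Session N logged out" line. A small sketch that pairs those per session number and reports how long each lasted; the two sample lines are trimmed stand-ins for the real journal stream:

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

func main() {
	lines := []string{
		`Jul 11 00:24:11.261559 systemd-logind[1554]: New session 13 of user core.`,
		`Jul 11 00:24:11.769030 systemd-logind[1554]: Session 13 logged out. Waiting for processes to exit.`,
	}

	opened := map[string]time.Time{}
	reOpen := regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*New session (\d+) of user`)
	reClose := regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*Session (\d+) logged out`)
	const layout = "Jan 2 15:04:05.000000" // journal short-precise style, year-less

	for _, l := range lines {
		if m := reOpen.FindStringSubmatch(l); m != nil {
			t, _ := time.Parse(layout, m[1])
			opened[m[2]] = t
		} else if m := reClose.FindStringSubmatch(l); m != nil {
			t, _ := time.Parse(layout, m[1])
			fmt.Printf("session %s lasted %s\n", m[2], t.Sub(opened[m[2]]))
		}
	}
}
```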
Jul 11 00:24:13.227894 containerd[1582]: time="2025-07-11T00:24:13.226631512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:13.228716 containerd[1582]: time="2025-07-11T00:24:13.228104051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 11 00:24:13.230212 containerd[1582]: time="2025-07-11T00:24:13.230128013Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:13.233355 containerd[1582]: time="2025-07-11T00:24:13.233295324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:13.234354 containerd[1582]: time="2025-07-11T00:24:13.234300798Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.537720651s" Jul 11 00:24:13.234354 containerd[1582]: time="2025-07-11T00:24:13.234340684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 11 00:24:13.235581 containerd[1582]: time="2025-07-11T00:24:13.235550068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 00:24:13.238000 containerd[1582]: time="2025-07-11T00:24:13.237933098Z" level=info msg="CreateContainer within sandbox \"9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 00:24:13.254570 containerd[1582]: time="2025-07-11T00:24:13.254506381Z" level=info msg="CreateContainer within sandbox \"9578f41ced7f55848e7009f12f320aff9f66f287f7cf63fd297e598cf982ca8f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"791125d17fc5fc86ba561b222626aba69f0cfa4d34951cffe179374b7ac9c37c\"" Jul 11 00:24:13.256748 containerd[1582]: time="2025-07-11T00:24:13.256714146Z" level=info msg="StartContainer for \"791125d17fc5fc86ba561b222626aba69f0cfa4d34951cffe179374b7ac9c37c\"" Jul 11 00:24:13.526084 containerd[1582]: time="2025-07-11T00:24:13.525883996Z" level=info msg="StartContainer for \"791125d17fc5fc86ba561b222626aba69f0cfa4d34951cffe179374b7ac9c37c\" returns successfully" Jul 11 00:24:13.980558 systemd-resolved[1466]: Under memory pressure, flushing caches. Jul 11 00:24:13.980584 systemd-resolved[1466]: Flushed all caches. Jul 11 00:24:13.982253 systemd-journald[1157]: Under memory pressure, flushing caches. 
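The goldmane entry above reports the pull as a Go duration string ("in 4.537720651s"). A stdlib-only sketch that pulls the image reference and timing back out of such a message:

```go
package main

import (
	"fmt"
	"log"
	"regexp"
	"time"
)

func main() {
	msg := `Pulled image "ghcr.io/flatcar/calico/goldmane:v3.30.2" with image id ` +
		`"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b", ` +
		`repo tag "ghcr.io/flatcar/calico/goldmane:v3.30.2", repo digest ` +
		`"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305", ` +
		`size "66352154" in 4.537720651s`

	re := regexp.MustCompile(`Pulled image "([^"]+)".* in (\S+)$`)
	m := re.FindStringSubmatch(msg)
	if m == nil {
		log.Fatal("unexpected log format")
	}
	d, err := time.ParseDuration(m[2]) // "4.537720651s" parses as a Go duration
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s in %v (%.0f ms)\n", m[1], d, float64(d.Milliseconds()))
}
```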
Jul 11 00:24:14.505398 kubelet[2662]: I0711 00:24:14.504448 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-vvl26" podStartSLOduration=41.268348764 podStartE2EDuration="51.504424253s" podCreationTimestamp="2025-07-11 00:23:23 +0000 UTC" firstStartedPulling="2025-07-11 00:24:02.999279075 +0000 UTC m=+56.681766598" lastFinishedPulling="2025-07-11 00:24:13.235354564 +0000 UTC m=+66.917842087" observedRunningTime="2025-07-11 00:24:14.503802974 +0000 UTC m=+68.186290487" watchObservedRunningTime="2025-07-11 00:24:14.504424253 +0000 UTC m=+68.186911766" Jul 11 00:24:17.135718 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:43806.service - OpenSSH per-connection server daemon (10.0.0.1:43806). Jul 11 00:24:17.223960 sshd[6179]: Accepted publickey for core from 10.0.0.1 port 43806 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:17.226464 sshd[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:17.232406 systemd-logind[1554]: New session 15 of user core. Jul 11 00:24:17.246726 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 11 00:24:17.680577 sshd[6179]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:17.685704 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:43806.service: Deactivated successfully. Jul 11 00:24:17.689173 systemd-logind[1554]: Session 15 logged out. Waiting for processes to exit. Jul 11 00:24:17.689574 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:24:17.691173 systemd-logind[1554]: Removed session 15. Jul 11 00:24:17.891084 containerd[1582]: time="2025-07-11T00:24:17.890969118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:17.893073 containerd[1582]: time="2025-07-11T00:24:17.892633937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 11 00:24:17.895239 containerd[1582]: time="2025-07-11T00:24:17.895151305Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:17.898526 containerd[1582]: time="2025-07-11T00:24:17.898402965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:24:17.899412 containerd[1582]: time="2025-07-11T00:24:17.899236676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 4.66365047s" Jul 11 00:24:17.899412 containerd[1582]: time="2025-07-11T00:24:17.899292132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 11 00:24:17.902739 containerd[1582]: time="2025-07-11T00:24:17.902676085Z" level=info msg="CreateContainer within sandbox 
\"64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 11 00:24:17.934347 containerd[1582]: time="2025-07-11T00:24:17.934104293Z" level=info msg="CreateContainer within sandbox \"64086a15db96731b8e1389d841a56f7a063025007357f60e2b6d18ad8a214ba0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"893c30c66948b52d985fd42386f73dab9849d09bd434df27a9dfa898b3d39005\"" Jul 11 00:24:17.935689 containerd[1582]: time="2025-07-11T00:24:17.935634937Z" level=info msg="StartContainer for \"893c30c66948b52d985fd42386f73dab9849d09bd434df27a9dfa898b3d39005\"" Jul 11 00:24:18.035650 containerd[1582]: time="2025-07-11T00:24:18.035575864Z" level=info msg="StartContainer for \"893c30c66948b52d985fd42386f73dab9849d09bd434df27a9dfa898b3d39005\" returns successfully" Jul 11 00:24:18.360594 kubelet[2662]: I0711 00:24:18.360527 2662 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 11 00:24:18.366953 kubelet[2662]: I0711 00:24:18.366911 2662 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 11 00:24:19.934659 systemd-journald[1157]: Under memory pressure, flushing caches. Jul 11 00:24:19.932437 systemd-resolved[1466]: Under memory pressure, flushing caches. Jul 11 00:24:19.932473 systemd-resolved[1466]: Flushed all caches. Jul 11 00:24:22.695563 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:48394.service - OpenSSH per-connection server daemon (10.0.0.1:48394). Jul 11 00:24:22.767670 sshd[6261]: Accepted publickey for core from 10.0.0.1 port 48394 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:22.771688 sshd[6261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:22.781081 systemd-logind[1554]: New session 16 of user core. Jul 11 00:24:22.788165 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 00:24:23.053821 sshd[6261]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:23.059109 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:48394.service: Deactivated successfully. Jul 11 00:24:23.062335 systemd-logind[1554]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:24:23.062464 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:24:23.063781 systemd-logind[1554]: Removed session 16. Jul 11 00:24:23.473432 kubelet[2662]: E0711 00:24:23.473329 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:24.467029 kubelet[2662]: E0711 00:24:24.466842 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:28.068710 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:48408.service - OpenSSH per-connection server daemon (10.0.0.1:48408). Jul 11 00:24:28.126988 sshd[6278]: Accepted publickey for core from 10.0.0.1 port 48408 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:28.129277 sshd[6278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:28.134223 systemd-logind[1554]: New session 17 of user core. 
Jul 11 00:24:28.145574 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 00:24:28.402784 sshd[6278]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:28.407382 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:48408.service: Deactivated successfully. Jul 11 00:24:28.410764 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:24:28.411395 systemd-logind[1554]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:24:28.412944 systemd-logind[1554]: Removed session 17. Jul 11 00:24:31.467972 kubelet[2662]: E0711 00:24:31.467326 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:33.421571 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:40980.service - OpenSSH per-connection server daemon (10.0.0.1:40980). Jul 11 00:24:33.509314 sshd[6299]: Accepted publickey for core from 10.0.0.1 port 40980 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:33.511546 sshd[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:33.516963 systemd-logind[1554]: New session 18 of user core. Jul 11 00:24:33.523566 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 11 00:24:33.716416 sshd[6299]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:33.723668 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:40980.service: Deactivated successfully. Jul 11 00:24:33.726513 systemd-logind[1554]: Session 18 logged out. Waiting for processes to exit. Jul 11 00:24:33.726641 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 00:24:33.727672 systemd-logind[1554]: Removed session 18. Jul 11 00:24:34.446248 kubelet[2662]: I0711 00:24:34.445029 2662 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:24:34.548242 kubelet[2662]: I0711 00:24:34.547317 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8gdbn" podStartSLOduration=54.960499519 podStartE2EDuration="1m10.547297607s" podCreationTimestamp="2025-07-11 00:23:24 +0000 UTC" firstStartedPulling="2025-07-11 00:24:02.313980723 +0000 UTC m=+55.996468236" lastFinishedPulling="2025-07-11 00:24:17.900778811 +0000 UTC m=+71.583266324" observedRunningTime="2025-07-11 00:24:18.316102981 +0000 UTC m=+71.998590524" watchObservedRunningTime="2025-07-11 00:24:34.547297607 +0000 UTC m=+88.229785120" Jul 11 00:24:38.467157 kubelet[2662]: E0711 00:24:38.467097 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:38.729531 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:40984.service - OpenSSH per-connection server daemon (10.0.0.1:40984). Jul 11 00:24:38.824596 sshd[6377]: Accepted publickey for core from 10.0.0.1 port 40984 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:38.826487 sshd[6377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:38.830942 systemd-logind[1554]: New session 19 of user core. Jul 11 00:24:38.834552 systemd[1]: Started session-19.scope - Session 19 of User core. 
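The recurring kubelet "Nameserver limits exceeded" errors above reflect the resolver's three-nameserver cap: only the first three "nameserver" lines are applied (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and any extras are dropped. A sketch performing the same count on a resolv.conf:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet warns past this

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	if len(servers) > maxNameservers {
		fmt.Printf("limit exceeded: applying %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Println("nameservers:", servers)
	}
}
```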
Jul 11 00:24:38.957085 sshd[6377]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:38.967518 systemd[1]: Started sshd@19-10.0.0.132:22-10.0.0.1:40992.service - OpenSSH per-connection server daemon (10.0.0.1:40992). Jul 11 00:24:38.968113 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:40984.service: Deactivated successfully. Jul 11 00:24:38.972436 systemd-logind[1554]: Session 19 logged out. Waiting for processes to exit. Jul 11 00:24:38.973836 systemd[1]: session-19.scope: Deactivated successfully. Jul 11 00:24:38.974906 systemd-logind[1554]: Removed session 19. Jul 11 00:24:39.002426 sshd[6389]: Accepted publickey for core from 10.0.0.1 port 40992 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:39.004337 sshd[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:39.009032 systemd-logind[1554]: New session 20 of user core. Jul 11 00:24:39.019508 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 11 00:24:39.675241 sshd[6389]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:39.692594 systemd[1]: Started sshd@20-10.0.0.132:22-10.0.0.1:35564.service - OpenSSH per-connection server daemon (10.0.0.1:35564). Jul 11 00:24:39.693381 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:40992.service: Deactivated successfully. Jul 11 00:24:39.695818 systemd[1]: session-20.scope: Deactivated successfully. Jul 11 00:24:39.698030 systemd-logind[1554]: Session 20 logged out. Waiting for processes to exit. Jul 11 00:24:39.699171 systemd-logind[1554]: Removed session 20. Jul 11 00:24:39.728933 sshd[6402]: Accepted publickey for core from 10.0.0.1 port 35564 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:39.730925 sshd[6402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:39.735304 systemd-logind[1554]: New session 21 of user core. Jul 11 00:24:39.741512 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 11 00:24:42.112431 sshd[6402]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:42.122758 systemd[1]: Started sshd@21-10.0.0.132:22-10.0.0.1:35568.service - OpenSSH per-connection server daemon (10.0.0.1:35568). Jul 11 00:24:42.123958 systemd[1]: sshd@20-10.0.0.132:22-10.0.0.1:35564.service: Deactivated successfully. Jul 11 00:24:42.133574 systemd[1]: session-21.scope: Deactivated successfully. Jul 11 00:24:42.134796 systemd-logind[1554]: Session 21 logged out. Waiting for processes to exit. Jul 11 00:24:42.137679 systemd-logind[1554]: Removed session 21. Jul 11 00:24:42.169379 sshd[6444]: Accepted publickey for core from 10.0.0.1 port 35568 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:42.171639 sshd[6444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:42.176472 systemd-logind[1554]: New session 22 of user core. Jul 11 00:24:42.184673 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 11 00:24:42.467713 kubelet[2662]: E0711 00:24:42.467558 2662 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:24:42.923774 sshd[6444]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:42.943492 systemd[1]: Started sshd@22-10.0.0.132:22-10.0.0.1:35572.service - OpenSSH per-connection server daemon (10.0.0.1:35572). 
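Each per-connection sshd unit above encodes its endpoints in the instance name, e.g. "sshd@19-10.0.0.132:22-10.0.0.1:40992.service". A sketch that splits such a name back into connection number, local address, and remote address:

```go
package main

import (
	"fmt"
	"log"
	"regexp"
)

func main() {
	unit := "sshd@19-10.0.0.132:22-10.0.0.1:40992.service"

	re := regexp.MustCompile(`^sshd@(\d+)-([\d.]+:\d+)-([\d.]+:\d+)\.service$`)
	m := re.FindStringSubmatch(unit)
	if m == nil {
		log.Fatalf("unexpected unit name: %s", unit)
	}
	fmt.Printf("conn #%s: local %s <- remote %s\n", m[1], m[2], m[3])
}
```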
Jul 11 00:24:42.945717 systemd[1]: sshd@21-10.0.0.132:22-10.0.0.1:35568.service: Deactivated successfully. Jul 11 00:24:42.951107 systemd[1]: session-22.scope: Deactivated successfully. Jul 11 00:24:42.953861 systemd-logind[1554]: Session 22 logged out. Waiting for processes to exit. Jul 11 00:24:42.955327 systemd-logind[1554]: Removed session 22. Jul 11 00:24:42.992085 sshd[6461]: Accepted publickey for core from 10.0.0.1 port 35572 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:42.994454 sshd[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:43.000910 systemd-logind[1554]: New session 23 of user core. Jul 11 00:24:43.011721 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 11 00:24:43.188500 sshd[6461]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:43.193683 systemd[1]: sshd@22-10.0.0.132:22-10.0.0.1:35572.service: Deactivated successfully. Jul 11 00:24:43.197100 systemd-logind[1554]: Session 23 logged out. Waiting for processes to exit. Jul 11 00:24:43.197240 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 00:24:43.198811 systemd-logind[1554]: Removed session 23. Jul 11 00:24:43.932619 systemd-resolved[1466]: Under memory pressure, flushing caches. Jul 11 00:24:43.932630 systemd-resolved[1466]: Flushed all caches. Jul 11 00:24:43.935236 systemd-journald[1157]: Under memory pressure, flushing caches. Jul 11 00:24:48.213502 systemd[1]: Started sshd@23-10.0.0.132:22-10.0.0.1:35582.service - OpenSSH per-connection server daemon (10.0.0.1:35582). Jul 11 00:24:48.247982 sshd[6482]: Accepted publickey for core from 10.0.0.1 port 35582 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:48.249772 sshd[6482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:48.254708 systemd-logind[1554]: New session 24 of user core. Jul 11 00:24:48.264507 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 11 00:24:48.415469 sshd[6482]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:48.420038 systemd[1]: sshd@23-10.0.0.132:22-10.0.0.1:35582.service: Deactivated successfully. Jul 11 00:24:48.423377 systemd-logind[1554]: Session 24 logged out. Waiting for processes to exit. Jul 11 00:24:48.423423 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 00:24:48.424828 systemd-logind[1554]: Removed session 24. Jul 11 00:24:53.426575 systemd[1]: Started sshd@24-10.0.0.132:22-10.0.0.1:48300.service - OpenSSH per-connection server daemon (10.0.0.1:48300). Jul 11 00:24:53.466450 sshd[6522]: Accepted publickey for core from 10.0.0.1 port 48300 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:53.469890 sshd[6522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:53.474995 systemd-logind[1554]: New session 25 of user core. Jul 11 00:24:53.482593 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 11 00:24:53.727085 sshd[6522]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:53.731753 systemd[1]: sshd@24-10.0.0.132:22-10.0.0.1:48300.service: Deactivated successfully. Jul 11 00:24:53.736231 systemd[1]: session-25.scope: Deactivated successfully. Jul 11 00:24:53.737446 systemd-logind[1554]: Session 25 logged out. Waiting for processes to exit. Jul 11 00:24:53.738595 systemd-logind[1554]: Removed session 25. 
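The paired "Under memory pressure, flushing caches" messages from systemd-resolved and systemd-journald above are reactions to a kernel memory-pressure (PSI) signal. A sketch that reads the same indicator directly from /proc/pressure/memory, assuming a Linux kernel built with PSI support:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/proc/pressure/memory")
	if err != nil {
		log.Fatal(err) // PSI not available on this kernel
	}
	// Typical contents:
	//   some avg10=0.00 avg60=0.00 avg300=0.00 total=12345
	//   full avg10=0.00 avg60=0.00 avg300=0.00 total=6789
	fmt.Print(string(data))
}
```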
Jul 11 00:24:58.737476 systemd[1]: Started sshd@25-10.0.0.132:22-10.0.0.1:48312.service - OpenSSH per-connection server daemon (10.0.0.1:48312). Jul 11 00:24:58.786929 sshd[6537]: Accepted publickey for core from 10.0.0.1 port 48312 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:24:58.789964 sshd[6537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:24:58.795168 systemd-logind[1554]: New session 26 of user core. Jul 11 00:24:58.805637 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 11 00:24:59.130976 sshd[6537]: pam_unix(sshd:session): session closed for user core Jul 11 00:24:59.135535 systemd[1]: sshd@25-10.0.0.132:22-10.0.0.1:48312.service: Deactivated successfully. Jul 11 00:24:59.138245 systemd-logind[1554]: Session 26 logged out. Waiting for processes to exit. Jul 11 00:24:59.138265 systemd[1]: session-26.scope: Deactivated successfully. Jul 11 00:24:59.139578 systemd-logind[1554]: Removed session 26. Jul 11 00:25:04.140520 systemd[1]: Started sshd@26-10.0.0.132:22-10.0.0.1:48834.service - OpenSSH per-connection server daemon (10.0.0.1:48834). Jul 11 00:25:04.191763 sshd[6553]: Accepted publickey for core from 10.0.0.1 port 48834 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:25:04.194255 sshd[6553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:04.198850 systemd-logind[1554]: New session 27 of user core. Jul 11 00:25:04.204607 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 11 00:25:04.530553 sshd[6553]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:04.536662 systemd[1]: sshd@26-10.0.0.132:22-10.0.0.1:48834.service: Deactivated successfully. Jul 11 00:25:04.540020 systemd-logind[1554]: Session 27 logged out. Waiting for processes to exit. Jul 11 00:25:04.540144 systemd[1]: session-27.scope: Deactivated successfully. Jul 11 00:25:04.542632 systemd-logind[1554]: Removed session 27.
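After the RemovePodSandbox sequences earlier in this log, the torn-down sandbox IDs should no longer appear in containerd's container list. A hedged sketch using the containerd Go client (module path github.com/containerd/containerd as of the 1.x series) against the default socket and the "k8s.io" namespace that CRI uses:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID()) // removed sandboxes such as 794ff4f1... should be absent
	}
}
```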