Jul 14 22:14:29.926794 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 20:23:49 -00 2025
Jul 14 22:14:29.926843 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:14:29.926856 kernel: BIOS-provided physical RAM map:
Jul 14 22:14:29.926862 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 14 22:14:29.926868 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 14 22:14:29.926874 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 14 22:14:29.926882 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 14 22:14:29.926888 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 14 22:14:29.926894 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 14 22:14:29.926903 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 14 22:14:29.926909 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 14 22:14:29.926915 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 14 22:14:29.926921 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 14 22:14:29.926928 kernel: NX (Execute Disable) protection: active
Jul 14 22:14:29.926935 kernel: APIC: Static calls initialized
Jul 14 22:14:29.926944 kernel: SMBIOS 2.8 present.
Jul 14 22:14:29.926951 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jul 14 22:14:29.926958 kernel: Hypervisor detected: KVM Jul 14 22:14:29.926964 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 14 22:14:29.926971 kernel: kvm-clock: using sched offset of 2291916289 cycles Jul 14 22:14:29.926978 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 14 22:14:29.926985 kernel: tsc: Detected 2794.748 MHz processor Jul 14 22:14:29.926992 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 14 22:14:29.926999 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 14 22:14:29.927006 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jul 14 22:14:29.927016 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 14 22:14:29.927023 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 14 22:14:29.927029 kernel: Using GB pages for direct mapping Jul 14 22:14:29.927036 kernel: ACPI: Early table checksum verification disabled Jul 14 22:14:29.927043 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jul 14 22:14:29.927050 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:14:29.927057 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:14:29.927064 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:14:29.927073 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jul 14 22:14:29.927080 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:14:29.927086 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:14:29.927093 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:14:29.927100 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:14:29.927107 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jul 14 22:14:29.927114 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jul 14 22:14:29.927124 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jul 14 22:14:29.927134 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jul 14 22:14:29.927141 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jul 14 22:14:29.927148 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jul 14 22:14:29.927155 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jul 14 22:14:29.927162 kernel: No NUMA configuration found Jul 14 22:14:29.927169 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jul 14 22:14:29.927176 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jul 14 22:14:29.927185 kernel: Zone ranges: Jul 14 22:14:29.927193 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 14 22:14:29.927200 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jul 14 22:14:29.927207 kernel: Normal empty Jul 14 22:14:29.927214 kernel: Movable zone start for each node Jul 14 22:14:29.927221 kernel: Early memory node ranges Jul 14 22:14:29.927228 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 14 22:14:29.927235 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jul 14 22:14:29.927242 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jul 14 22:14:29.927251 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 14 22:14:29.927258 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 14 22:14:29.927266 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jul 14 22:14:29.927273 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 14 22:14:29.927280 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 14 22:14:29.927287 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 14 22:14:29.927294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 14 22:14:29.927301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 14 22:14:29.927308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 14 22:14:29.927318 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 14 22:14:29.927325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 14 22:14:29.927332 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 14 22:14:29.927339 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 14 22:14:29.927346 kernel: TSC deadline timer available Jul 14 22:14:29.927353 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 14 22:14:29.927360 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 14 22:14:29.927367 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 14 22:14:29.927374 kernel: kvm-guest: setup PV sched yield Jul 14 22:14:29.927381 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 14 22:14:29.927391 kernel: Booting paravirtualized kernel on KVM Jul 14 22:14:29.927398 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 14 22:14:29.927405 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 14 22:14:29.927413 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 Jul 14 22:14:29.927420 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 Jul 14 22:14:29.927427 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 14 22:14:29.927434 kernel: kvm-guest: PV spinlocks enabled Jul 14 22:14:29.927441 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 14 22:14:29.927449 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497 Jul 14 22:14:29.927460 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 14 22:14:29.927467 kernel: random: crng init done Jul 14 22:14:29.927474 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 14 22:14:29.927481 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 14 22:14:29.927488 kernel: Fallback order for Node 0: 0 Jul 14 22:14:29.927495 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jul 14 22:14:29.927502 kernel: Policy zone: DMA32 Jul 14 22:14:29.927509 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 14 22:14:29.927519 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 136900K reserved, 0K cma-reserved) Jul 14 22:14:29.927526 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 14 22:14:29.927533 kernel: ftrace: allocating 37970 entries in 149 pages Jul 14 22:14:29.927540 kernel: ftrace: allocated 149 pages with 4 groups Jul 14 22:14:29.927547 kernel: Dynamic Preempt: voluntary Jul 14 22:14:29.927554 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 14 22:14:29.927562 kernel: rcu: RCU event tracing is enabled. Jul 14 22:14:29.927570 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 14 22:14:29.927578 kernel: Trampoline variant of Tasks RCU enabled. Jul 14 22:14:29.927588 kernel: Rude variant of Tasks RCU enabled. Jul 14 22:14:29.927595 kernel: Tracing variant of Tasks RCU enabled. Jul 14 22:14:29.927602 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 14 22:14:29.927609 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 14 22:14:29.927616 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 14 22:14:29.927623 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 14 22:14:29.927630 kernel: Console: colour VGA+ 80x25 Jul 14 22:14:29.927637 kernel: printk: console [ttyS0] enabled Jul 14 22:14:29.927644 kernel: ACPI: Core revision 20230628 Jul 14 22:14:29.927662 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 14 22:14:29.927669 kernel: APIC: Switch to symmetric I/O mode setup Jul 14 22:14:29.927676 kernel: x2apic enabled Jul 14 22:14:29.927684 kernel: APIC: Switched APIC routing to: physical x2apic Jul 14 22:14:29.927691 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 14 22:14:29.927698 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 14 22:14:29.927708 kernel: kvm-guest: setup PV IPIs Jul 14 22:14:29.927730 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 14 22:14:29.927740 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 14 22:14:29.927748 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Jul 14 22:14:29.927755 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 14 22:14:29.927763 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 14 22:14:29.927773 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 14 22:14:29.927780 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 14 22:14:29.927787 kernel: Spectre V2 : Mitigation: Retpolines Jul 14 22:14:29.927795 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 14 22:14:29.927805 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 14 22:14:29.927813 kernel: RETBleed: Mitigation: untrained return thunk Jul 14 22:14:29.927838 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 14 22:14:29.927846 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 14 22:14:29.927854 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
Jul 14 22:14:29.927862 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 14 22:14:29.927869 kernel: x86/bugs: return thunk changed Jul 14 22:14:29.927877 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 14 22:14:29.927884 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 14 22:14:29.927896 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 14 22:14:29.927906 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 14 22:14:29.927916 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 14 22:14:29.927925 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 14 22:14:29.927935 kernel: Freeing SMP alternatives memory: 32K Jul 14 22:14:29.927945 kernel: pid_max: default: 32768 minimum: 301 Jul 14 22:14:29.927955 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 14 22:14:29.927965 kernel: landlock: Up and running. Jul 14 22:14:29.927975 kernel: SELinux: Initializing. Jul 14 22:14:29.927988 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 14 22:14:29.927996 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 14 22:14:29.928004 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 14 22:14:29.928011 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 14 22:14:29.928019 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 14 22:14:29.928027 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 14 22:14:29.928034 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 14 22:14:29.928042 kernel: ... version: 0 Jul 14 22:14:29.928051 kernel: ... bit width: 48 Jul 14 22:14:29.928059 kernel: ... generic registers: 6 Jul 14 22:14:29.928068 kernel: ... value mask: 0000ffffffffffff Jul 14 22:14:29.928078 kernel: ... max period: 00007fffffffffff Jul 14 22:14:29.928087 kernel: ... fixed-purpose events: 0 Jul 14 22:14:29.928097 kernel: ... event mask: 000000000000003f Jul 14 22:14:29.928106 kernel: signal: max sigframe size: 1776 Jul 14 22:14:29.928116 kernel: rcu: Hierarchical SRCU implementation. Jul 14 22:14:29.928126 kernel: rcu: Max phase no-delay instances is 400. Jul 14 22:14:29.928136 kernel: smp: Bringing up secondary CPUs ... Jul 14 22:14:29.928149 kernel: smpboot: x86: Booting SMP configuration: Jul 14 22:14:29.928159 kernel: .... 
node #0, CPUs: #1 #2 #3 Jul 14 22:14:29.928169 kernel: smp: Brought up 1 node, 4 CPUs Jul 14 22:14:29.928179 kernel: smpboot: Max logical packages: 1 Jul 14 22:14:29.928186 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 14 22:14:29.928194 kernel: devtmpfs: initialized Jul 14 22:14:29.928201 kernel: x86/mm: Memory block size: 128MB Jul 14 22:14:29.928209 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 14 22:14:29.928216 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 14 22:14:29.928226 kernel: pinctrl core: initialized pinctrl subsystem Jul 14 22:14:29.928234 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 14 22:14:29.928242 kernel: audit: initializing netlink subsys (disabled) Jul 14 22:14:29.928249 kernel: audit: type=2000 audit(1752531269.029:1): state=initialized audit_enabled=0 res=1 Jul 14 22:14:29.928256 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 14 22:14:29.928264 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 14 22:14:29.928271 kernel: cpuidle: using governor menu Jul 14 22:14:29.928279 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 14 22:14:29.928286 kernel: dca service started, version 1.12.1 Jul 14 22:14:29.928296 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 14 22:14:29.928304 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jul 14 22:14:29.928311 kernel: PCI: Using configuration type 1 for base access Jul 14 22:14:29.928318 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 14 22:14:29.928326 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 14 22:14:29.928333 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 14 22:14:29.928341 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 14 22:14:29.928348 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 14 22:14:29.928355 kernel: ACPI: Added _OSI(Module Device) Jul 14 22:14:29.928365 kernel: ACPI: Added _OSI(Processor Device) Jul 14 22:14:29.928372 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 14 22:14:29.928380 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 14 22:14:29.928387 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 14 22:14:29.928394 kernel: ACPI: Interpreter enabled Jul 14 22:14:29.928402 kernel: ACPI: PM: (supports S0 S3 S5) Jul 14 22:14:29.928409 kernel: ACPI: Using IOAPIC for interrupt routing Jul 14 22:14:29.928416 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 14 22:14:29.928424 kernel: PCI: Using E820 reservations for host bridge windows Jul 14 22:14:29.928433 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 14 22:14:29.928441 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 14 22:14:29.928635 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 14 22:14:29.928791 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 14 22:14:29.928949 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 14 22:14:29.928961 kernel: PCI host bridge to bus 0000:00 Jul 14 22:14:29.929084 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 14 22:14:29.929229 kernel: pci_bus 0000:00: root bus resource 
[io 0x0d00-0xffff window] Jul 14 22:14:29.929346 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 14 22:14:29.929470 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jul 14 22:14:29.929582 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 14 22:14:29.929728 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jul 14 22:14:29.929869 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 14 22:14:29.930029 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 14 22:14:29.930201 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jul 14 22:14:29.930356 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jul 14 22:14:29.930504 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jul 14 22:14:29.930768 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jul 14 22:14:29.930962 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 14 22:14:29.931143 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jul 14 22:14:29.931306 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jul 14 22:14:29.931471 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jul 14 22:14:29.931632 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jul 14 22:14:29.931837 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jul 14 22:14:29.932007 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jul 14 22:14:29.932244 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jul 14 22:14:29.932444 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jul 14 22:14:29.932648 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 14 22:14:29.932836 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jul 14 22:14:29.933005 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jul 14 22:14:29.933170 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jul 14 22:14:29.933336 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jul 14 22:14:29.933512 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 14 22:14:29.933712 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 14 22:14:29.933951 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 14 22:14:29.934119 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jul 14 22:14:29.934284 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jul 14 22:14:29.934456 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 14 22:14:29.934619 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jul 14 22:14:29.934636 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 14 22:14:29.934669 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 14 22:14:29.934697 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 14 22:14:29.934706 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 14 22:14:29.934713 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 14 22:14:29.934721 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 14 22:14:29.934729 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 14 22:14:29.934737 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 14 22:14:29.934744 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 
14 22:14:29.934752 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 14 22:14:29.934759 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 14 22:14:29.934770 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 14 22:14:29.934781 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 14 22:14:29.934789 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 14 22:14:29.934796 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 14 22:14:29.934804 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 14 22:14:29.934811 kernel: iommu: Default domain type: Translated Jul 14 22:14:29.934831 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 14 22:14:29.934838 kernel: PCI: Using ACPI for IRQ routing Jul 14 22:14:29.934846 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 14 22:14:29.934857 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 14 22:14:29.934865 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jul 14 22:14:29.935000 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 14 22:14:29.935155 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 14 22:14:29.935308 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 14 22:14:29.935324 kernel: vgaarb: loaded Jul 14 22:14:29.935339 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 14 22:14:29.935354 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 14 22:14:29.935372 kernel: clocksource: Switched to clocksource kvm-clock Jul 14 22:14:29.935383 kernel: VFS: Disk quotas dquot_6.6.0 Jul 14 22:14:29.935394 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 14 22:14:29.935404 kernel: pnp: PnP ACPI init Jul 14 22:14:29.935549 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 14 22:14:29.935562 kernel: pnp: PnP ACPI: found 6 devices Jul 14 22:14:29.935569 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 14 22:14:29.935577 kernel: NET: Registered PF_INET protocol family Jul 14 22:14:29.935592 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 14 22:14:29.935602 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 14 22:14:29.935613 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 14 22:14:29.935623 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 14 22:14:29.935634 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 14 22:14:29.935644 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 14 22:14:29.935666 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 14 22:14:29.935676 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 14 22:14:29.935684 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 14 22:14:29.935694 kernel: NET: Registered PF_XDP protocol family Jul 14 22:14:29.935867 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 14 22:14:29.935991 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 14 22:14:29.936102 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 14 22:14:29.936211 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jul 14 22:14:29.936321 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] 
Jul 14 22:14:29.936430 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jul 14 22:14:29.936440 kernel: PCI: CLS 0 bytes, default 64 Jul 14 22:14:29.936452 kernel: Initialise system trusted keyrings Jul 14 22:14:29.936460 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 14 22:14:29.936468 kernel: Key type asymmetric registered Jul 14 22:14:29.936475 kernel: Asymmetric key parser 'x509' registered Jul 14 22:14:29.936483 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 14 22:14:29.936491 kernel: io scheduler mq-deadline registered Jul 14 22:14:29.936498 kernel: io scheduler kyber registered Jul 14 22:14:29.936506 kernel: io scheduler bfq registered Jul 14 22:14:29.936513 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 14 22:14:29.936524 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 14 22:14:29.936532 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 14 22:14:29.936539 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 14 22:14:29.936547 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 14 22:14:29.936554 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 14 22:14:29.936562 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 14 22:14:29.936570 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 14 22:14:29.936577 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 14 22:14:29.936729 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 14 22:14:29.936746 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 14 22:14:29.936878 kernel: rtc_cmos 00:04: registered as rtc0 Jul 14 22:14:29.936994 kernel: rtc_cmos 00:04: setting system clock to 2025-07-14T22:14:29 UTC (1752531269) Jul 14 22:14:29.937108 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 14 22:14:29.937118 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 14 22:14:29.937126 kernel: NET: Registered PF_INET6 protocol family Jul 14 22:14:29.937134 kernel: Segment Routing with IPv6 Jul 14 22:14:29.937141 kernel: In-situ OAM (IOAM) with IPv6 Jul 14 22:14:29.937153 kernel: NET: Registered PF_PACKET protocol family Jul 14 22:14:29.937160 kernel: Key type dns_resolver registered Jul 14 22:14:29.937168 kernel: IPI shorthand broadcast: enabled Jul 14 22:14:29.937176 kernel: sched_clock: Marking stable (603003182, 102935738)->(756260148, -50321228) Jul 14 22:14:29.937183 kernel: registered taskstats version 1 Jul 14 22:14:29.937191 kernel: Loading compiled-in X.509 certificates Jul 14 22:14:29.937199 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: ff10e110ca3923b510cf0133f4e9f48dd636b870' Jul 14 22:14:29.937207 kernel: Key type .fscrypt registered Jul 14 22:14:29.937214 kernel: Key type fscrypt-provisioning registered Jul 14 22:14:29.937224 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 14 22:14:29.937232 kernel: ima: Allocated hash algorithm: sha1 Jul 14 22:14:29.937240 kernel: ima: No architecture policies found Jul 14 22:14:29.937247 kernel: clk: Disabling unused clocks Jul 14 22:14:29.937255 kernel: Freeing unused kernel image (initmem) memory: 42876K Jul 14 22:14:29.937262 kernel: Write protecting the kernel read-only data: 36864k Jul 14 22:14:29.937270 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 14 22:14:29.937277 kernel: Run /init as init process Jul 14 22:14:29.937285 kernel: with arguments: Jul 14 22:14:29.937295 kernel: /init Jul 14 22:14:29.937302 kernel: with environment: Jul 14 22:14:29.937310 kernel: HOME=/ Jul 14 22:14:29.937317 kernel: TERM=linux Jul 14 22:14:29.937325 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 14 22:14:29.937335 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 22:14:29.937344 systemd[1]: Detected virtualization kvm. Jul 14 22:14:29.937353 systemd[1]: Detected architecture x86-64. Jul 14 22:14:29.937363 systemd[1]: Running in initrd. Jul 14 22:14:29.937371 systemd[1]: No hostname configured, using default hostname. Jul 14 22:14:29.937379 systemd[1]: Hostname set to . Jul 14 22:14:29.937387 systemd[1]: Initializing machine ID from VM UUID. Jul 14 22:14:29.937395 systemd[1]: Queued start job for default target initrd.target. Jul 14 22:14:29.937404 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:14:29.937412 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:14:29.937420 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 14 22:14:29.937432 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 22:14:29.937452 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 14 22:14:29.937463 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 14 22:14:29.937473 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 14 22:14:29.937484 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 14 22:14:29.937493 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:14:29.937501 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:14:29.937509 systemd[1]: Reached target paths.target - Path Units. Jul 14 22:14:29.937517 systemd[1]: Reached target slices.target - Slice Units. Jul 14 22:14:29.937526 systemd[1]: Reached target swap.target - Swaps. Jul 14 22:14:29.937534 systemd[1]: Reached target timers.target - Timer Units. Jul 14 22:14:29.937545 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 22:14:29.937553 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 22:14:29.937564 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 14 22:14:29.937572 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jul 14 22:14:29.937581 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:14:29.937593 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 22:14:29.937605 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 22:14:29.937616 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 22:14:29.937628 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 14 22:14:29.937639 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 22:14:29.937652 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 14 22:14:29.937669 systemd[1]: Starting systemd-fsck-usr.service... Jul 14 22:14:29.937678 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 22:14:29.937686 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 22:14:29.937694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:14:29.937703 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 14 22:14:29.937711 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:14:29.937720 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 22:14:29.937752 systemd-journald[193]: Collecting audit messages is disabled. Jul 14 22:14:29.937774 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 22:14:29.937783 systemd-journald[193]: Journal started Jul 14 22:14:29.937805 systemd-journald[193]: Runtime Journal (/run/log/journal/7f053fb396aa4f9e91b5be8bb3a737cc) is 6.0M, max 48.4M, 42.3M free. Jul 14 22:14:29.932558 systemd-modules-load[194]: Inserted module 'overlay' Jul 14 22:14:29.974445 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 22:14:29.974483 kernel: Bridge firewalling registered Jul 14 22:14:29.964840 systemd-modules-load[194]: Inserted module 'br_netfilter' Jul 14 22:14:29.976579 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 22:14:29.978583 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 22:14:29.981784 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:14:29.984890 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 22:14:30.004226 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 22:14:30.005224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 22:14:30.006484 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 22:14:30.012319 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 22:14:30.021864 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:14:30.024945 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:14:30.026520 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 22:14:30.029169 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:14:30.049150 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 14 22:14:30.052046 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 22:14:30.062809 dracut-cmdline[227]: dracut-dracut-053 Jul 14 22:14:30.068797 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497 Jul 14 22:14:30.108983 systemd-resolved[229]: Positive Trust Anchors: Jul 14 22:14:30.109004 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:14:30.109044 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 22:14:30.112305 systemd-resolved[229]: Defaulting to hostname 'linux'. Jul 14 22:14:30.113548 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 22:14:30.120006 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:14:30.146872 kernel: SCSI subsystem initialized Jul 14 22:14:30.156853 kernel: Loading iSCSI transport class v2.0-870. Jul 14 22:14:30.167877 kernel: iscsi: registered transport (tcp) Jul 14 22:14:30.190868 kernel: iscsi: registered transport (qla4xxx) Jul 14 22:14:30.190938 kernel: QLogic iSCSI HBA Driver Jul 14 22:14:30.241608 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 14 22:14:30.255039 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 14 22:14:30.281867 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 14 22:14:30.281927 kernel: device-mapper: uevent: version 1.0.3 Jul 14 22:14:30.281942 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 14 22:14:30.328869 kernel: raid6: avx2x4 gen() 26386 MB/s Jul 14 22:14:30.345861 kernel: raid6: avx2x2 gen() 25573 MB/s Jul 14 22:14:30.362955 kernel: raid6: avx2x1 gen() 22417 MB/s Jul 14 22:14:30.363012 kernel: raid6: using algorithm avx2x4 gen() 26386 MB/s Jul 14 22:14:30.381023 kernel: raid6: .... xor() 7609 MB/s, rmw enabled Jul 14 22:14:30.381092 kernel: raid6: using avx2x2 recovery algorithm Jul 14 22:14:30.402852 kernel: xor: automatically using best checksumming function avx Jul 14 22:14:30.571850 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 14 22:14:30.584006 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 14 22:14:30.597021 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:14:30.610933 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jul 14 22:14:30.616047 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 14 22:14:30.624013 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 14 22:14:30.637664 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Jul 14 22:14:30.669912 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 22:14:30.684008 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 22:14:30.747067 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:14:30.758020 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 14 22:14:30.769839 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 14 22:14:30.774036 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 22:14:30.777168 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:14:30.779375 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 14 22:14:30.781164 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 22:14:30.785022 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 14 22:14:30.790923 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 14 22:14:30.790954 kernel: GPT:9289727 != 19775487 Jul 14 22:14:30.790965 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 14 22:14:30.790974 kernel: GPT:9289727 != 19775487 Jul 14 22:14:30.790990 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 14 22:14:30.790999 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:14:30.798809 kernel: cryptd: max_cpu_qlen set to 1000 Jul 14 22:14:30.795176 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 14 22:14:30.807170 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 14 22:14:30.813850 kernel: libata version 3.00 loaded. Jul 14 22:14:30.820293 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 22:14:30.824647 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 22:14:30.828443 kernel: AVX2 version of gcm_enc/dec engaged. Jul 14 22:14:30.828473 kernel: ahci 0000:00:1f.2: version 3.0 Jul 14 22:14:30.828682 kernel: AES CTR mode by8 optimization enabled Jul 14 22:14:30.828697 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 14 22:14:30.831940 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 22:14:30.837460 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 14 22:14:30.837695 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 14 22:14:30.835771 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 22:14:30.835932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:14:30.840061 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 14 22:14:30.850941 kernel: BTRFS: device fsid d23b6972-ad36-4741-bf36-4d440b923127 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (469) Jul 14 22:14:30.850964 kernel: scsi host0: ahci Jul 14 22:14:30.851152 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (477) Jul 14 22:14:30.852835 kernel: scsi host1: ahci Jul 14 22:14:30.853847 kernel: scsi host2: ahci Jul 14 22:14:30.857114 kernel: scsi host3: ahci Jul 14 22:14:30.857985 kernel: scsi host4: ahci Jul 14 22:14:30.858203 kernel: scsi host5: ahci Jul 14 22:14:30.859219 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:14:30.867535 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jul 14 22:14:30.867558 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jul 14 22:14:30.867573 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jul 14 22:14:30.867586 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jul 14 22:14:30.867600 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jul 14 22:14:30.867619 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jul 14 22:14:30.888598 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 14 22:14:30.911927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:14:30.919233 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 14 22:14:30.927379 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 14 22:14:30.930071 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 14 22:14:30.937787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 14 22:14:30.950968 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 14 22:14:30.954422 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 22:14:30.966881 disk-uuid[555]: Primary Header is updated. Jul 14 22:14:30.966881 disk-uuid[555]: Secondary Entries is updated. Jul 14 22:14:30.966881 disk-uuid[555]: Secondary Header is updated. Jul 14 22:14:30.971859 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:14:30.975875 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:14:30.977928 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 14 22:14:31.174010 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 14 22:14:31.174078 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 14 22:14:31.174092 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 14 22:14:31.175861 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 14 22:14:31.175944 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 14 22:14:31.176852 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 14 22:14:31.177843 kernel: ata3.00: applying bridge limits Jul 14 22:14:31.177859 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 14 22:14:31.178856 kernel: ata3.00: configured for UDMA/100 Jul 14 22:14:31.179869 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 14 22:14:31.243863 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 14 22:14:31.244214 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 14 22:14:31.260851 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 14 22:14:31.993855 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:14:31.994461 disk-uuid[558]: The operation has completed successfully. Jul 14 22:14:32.025765 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 14 22:14:32.025911 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 14 22:14:32.048009 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 14 22:14:32.054066 sh[593]: Success Jul 14 22:14:32.066855 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 14 22:14:32.102657 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 14 22:14:32.117046 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 14 22:14:32.121219 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 14 22:14:32.132140 kernel: BTRFS info (device dm-0): first mount of filesystem d23b6972-ad36-4741-bf36-4d440b923127 Jul 14 22:14:32.132183 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:14:32.132194 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 14 22:14:32.133282 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 14 22:14:32.134851 kernel: BTRFS info (device dm-0): using free space tree Jul 14 22:14:32.139359 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 14 22:14:32.140127 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 14 22:14:32.151967 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 14 22:14:32.152727 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 14 22:14:32.163129 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:14:32.163164 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:14:32.163175 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:14:32.165841 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 22:14:32.174936 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 14 22:14:32.176677 kernel: BTRFS info (device vda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:14:32.269594 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 14 22:14:32.291980 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 22:14:32.314392 systemd-networkd[771]: lo: Link UP Jul 14 22:14:32.314403 systemd-networkd[771]: lo: Gained carrier Jul 14 22:14:32.316013 systemd-networkd[771]: Enumeration completed Jul 14 22:14:32.316109 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 22:14:32.316445 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 22:14:32.316447 systemd[1]: Reached target network.target - Network. Jul 14 22:14:32.316450 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 22:14:32.317411 systemd-networkd[771]: eth0: Link UP Jul 14 22:14:32.317415 systemd-networkd[771]: eth0: Gained carrier Jul 14 22:14:32.317422 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 22:14:32.347888 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 22:14:32.349703 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 14 22:14:32.356019 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 14 22:14:32.407024 ignition[776]: Ignition 2.19.0 Jul 14 22:14:32.407039 ignition[776]: Stage: fetch-offline Jul 14 22:14:32.407097 ignition[776]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:14:32.407111 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:14:32.407255 ignition[776]: parsed url from cmdline: "" Jul 14 22:14:32.407260 ignition[776]: no config URL provided Jul 14 22:14:32.407268 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 22:14:32.407281 ignition[776]: no config at "/usr/lib/ignition/user.ign" Jul 14 22:14:32.407317 ignition[776]: op(1): [started] loading QEMU firmware config module Jul 14 22:14:32.407324 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 14 22:14:32.415653 ignition[776]: op(1): [finished] loading QEMU firmware config module Jul 14 22:14:32.454930 ignition[776]: parsing config with SHA512: e24b8e4af092588f87e443b19a2ce664d8af3938aa677832b0a3790fd2c8ef5d02c3f71a97bd850003428cd0d4717b38792dcf243d3bb0f76d68d437ebd55bcd Jul 14 22:14:32.459995 unknown[776]: fetched base config from "system" Jul 14 22:14:32.460010 unknown[776]: fetched user config from "qemu" Jul 14 22:14:32.461395 ignition[776]: fetch-offline: fetch-offline passed Jul 14 22:14:32.461463 ignition[776]: Ignition finished successfully Jul 14 22:14:32.465678 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 22:14:32.465951 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 22:14:32.477011 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 14 22:14:32.493145 ignition[785]: Ignition 2.19.0 Jul 14 22:14:32.493155 ignition[785]: Stage: kargs Jul 14 22:14:32.493344 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:14:32.493357 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:14:32.494393 ignition[785]: kargs: kargs passed Jul 14 22:14:32.494435 ignition[785]: Ignition finished successfully Jul 14 22:14:32.501020 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jul 14 22:14:32.509054 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 14 22:14:32.522692 ignition[793]: Ignition 2.19.0 Jul 14 22:14:32.522707 ignition[793]: Stage: disks Jul 14 22:14:32.522947 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:14:32.522962 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:14:32.524154 ignition[793]: disks: disks passed Jul 14 22:14:32.524203 ignition[793]: Ignition finished successfully Jul 14 22:14:32.529575 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 14 22:14:32.529886 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 14 22:14:32.531481 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 14 22:14:32.531814 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 22:14:32.536485 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 22:14:32.537454 systemd[1]: Reached target basic.target - Basic System. Jul 14 22:14:32.549013 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 14 22:14:32.561498 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 14 22:14:32.567944 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 14 22:14:32.574935 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 14 22:14:32.663861 kernel: EXT4-fs (vda9): mounted filesystem dda007d3-640b-4d11-976f-3b761ca7aabd r/w with ordered data mode. Quota mode: none. Jul 14 22:14:32.664672 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 14 22:14:32.666943 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 14 22:14:32.682909 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 22:14:32.685640 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 14 22:14:32.688106 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 14 22:14:32.688169 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 22:14:32.690048 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 22:14:32.691844 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810) Jul 14 22:14:32.694153 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:14:32.694178 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:14:32.694192 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:14:32.698842 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 22:14:32.700431 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 14 22:14:32.702430 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 14 22:14:32.706000 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 14 22:14:32.741781 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 22:14:32.746526 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Jul 14 22:14:32.751184 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 22:14:32.755619 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 22:14:32.840028 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 14 22:14:32.849914 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 14 22:14:32.852567 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 14 22:14:32.860847 kernel: BTRFS info (device vda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:14:32.878630 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 14 22:14:32.883172 ignition[925]: INFO : Ignition 2.19.0 Jul 14 22:14:32.883172 ignition[925]: INFO : Stage: mount Jul 14 22:14:32.885152 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:14:32.885152 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:14:32.885152 ignition[925]: INFO : mount: mount passed Jul 14 22:14:32.885152 ignition[925]: INFO : Ignition finished successfully Jul 14 22:14:32.886371 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 14 22:14:32.896948 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 14 22:14:33.131392 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 14 22:14:33.146085 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 22:14:33.153851 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (936) Jul 14 22:14:33.156721 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:14:33.156750 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:14:33.156762 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:14:33.159847 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 22:14:33.161833 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 14 22:14:33.188406 ignition[953]: INFO : Ignition 2.19.0 Jul 14 22:14:33.188406 ignition[953]: INFO : Stage: files Jul 14 22:14:33.190262 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:14:33.190262 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:14:33.190262 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Jul 14 22:14:33.194036 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 22:14:33.194036 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 22:14:33.194036 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 22:14:33.194036 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 22:14:33.194036 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 22:14:33.193510 unknown[953]: wrote ssh authorized keys file for user: core Jul 14 22:14:33.201665 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 14 22:14:33.201665 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 14 22:14:34.311085 systemd-networkd[771]: eth0: Gained IPv6LL Jul 14 22:14:43.267073 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 14 22:14:43.372031 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 14 22:14:43.372031 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 14 22:14:43.375912 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 22:14:43.375912 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 14 22:14:43.375912 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 14 22:14:43.375912 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 22:14:43.375912 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 22:14:43.375912 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 22:14:43.375912 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 22:14:43.375912 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:14:43.375912 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:14:43.375912 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 14 22:14:43.375912 ignition[953]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 14 22:14:43.375912 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 14 22:14:43.375912 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 14 22:15:14.055938 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 14 22:15:15.151719 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 14 22:15:15.151719 ignition[953]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 14 22:15:15.155235 ignition[953]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 22:15:15.157193 ignition[953]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 22:15:15.157193 ignition[953]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 14 22:15:15.157193 ignition[953]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 14 22:15:15.161277 ignition[953]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:15:15.163070 ignition[953]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:15:15.163070 ignition[953]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 14 22:15:15.163070 ignition[953]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 22:15:15.184916 ignition[953]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:15:15.189857 ignition[953]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:15:15.191612 ignition[953]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 22:15:15.191612 ignition[953]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 14 22:15:15.194671 ignition[953]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 14 22:15:15.196262 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:15:15.198250 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:15:15.200105 ignition[953]: INFO : files: files passed Jul 14 22:15:15.200945 ignition[953]: INFO : Ignition finished successfully Jul 14 22:15:15.204164 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 14 22:15:15.221007 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 14 22:15:15.223465 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
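[Editor's note] Everything the files stage did above (the SSH key for core, the Helm tarball under /opt, the nginx/nfs manifests, the kubernetes sysext link, prepare-helm.service enabled and coreos-metadata.service disabled) is driven by the Ignition config handed to the VM. That config is not shown in this log; a hypothetical Butane snippet producing roughly the same operations, transpiled to Ignition JSON with the butane tool, might look like:

    # Illustrative only; paths mirror the log, key and unit contents are placeholders.
    butane --pretty --strict <<'EOF' > config.ign
    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA... user@example    # placeholder key
    storage:
      files:
        - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm-v3.17.3-linux-amd64.tar.gz linux-amd64/helm
            [Install]
            WantedBy=multi-user.target
        - name: coreos-metadata.service
          enabled: false
    EOF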
Jul 14 22:15:15.226289 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 22:15:15.227363 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 14 22:15:15.234401 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory Jul 14 22:15:15.237128 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:15:15.238931 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:15:15.240570 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:15:15.243622 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 22:15:15.246268 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 14 22:15:15.255985 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 14 22:15:15.280489 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 22:15:15.281509 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 14 22:15:15.284154 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 14 22:15:15.286120 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 14 22:15:15.288154 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 14 22:15:15.298195 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 14 22:15:15.313223 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 22:15:15.329953 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 14 22:15:15.339676 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:15:15.340884 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:15:15.343040 systemd[1]: Stopped target timers.target - Timer Units. Jul 14 22:15:15.344960 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 22:15:15.345067 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 22:15:15.347284 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 14 22:15:15.348754 systemd[1]: Stopped target basic.target - Basic System. Jul 14 22:15:15.350684 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 14 22:15:15.352633 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 22:15:15.354544 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 14 22:15:15.356599 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 14 22:15:15.358632 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 22:15:15.360813 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 14 22:15:15.362710 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 14 22:15:15.364793 systemd[1]: Stopped target swap.target - Swaps. Jul 14 22:15:15.366483 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 22:15:15.366595 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 14 22:15:15.408159 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jul 14 22:15:15.409867 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:15:15.410972 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 14 22:15:15.411081 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:15:15.411445 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 22:15:15.411550 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 14 22:15:15.412420 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 22:15:15.412524 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 22:15:15.413148 systemd[1]: Stopped target paths.target - Path Units. Jul 14 22:15:15.413381 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 22:15:15.418903 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:15:15.420404 systemd[1]: Stopped target slices.target - Slice Units. Jul 14 22:15:15.422174 systemd[1]: Stopped target sockets.target - Socket Units. Jul 14 22:15:15.424422 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 22:15:15.424512 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 22:15:15.426302 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 22:15:15.426387 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 22:15:15.428264 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 22:15:15.428368 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 22:15:15.430408 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 22:15:15.430508 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 14 22:15:15.440950 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 14 22:15:15.441844 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 22:15:15.441956 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:15:15.483024 ignition[1007]: INFO : Ignition 2.19.0 Jul 14 22:15:15.483024 ignition[1007]: INFO : Stage: umount Jul 14 22:15:15.483024 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:15:15.483024 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:15:15.483024 ignition[1007]: INFO : umount: umount passed Jul 14 22:15:15.483024 ignition[1007]: INFO : Ignition finished successfully Jul 14 22:15:15.444810 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 14 22:15:15.446190 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 22:15:15.446399 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:15:15.448495 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 22:15:15.449147 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 22:15:15.453865 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 22:15:15.453985 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 14 22:15:15.481138 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 22:15:15.481243 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 14 22:15:15.485038 systemd[1]: Stopped target network.target - Network. 
Jul 14 22:15:15.486379 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 22:15:15.486431 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 14 22:15:15.488605 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 22:15:15.488651 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 14 22:15:15.490355 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 22:15:15.490400 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 14 22:15:15.492136 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 14 22:15:15.492184 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 14 22:15:15.494171 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 14 22:15:15.533322 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 14 22:15:15.535864 systemd-networkd[771]: eth0: DHCPv6 lease lost Jul 14 22:15:15.536200 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 22:15:15.536759 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 22:15:15.536894 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 14 22:15:15.538649 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 22:15:15.538777 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 14 22:15:15.541431 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 22:15:15.541489 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:15:15.542688 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 22:15:15.542744 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 14 22:15:15.552927 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 14 22:15:15.553873 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 22:15:15.553929 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 22:15:15.556247 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:15:15.558802 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 22:15:15.558944 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 14 22:15:15.614719 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 22:15:15.614793 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:15:15.616016 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 22:15:15.616064 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 14 22:15:15.617937 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 14 22:15:15.617984 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:15:15.620277 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 22:15:15.620446 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:15:15.622536 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 22:15:15.622652 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 14 22:15:15.625205 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 22:15:15.625265 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jul 14 22:15:15.626380 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 22:15:15.626418 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 22:15:15.628313 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 22:15:15.628363 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 14 22:15:15.630398 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 22:15:15.630444 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 14 22:15:15.632285 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 22:15:15.632332 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 22:15:15.648078 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 14 22:15:15.673666 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 22:15:15.673757 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:15:15.676116 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 22:15:15.676167 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:15:15.678523 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 22:15:15.678652 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 14 22:15:15.681154 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 14 22:15:15.691962 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 14 22:15:15.698449 systemd[1]: Switching root. Jul 14 22:15:15.730384 systemd-journald[193]: Journal stopped Jul 14 22:15:16.985936 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jul 14 22:15:16.986000 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 22:15:16.986017 kernel: SELinux: policy capability open_perms=1 Jul 14 22:15:16.986029 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 22:15:16.986040 kernel: SELinux: policy capability always_check_network=0 Jul 14 22:15:16.986051 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 22:15:16.986066 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 22:15:16.986078 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 22:15:16.986089 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 22:15:16.986100 kernel: audit: type=1403 audit(1752531316.263:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 22:15:16.986123 systemd[1]: Successfully loaded SELinux policy in 40.585ms. Jul 14 22:15:16.986147 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.970ms. Jul 14 22:15:16.986160 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 22:15:16.986172 systemd[1]: Detected virtualization kvm. Jul 14 22:15:16.986184 systemd[1]: Detected architecture x86-64. Jul 14 22:15:16.986200 systemd[1]: Detected first boot. Jul 14 22:15:16.986212 systemd[1]: Initializing machine ID from VM UUID. Jul 14 22:15:16.986223 zram_generator::config[1051]: No configuration found. Jul 14 22:15:16.986243 systemd[1]: Populated /etc with preset unit settings. 
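[Editor's note] After the pivot, PID 1 restarts journald, loads the SELinux policy, and logs its systemd 255 feature string, the KVM virtualization detection, and the first-boot machine-ID initialization. On the booted machine the same facts can be double-checked with standard tools, for example:

    systemctl --version             # systemd 255 plus the +PAM +AUDIT +SELINUX ... feature flags
    cat /sys/fs/selinux/enforce     # current SELinux mode: 0 = permissive, 1 = enforcing
    systemd-detect-virt             # prints "kvm" for this guest
    cat /etc/machine-id             # ID initialized from the VM UUID on first boot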
Jul 14 22:15:16.986255 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 14 22:15:16.986267 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 14 22:15:16.986279 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 22:15:16.986292 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 14 22:15:16.986306 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 14 22:15:16.986318 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 14 22:15:16.986330 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 14 22:15:16.986342 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 14 22:15:16.986354 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 14 22:15:16.986367 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 14 22:15:16.986379 systemd[1]: Created slice user.slice - User and Session Slice. Jul 14 22:15:16.986390 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:15:16.986405 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:15:16.986417 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 14 22:15:16.986434 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 14 22:15:16.986446 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 14 22:15:16.986459 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 22:15:16.986471 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 14 22:15:16.986482 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:15:16.986495 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 14 22:15:16.986508 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 14 22:15:16.986530 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 14 22:15:16.986543 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 14 22:15:16.986556 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:15:16.986568 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 22:15:16.986580 systemd[1]: Reached target slices.target - Slice Units. Jul 14 22:15:16.986592 systemd[1]: Reached target swap.target - Swaps. Jul 14 22:15:16.986604 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 14 22:15:16.986616 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 14 22:15:16.986630 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:15:16.986642 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 22:15:16.986654 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 22:15:16.986666 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 14 22:15:16.986678 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jul 14 22:15:16.986690 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 14 22:15:16.986702 systemd[1]: Mounting media.mount - External Media Directory... Jul 14 22:15:16.986714 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:15:16.986725 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 14 22:15:16.986740 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 14 22:15:16.986752 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 14 22:15:16.986764 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 22:15:16.986777 systemd[1]: Reached target machines.target - Containers. Jul 14 22:15:16.986789 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 14 22:15:16.986801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:15:16.986832 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 22:15:16.986844 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 14 22:15:16.986859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:15:16.986871 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 22:15:16.986883 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:15:16.986895 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 14 22:15:16.986907 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:15:16.986919 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 22:15:16.986931 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 22:15:16.986943 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 14 22:15:16.986955 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 14 22:15:16.986970 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 22:15:16.986982 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 22:15:16.986993 kernel: loop: module loaded Jul 14 22:15:16.987004 kernel: fuse: init (API version 7.39) Jul 14 22:15:16.987016 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 22:15:16.987028 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 14 22:15:16.987040 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 14 22:15:16.987054 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 22:15:16.987066 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 22:15:16.987080 systemd[1]: Stopped verity-setup.service. Jul 14 22:15:16.987093 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:15:16.987105 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 14 22:15:16.987117 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jul 14 22:15:16.987146 systemd-journald[1121]: Collecting audit messages is disabled. Jul 14 22:15:16.987167 systemd[1]: Mounted media.mount - External Media Directory. Jul 14 22:15:16.987182 kernel: ACPI: bus type drm_connector registered Jul 14 22:15:16.987193 systemd-journald[1121]: Journal started Jul 14 22:15:16.987215 systemd-journald[1121]: Runtime Journal (/run/log/journal/7f053fb396aa4f9e91b5be8bb3a737cc) is 6.0M, max 48.4M, 42.3M free. Jul 14 22:15:16.777576 systemd[1]: Queued start job for default target multi-user.target. Jul 14 22:15:16.794573 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 14 22:15:16.795046 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 14 22:15:16.990426 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 22:15:16.990707 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 14 22:15:16.992007 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 14 22:15:16.993361 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 14 22:15:16.994641 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:15:16.996208 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 22:15:16.996380 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 14 22:15:16.997910 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 14 22:15:16.999378 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:15:16.999561 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:15:17.001005 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:15:17.001179 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 22:15:17.002758 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:15:17.002940 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:15:17.004582 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 22:15:17.004752 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 14 22:15:17.006137 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:15:17.006307 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:15:17.007778 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 22:15:17.009202 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 14 22:15:17.010827 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 14 22:15:17.025256 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 14 22:15:17.033954 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 14 22:15:17.036269 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 14 22:15:17.037412 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 22:15:17.037442 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 22:15:17.039439 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 14 22:15:17.041741 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
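[Editor's note] systemd-journald is now writing a volatile journal under /run/log/journal (6.0M used of a 48.4M cap, per the line above); systemd-journal-flush.service moves it to persistent storage shortly afterwards. Typical ways to inspect exactly this data on the running machine:

    journalctl -b                          # all messages from the current boot
    journalctl -u ignition-files.service   # just the Ignition files-stage output seen earlier
    journalctl -k                          # kernel messages only
    journalctl --disk-usage                # space consumed by runtime and persistent journals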
Jul 14 22:15:17.044381 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 14 22:15:17.045508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:15:17.048054 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 14 22:15:17.051787 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 14 22:15:17.053601 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:15:17.055004 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 14 22:15:17.056195 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 22:15:17.064410 systemd-journald[1121]: Time spent on flushing to /var/log/journal/7f053fb396aa4f9e91b5be8bb3a737cc is 15.484ms for 946 entries. Jul 14 22:15:17.064410 systemd-journald[1121]: System Journal (/var/log/journal/7f053fb396aa4f9e91b5be8bb3a737cc) is 8.0M, max 195.6M, 187.6M free. Jul 14 22:15:17.092745 systemd-journald[1121]: Received client request to flush runtime journal. Jul 14 22:15:17.092779 kernel: loop0: detected capacity change from 0 to 140768 Jul 14 22:15:17.062113 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 22:15:17.064817 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 14 22:15:17.068992 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 14 22:15:17.073200 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 14 22:15:17.077057 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 14 22:15:17.078989 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 14 22:15:17.081353 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:15:17.085198 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 14 22:15:17.089259 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 14 22:15:17.100099 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 14 22:15:17.105072 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 14 22:15:17.108205 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 14 22:15:17.112887 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:15:17.119742 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 14 22:15:17.135657 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 22:15:17.135295 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 22:15:17.135998 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 14 22:15:17.137872 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 14 22:15:17.150068 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jul 14 22:15:17.164881 kernel: loop1: detected capacity change from 0 to 229808 Jul 14 22:15:17.174020 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Jul 14 22:15:17.174400 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Jul 14 22:15:17.181631 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:15:17.197946 kernel: loop2: detected capacity change from 0 to 142488 Jul 14 22:15:17.232844 kernel: loop3: detected capacity change from 0 to 140768 Jul 14 22:15:17.244853 kernel: loop4: detected capacity change from 0 to 229808 Jul 14 22:15:17.251837 kernel: loop5: detected capacity change from 0 to 142488 Jul 14 22:15:17.260038 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 14 22:15:17.260705 (sd-merge)[1190]: Merged extensions into '/usr'. Jul 14 22:15:17.264467 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Jul 14 22:15:17.264482 systemd[1]: Reloading... Jul 14 22:15:17.310951 zram_generator::config[1214]: No configuration found. Jul 14 22:15:17.380495 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 22:15:17.437906 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:15:17.487391 systemd[1]: Reloading finished in 222 ms. Jul 14 22:15:17.517786 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 14 22:15:17.519448 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 14 22:15:17.538984 systemd[1]: Starting ensure-sysext.service... Jul 14 22:15:17.540836 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 22:15:17.547496 systemd[1]: Reloading requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)... Jul 14 22:15:17.547520 systemd[1]: Reloading... Jul 14 22:15:17.562492 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 22:15:17.562886 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 14 22:15:17.563890 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 22:15:17.564193 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jul 14 22:15:17.564280 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jul 14 22:15:17.567551 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 22:15:17.567564 systemd-tmpfiles[1254]: Skipping /boot Jul 14 22:15:17.580136 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 22:15:17.580151 systemd-tmpfiles[1254]: Skipping /boot Jul 14 22:15:17.608847 zram_generator::config[1287]: No configuration found. Jul 14 22:15:17.705764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:15:17.754968 systemd[1]: Reloading finished in 207 ms. Jul 14 22:15:17.772266 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
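[Editor's note] The (sd-merge) lines show systemd-sysext overlaying three system extension images onto /usr: containerd-flatcar, docker-flatcar, and the kubernetes image that Ignition linked into /etc/extensions earlier. The merge state can be inspected or redone at runtime, for example:

    systemd-sysext status      # which hierarchies currently have extensions merged
    ls -l /etc/extensions/     # kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
    systemd-sysext refresh     # unmerge and re-merge after adding or removing images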
Jul 14 22:15:17.773995 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:15:17.792845 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 22:15:17.795728 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 14 22:15:17.798300 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 14 22:15:17.802581 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 22:15:17.805188 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:15:17.808019 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 14 22:15:17.812970 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:15:17.813140 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:15:17.816552 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:15:17.821174 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:15:17.823506 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:15:17.824903 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:15:17.825016 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:15:17.826067 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:15:17.826278 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:15:17.830550 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:15:17.830986 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:15:17.835251 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:15:17.835676 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:15:17.842280 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 14 22:15:17.844546 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 14 22:15:17.847211 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Jul 14 22:15:17.850347 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:15:17.850563 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:15:17.855201 augenrules[1350]: No rules Jul 14 22:15:17.857201 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:15:17.859989 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 22:15:17.863408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:15:17.867030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:15:17.869009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:15:17.873923 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jul 14 22:15:17.880661 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 14 22:15:17.882514 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:15:17.884694 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:15:17.888187 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 22:15:17.896545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:15:17.896842 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:15:17.899387 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 14 22:15:17.901477 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:15:17.901729 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 22:15:17.903963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:15:17.904267 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:15:17.906545 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:15:17.906734 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:15:17.908587 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 14 22:15:17.913200 systemd[1]: Finished ensure-sysext.service. Jul 14 22:15:17.927400 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 14 22:15:17.937102 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 22:15:17.938303 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:15:17.938388 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 22:15:17.941428 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 14 22:15:17.942602 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:15:17.942751 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 14 22:15:17.953842 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1376) Jul 14 22:15:17.994875 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 14 22:15:17.999849 kernel: ACPI: button: Power Button [PWRF] Jul 14 22:15:18.009283 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 14 22:15:18.021032 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 14 22:15:18.036843 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 14 22:15:18.045545 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 14 22:15:18.045845 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 14 22:15:18.046036 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 14 22:15:18.056592 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Jul 14 22:15:18.058649 systemd[1]: Reached target time-set.target - System Time Set. Jul 14 22:15:18.062666 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 14 22:15:18.064742 systemd-networkd[1387]: lo: Link UP Jul 14 22:15:18.065094 systemd-networkd[1387]: lo: Gained carrier Jul 14 22:15:18.066851 systemd-networkd[1387]: Enumeration completed Jul 14 22:15:18.067119 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 22:15:18.067576 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 22:15:18.067648 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 22:15:18.068711 systemd-networkd[1387]: eth0: Link UP Jul 14 22:15:18.068763 systemd-networkd[1387]: eth0: Gained carrier Jul 14 22:15:18.068930 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 22:15:18.076093 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 14 22:15:18.083895 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 22:15:18.085345 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Jul 14 22:15:18.089991 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 22:15:18.090140 systemd-timesyncd[1391]: Initial clock synchronization to Mon 2025-07-14 22:15:18.228207 UTC. Jul 14 22:15:18.097484 systemd-resolved[1324]: Positive Trust Anchors: Jul 14 22:15:18.097518 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:15:18.097552 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 22:15:18.106673 systemd-resolved[1324]: Defaulting to hostname 'linux'. Jul 14 22:15:18.145857 kernel: mousedev: PS/2 mouse device common for all mice Jul 14 22:15:18.146292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:15:18.147692 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 22:15:18.153043 systemd[1]: Reached target network.target - Network. Jul 14 22:15:18.154096 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:15:18.160920 kernel: kvm_amd: TSC scaling supported Jul 14 22:15:18.160968 kernel: kvm_amd: Nested Virtualization enabled Jul 14 22:15:18.160981 kernel: kvm_amd: Nested Paging enabled Jul 14 22:15:18.160996 kernel: kvm_amd: LBR virtualization supported Jul 14 22:15:18.161954 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 14 22:15:18.161984 kernel: kvm_amd: Virtual GIF supported Jul 14 22:15:18.184895 kernel: EDAC MC: Ver: 3.0.0 Jul 14 22:15:18.225682 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
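[Editor's note] eth0 is matched by the stock /usr/lib/systemd/network/zz-default.network and configured via DHCPv4 (10.0.0.96/16, gateway 10.0.0.1), after which timesyncd syncs against 10.0.0.1:123 and resolved settles on the fallback hostname 'linux'. A catch-all DHCP .network file of roughly that shape, and the commands to confirm what the log reports, would look like this (the exact shipped file may differ):

    # Approximate shape of a zz-default.network-style catch-all (illustrative):
    #   [Match]
    #   Name=*
    #   [Network]
    #   DHCP=yes
    networkctl status eth0          # address, gateway and carrier state negotiated above
    resolvectl status               # per-link DNS configuration from systemd-resolved
    timedatectl timesync-status     # NTP peer (10.0.0.1:123 in this boot)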
Jul 14 22:15:18.250978 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 14 22:15:18.252604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:15:18.259907 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:15:18.289157 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 14 22:15:18.290882 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:15:18.292002 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 22:15:18.293128 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 14 22:15:18.294349 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 14 22:15:18.295777 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 14 22:15:18.297122 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 14 22:15:18.298410 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 14 22:15:18.299734 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 22:15:18.299760 systemd[1]: Reached target paths.target - Path Units. Jul 14 22:15:18.300673 systemd[1]: Reached target timers.target - Timer Units. Jul 14 22:15:18.302627 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 14 22:15:18.305388 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 14 22:15:18.312557 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 14 22:15:18.315366 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 14 22:15:18.317150 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 14 22:15:18.318289 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 22:15:18.319247 systemd[1]: Reached target basic.target - Basic System. Jul 14 22:15:18.320244 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 14 22:15:18.320278 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 14 22:15:18.321295 systemd[1]: Starting containerd.service - containerd container runtime... Jul 14 22:15:18.323610 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 14 22:15:18.326918 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:15:18.326934 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 14 22:15:18.332120 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 14 22:15:18.333194 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 14 22:15:18.334510 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 14 22:15:18.336885 jq[1427]: false Jul 14 22:15:18.339774 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 14 22:15:18.342549 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
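[Editor's note] With sysinit and basic.target reached, the socket- and timer-activated units are armed (docker.socket, sshd.socket, dbus.socket, logrotate.timer, mdadm.timer, ...) and the long-running services such as containerd start. Which sockets and timers are actually active can be listed directly:

    systemctl list-sockets     # docker.socket, sshd.socket, dbus.socket, ...
    systemctl list-timers      # logrotate.timer, mdadm.timer, systemd-tmpfiles-clean.timer, ...
    systemctl status containerd.service prepare-helm.service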
Jul 14 22:15:18.346752 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 14 22:15:18.354018 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 14 22:15:18.355791 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 22:15:18.356308 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 22:15:18.356464 extend-filesystems[1428]: Found loop3 Jul 14 22:15:18.362028 extend-filesystems[1428]: Found loop4 Jul 14 22:15:18.362028 extend-filesystems[1428]: Found loop5 Jul 14 22:15:18.362028 extend-filesystems[1428]: Found sr0 Jul 14 22:15:18.362028 extend-filesystems[1428]: Found vda Jul 14 22:15:18.362028 extend-filesystems[1428]: Found vda1 Jul 14 22:15:18.362028 extend-filesystems[1428]: Found vda2 Jul 14 22:15:18.362028 extend-filesystems[1428]: Found vda3 Jul 14 22:15:18.362028 extend-filesystems[1428]: Found usr Jul 14 22:15:18.362028 extend-filesystems[1428]: Found vda4 Jul 14 22:15:18.362028 extend-filesystems[1428]: Found vda6 Jul 14 22:15:18.362028 extend-filesystems[1428]: Found vda7 Jul 14 22:15:18.362028 extend-filesystems[1428]: Found vda9 Jul 14 22:15:18.362028 extend-filesystems[1428]: Checking size of /dev/vda9 Jul 14 22:15:18.391345 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 22:15:18.391373 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1364) Jul 14 22:15:18.359886 systemd[1]: Starting update-engine.service - Update Engine... Jul 14 22:15:18.371265 dbus-daemon[1426]: [system] SELinux support is enabled Jul 14 22:15:18.397083 extend-filesystems[1428]: Resized partition /dev/vda9 Jul 14 22:15:18.372035 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 14 22:15:18.398839 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) Jul 14 22:15:18.375972 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 14 22:15:18.380333 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 14 22:15:18.399936 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 22:15:18.400224 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 14 22:15:18.400765 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 22:15:18.401081 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 14 22:15:18.403285 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 22:15:18.403518 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 14 22:15:18.409418 jq[1445]: true Jul 14 22:15:18.415645 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 14 22:15:18.419928 update_engine[1438]: I20250714 22:15:18.419857 1438 main.cc:92] Flatcar Update Engine starting Jul 14 22:15:18.432987 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 22:15:18.433053 update_engine[1438]: I20250714 22:15:18.428675 1438 update_check_scheduler.cc:74] Next update check in 10m3s Jul 14 22:15:18.441121 tar[1452]: linux-amd64/LICENSE Jul 14 22:15:18.445177 systemd[1]: Started update-engine.service - Update Engine. 
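[Editor's note] extend-filesystems.service grows the root ext4 filesystem online from 553472 to 1864699 4k blocks (roughly 2.1 GiB to 7.1 GiB) so the image fills the virtual disk. The core operation is an ordinary online resize; a rough manual equivalent, assuming the vda9 partition has already been grown to its final size, is:

    lsblk /dev/vda        # confirm vda9 is the ROOT partition and its new size
    resize2fs /dev/vda9   # online-grow the mounted ext4 filesystem to fill the partition
    df -h /               # verify the larger filesystem is visible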
Jul 14 22:15:18.450796 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 22:15:18.455127 jq[1457]: true Jul 14 22:15:18.455540 tar[1452]: linux-amd64/helm Jul 14 22:15:18.450840 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 14 22:15:18.452795 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 22:15:18.452845 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 14 22:15:18.457239 systemd-logind[1434]: Watching system buttons on /dev/input/event1 (Power Button) Jul 14 22:15:18.457267 systemd-logind[1434]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 14 22:15:18.467398 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 22:15:18.467398 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 22:15:18.467398 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 22:15:18.458364 systemd-logind[1434]: New seat seat0. Jul 14 22:15:18.480074 extend-filesystems[1428]: Resized filesystem in /dev/vda9 Jul 14 22:15:18.465080 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 14 22:15:18.466876 systemd[1]: Started systemd-logind.service - User Login Management. Jul 14 22:15:18.468494 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 22:15:18.470888 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 22:15:18.511303 locksmithd[1465]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 22:15:18.550494 bash[1487]: Updated "/home/core/.ssh/authorized_keys" Jul 14 22:15:18.551790 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 22:15:18.555279 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 22:15:18.665734 containerd[1453]: time="2025-07-14T22:15:18.665627204Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 14 22:15:18.688511 containerd[1453]: time="2025-07-14T22:15:18.688396703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:15:18.690227 containerd[1453]: time="2025-07-14T22:15:18.690174528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:15:18.690227 containerd[1453]: time="2025-07-14T22:15:18.690218320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 22:15:18.690227 containerd[1453]: time="2025-07-14T22:15:18.690237826Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 22:15:18.690434 containerd[1453]: time="2025-07-14T22:15:18.690415299Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jul 14 22:15:18.690472 containerd[1453]: time="2025-07-14T22:15:18.690435958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 14 22:15:18.690563 containerd[1453]: time="2025-07-14T22:15:18.690516319Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:15:18.690563 containerd[1453]: time="2025-07-14T22:15:18.690535645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:15:18.690888 containerd[1453]: time="2025-07-14T22:15:18.690726984Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:15:18.690888 containerd[1453]: time="2025-07-14T22:15:18.690746731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 22:15:18.690888 containerd[1453]: time="2025-07-14T22:15:18.690759755Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:15:18.690888 containerd[1453]: time="2025-07-14T22:15:18.690770065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 22:15:18.690888 containerd[1453]: time="2025-07-14T22:15:18.690877947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:15:18.691124 containerd[1453]: time="2025-07-14T22:15:18.691104813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:15:18.691250 containerd[1453]: time="2025-07-14T22:15:18.691230218Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:15:18.691250 containerd[1453]: time="2025-07-14T22:15:18.691246999Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 22:15:18.691384 containerd[1453]: time="2025-07-14T22:15:18.691343310Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 14 22:15:18.691416 containerd[1453]: time="2025-07-14T22:15:18.691403633Z" level=info msg="metadata content store policy set" policy=shared Jul 14 22:15:18.876582 tar[1452]: linux-amd64/README.md Jul 14 22:15:18.889191 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 22:15:19.002747 sshd_keygen[1446]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 22:15:19.006409 containerd[1453]: time="2025-07-14T22:15:19.006338267Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 22:15:19.006409 containerd[1453]: time="2025-07-14T22:15:19.006407742Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jul 14 22:15:19.006557 containerd[1453]: time="2025-07-14T22:15:19.006425568Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 22:15:19.006557 containerd[1453]: time="2025-07-14T22:15:19.006443536Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 22:15:19.006557 containerd[1453]: time="2025-07-14T22:15:19.006457359Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 22:15:19.006667 containerd[1453]: time="2025-07-14T22:15:19.006646917Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 22:15:19.007056 containerd[1453]: time="2025-07-14T22:15:19.007014117Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 22:15:19.007219 containerd[1453]: time="2025-07-14T22:15:19.007190769Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 22:15:19.007219 containerd[1453]: time="2025-07-14T22:15:19.007211961Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 14 22:15:19.007299 containerd[1453]: time="2025-07-14T22:15:19.007225764Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 14 22:15:19.007299 containerd[1453]: time="2025-07-14T22:15:19.007240428Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 22:15:19.007299 containerd[1453]: time="2025-07-14T22:15:19.007257718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 22:15:19.007299 containerd[1453]: time="2025-07-14T22:15:19.007274189Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 22:15:19.007299 containerd[1453]: time="2025-07-14T22:15:19.007295057Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 22:15:19.007415 containerd[1453]: time="2025-07-14T22:15:19.007315339Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 22:15:19.007415 containerd[1453]: time="2025-07-14T22:15:19.007344149Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 22:15:19.007415 containerd[1453]: time="2025-07-14T22:15:19.007357317Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 22:15:19.007415 containerd[1453]: time="2025-07-14T22:15:19.007370282Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 22:15:19.007415 containerd[1453]: time="2025-07-14T22:15:19.007391170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.007415 containerd[1453]: time="2025-07-14T22:15:19.007404550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.007415 containerd[1453]: time="2025-07-14T22:15:19.007418000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jul 14 22:15:19.007602 containerd[1453]: time="2025-07-14T22:15:19.007432885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.007602 containerd[1453]: time="2025-07-14T22:15:19.007445952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.007602 containerd[1453]: time="2025-07-14T22:15:19.007459805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.007602 containerd[1453]: time="2025-07-14T22:15:19.007477490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.007602 containerd[1453]: time="2025-07-14T22:15:19.007491072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.007602 containerd[1453]: time="2025-07-14T22:15:19.007505109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.007602 containerd[1453]: time="2025-07-14T22:15:19.007538173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.007602 containerd[1453]: time="2025-07-14T22:15:19.007550270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.007602 containerd[1453]: time="2025-07-14T22:15:19.007562436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.007602 containerd[1453]: time="2025-07-14T22:15:19.007576129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.007602 containerd[1453]: time="2025-07-14T22:15:19.007592025Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 14 22:15:19.007602 containerd[1453]: time="2025-07-14T22:15:19.007611771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.007972 containerd[1453]: time="2025-07-14T22:15:19.007626252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.007972 containerd[1453]: time="2025-07-14T22:15:19.007637964Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 22:15:19.007972 containerd[1453]: time="2025-07-14T22:15:19.007694373Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 22:15:19.007972 containerd[1453]: time="2025-07-14T22:15:19.007711703Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 22:15:19.007972 containerd[1453]: time="2025-07-14T22:15:19.007722758Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 22:15:19.007972 containerd[1453]: time="2025-07-14T22:15:19.007758552Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 22:15:19.007972 containerd[1453]: time="2025-07-14T22:15:19.007768334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jul 14 22:15:19.007972 containerd[1453]: time="2025-07-14T22:15:19.007781279Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 14 22:15:19.007972 containerd[1453]: time="2025-07-14T22:15:19.007791829Z" level=info msg="NRI interface is disabled by configuration." Jul 14 22:15:19.007972 containerd[1453]: time="2025-07-14T22:15:19.007801995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 14 22:15:19.008334 containerd[1453]: time="2025-07-14T22:15:19.008082390Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 22:15:19.008461 containerd[1453]: time="2025-07-14T22:15:19.008342322Z" level=info msg="Connect containerd service" Jul 14 22:15:19.008461 containerd[1453]: time="2025-07-14T22:15:19.008397275Z" level=info msg="using legacy CRI server" Jul 14 22:15:19.008461 containerd[1453]: time="2025-07-14T22:15:19.008406511Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 22:15:19.008649 containerd[1453]: 
time="2025-07-14T22:15:19.008628294Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 22:15:19.009602 containerd[1453]: time="2025-07-14T22:15:19.009561235Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:15:19.009846 containerd[1453]: time="2025-07-14T22:15:19.009712140Z" level=info msg="Start subscribing containerd event" Jul 14 22:15:19.009846 containerd[1453]: time="2025-07-14T22:15:19.009785909Z" level=info msg="Start recovering state" Jul 14 22:15:19.009950 containerd[1453]: time="2025-07-14T22:15:19.009925798Z" level=info msg="Start event monitor" Jul 14 22:15:19.009969 containerd[1453]: time="2025-07-14T22:15:19.009953103Z" level=info msg="Start snapshots syncer" Jul 14 22:15:19.010007 containerd[1453]: time="2025-07-14T22:15:19.009967240Z" level=info msg="Start cni network conf syncer for default" Jul 14 22:15:19.010007 containerd[1453]: time="2025-07-14T22:15:19.009977811Z" level=info msg="Start streaming server" Jul 14 22:15:19.010044 containerd[1453]: time="2025-07-14T22:15:19.010020688Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 22:15:19.010171 containerd[1453]: time="2025-07-14T22:15:19.010142075Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 22:15:19.010255 containerd[1453]: time="2025-07-14T22:15:19.010230708Z" level=info msg="containerd successfully booted in 0.346013s" Jul 14 22:15:19.010353 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 22:15:19.028223 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 22:15:19.035072 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 22:15:19.042997 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 22:15:19.043252 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 22:15:19.046110 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 22:15:19.062231 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 22:15:19.065063 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 22:15:19.067422 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 14 22:15:19.068764 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 22:15:19.175332 systemd-networkd[1387]: eth0: Gained IPv6LL Jul 14 22:15:19.178672 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 22:15:19.180491 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 22:15:19.191108 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 22:15:19.194387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:15:19.197086 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 22:15:19.217369 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 22:15:19.217649 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 14 22:15:19.219657 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 22:15:19.221876 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 14 22:15:19.934105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:15:19.935962 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 14 22:15:19.938062 systemd[1]: Startup finished in 756ms (kernel) + 46.528s (initrd) + 3.712s (userspace) = 50.997s. Jul 14 22:15:19.964275 (kubelet)[1539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:15:20.384376 kubelet[1539]: E0714 22:15:20.384260 1539 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:15:20.388652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:15:20.388886 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:15:20.389202 systemd[1]: kubelet.service: Consumed 1.015s CPU time. Jul 14 22:15:22.662345 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 22:15:22.663638 systemd[1]: Started sshd@0-10.0.0.96:22-10.0.0.1:57078.service - OpenSSH per-connection server daemon (10.0.0.1:57078). Jul 14 22:15:22.709050 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 57078 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:15:22.711325 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:15:22.721892 systemd-logind[1434]: New session 1 of user core. Jul 14 22:15:22.723385 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 22:15:22.731112 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 22:15:22.747364 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 22:15:22.751040 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 22:15:22.759171 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:15:22.873744 systemd[1556]: Queued start job for default target default.target. Jul 14 22:15:22.885073 systemd[1556]: Created slice app.slice - User Application Slice. Jul 14 22:15:22.885097 systemd[1556]: Reached target paths.target - Paths. Jul 14 22:15:22.885109 systemd[1556]: Reached target timers.target - Timers. Jul 14 22:15:22.886617 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 22:15:22.898413 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 22:15:22.898528 systemd[1556]: Reached target sockets.target - Sockets. Jul 14 22:15:22.898546 systemd[1556]: Reached target basic.target - Basic System. Jul 14 22:15:22.898582 systemd[1556]: Reached target default.target - Main User Target. Jul 14 22:15:22.898615 systemd[1556]: Startup finished in 132ms. Jul 14 22:15:22.899042 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 22:15:22.900604 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 22:15:22.961922 systemd[1]: Started sshd@1-10.0.0.96:22-10.0.0.1:57088.service - OpenSSH per-connection server daemon (10.0.0.1:57088). 
Jul 14 22:15:22.998673 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 57088 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:15:23.000014 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:15:23.003848 systemd-logind[1434]: New session 2 of user core. Jul 14 22:15:23.017957 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 22:15:23.070346 sshd[1567]: pam_unix(sshd:session): session closed for user core Jul 14 22:15:23.080281 systemd[1]: sshd@1-10.0.0.96:22-10.0.0.1:57088.service: Deactivated successfully. Jul 14 22:15:23.081714 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 22:15:23.083011 systemd-logind[1434]: Session 2 logged out. Waiting for processes to exit. Jul 14 22:15:23.093060 systemd[1]: Started sshd@2-10.0.0.96:22-10.0.0.1:57104.service - OpenSSH per-connection server daemon (10.0.0.1:57104). Jul 14 22:15:23.094053 systemd-logind[1434]: Removed session 2. Jul 14 22:15:23.122568 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 57104 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:15:23.123966 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:15:23.127592 systemd-logind[1434]: New session 3 of user core. Jul 14 22:15:23.136938 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 22:15:23.185079 sshd[1574]: pam_unix(sshd:session): session closed for user core Jul 14 22:15:23.195541 systemd[1]: sshd@2-10.0.0.96:22-10.0.0.1:57104.service: Deactivated successfully. Jul 14 22:15:23.197030 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 22:15:23.198420 systemd-logind[1434]: Session 3 logged out. Waiting for processes to exit. Jul 14 22:15:23.207145 systemd[1]: Started sshd@3-10.0.0.96:22-10.0.0.1:57116.service - OpenSSH per-connection server daemon (10.0.0.1:57116). Jul 14 22:15:23.208121 systemd-logind[1434]: Removed session 3. Jul 14 22:15:23.236077 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 57116 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:15:23.237585 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:15:23.241356 systemd-logind[1434]: New session 4 of user core. Jul 14 22:15:23.250936 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 22:15:23.304213 sshd[1581]: pam_unix(sshd:session): session closed for user core Jul 14 22:15:23.323281 systemd[1]: sshd@3-10.0.0.96:22-10.0.0.1:57116.service: Deactivated successfully. Jul 14 22:15:23.324701 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 22:15:23.326047 systemd-logind[1434]: Session 4 logged out. Waiting for processes to exit. Jul 14 22:15:23.327181 systemd[1]: Started sshd@4-10.0.0.96:22-10.0.0.1:57118.service - OpenSSH per-connection server daemon (10.0.0.1:57118). Jul 14 22:15:23.328001 systemd-logind[1434]: Removed session 4. Jul 14 22:15:23.360203 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 57118 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:15:23.361535 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:15:23.365055 systemd-logind[1434]: New session 5 of user core. Jul 14 22:15:23.378935 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 14 22:15:23.435632 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 22:15:23.435979 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:15:23.460612 sudo[1591]: pam_unix(sudo:session): session closed for user root Jul 14 22:15:23.462308 sshd[1588]: pam_unix(sshd:session): session closed for user core Jul 14 22:15:23.472289 systemd[1]: sshd@4-10.0.0.96:22-10.0.0.1:57118.service: Deactivated successfully. Jul 14 22:15:23.473681 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 22:15:23.474968 systemd-logind[1434]: Session 5 logged out. Waiting for processes to exit. Jul 14 22:15:23.483058 systemd[1]: Started sshd@5-10.0.0.96:22-10.0.0.1:57130.service - OpenSSH per-connection server daemon (10.0.0.1:57130). Jul 14 22:15:23.483778 systemd-logind[1434]: Removed session 5. Jul 14 22:15:23.512560 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 57130 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:15:23.514172 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:15:23.517778 systemd-logind[1434]: New session 6 of user core. Jul 14 22:15:23.526947 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 22:15:23.579191 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 22:15:23.579520 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:15:23.582809 sudo[1600]: pam_unix(sudo:session): session closed for user root Jul 14 22:15:23.588433 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 14 22:15:23.588755 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:15:23.607048 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 14 22:15:23.608665 auditctl[1603]: No rules Jul 14 22:15:23.609097 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 22:15:23.609299 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 14 22:15:23.611788 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 22:15:23.639401 augenrules[1621]: No rules Jul 14 22:15:23.641012 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 22:15:23.642168 sudo[1599]: pam_unix(sudo:session): session closed for user root Jul 14 22:15:23.643962 sshd[1596]: pam_unix(sshd:session): session closed for user core Jul 14 22:15:23.654417 systemd[1]: sshd@5-10.0.0.96:22-10.0.0.1:57130.service: Deactivated successfully. Jul 14 22:15:23.655808 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 22:15:23.657141 systemd-logind[1434]: Session 6 logged out. Waiting for processes to exit. Jul 14 22:15:23.658412 systemd[1]: Started sshd@6-10.0.0.96:22-10.0.0.1:57142.service - OpenSSH per-connection server daemon (10.0.0.1:57142). Jul 14 22:15:23.659145 systemd-logind[1434]: Removed session 6. Jul 14 22:15:23.691740 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 57142 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:15:23.693193 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:15:23.696729 systemd-logind[1434]: New session 7 of user core. Jul 14 22:15:23.710976 systemd[1]: Started session-7.scope - Session 7 of User core. 
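In session 6 above, the two sudo commands first remove /etc/audit/rules.d/80-selinux.rules and 99-default.rules and then restart audit-rules.service; both auditctl and augenrules subsequently report "No rules", so the kernel audit ruleset is now empty. A hedged sketch for inspecting that state by hand, using standard auditd tooling rather than commands taken from this log:

  # list the rules currently loaded in the kernel (expect "No rules" at this point)
  sudo auditctl -l
  # show what remains in the drop-in directory and reload from it
  ls /etc/audit/rules.d/
  sudo augenrules --load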
Jul 14 22:15:23.763924 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 22:15:23.764273 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:15:24.032034 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 14 22:15:24.032212 (dockerd)[1650]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 22:15:24.755086 dockerd[1650]: time="2025-07-14T22:15:24.755016317Z" level=info msg="Starting up" Jul 14 22:15:25.632215 dockerd[1650]: time="2025-07-14T22:15:25.632155810Z" level=info msg="Loading containers: start." Jul 14 22:15:25.751861 kernel: Initializing XFRM netlink socket Jul 14 22:15:25.839233 systemd-networkd[1387]: docker0: Link UP Jul 14 22:15:25.861390 dockerd[1650]: time="2025-07-14T22:15:25.861333995Z" level=info msg="Loading containers: done." Jul 14 22:15:25.876898 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck888599755-merged.mount: Deactivated successfully. Jul 14 22:15:25.878145 dockerd[1650]: time="2025-07-14T22:15:25.878096560Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 22:15:25.878237 dockerd[1650]: time="2025-07-14T22:15:25.878189979Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 14 22:15:25.878309 dockerd[1650]: time="2025-07-14T22:15:25.878290583Z" level=info msg="Daemon has completed initialization" Jul 14 22:15:25.921088 dockerd[1650]: time="2025-07-14T22:15:25.920920369Z" level=info msg="API listen on /run/docker.sock" Jul 14 22:15:25.921260 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 14 22:15:30.409609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 22:15:30.421101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:15:30.600548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:15:30.605120 (kubelet)[1805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:15:30.642946 kubelet[1805]: E0714 22:15:30.642884 1805 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:15:30.649602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:15:30.649856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:15:36.258744 containerd[1453]: time="2025-07-14T22:15:36.258707912Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.0\"" Jul 14 22:15:40.659553 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 14 22:15:40.668987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:15:40.836103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 22:15:40.840430 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:15:40.878048 kubelet[1823]: E0714 22:15:40.877987 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:15:40.882614 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:15:40.882859 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:15:47.207477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount182244606.mount: Deactivated successfully. Jul 14 22:15:48.144774 containerd[1453]: time="2025-07-14T22:15:48.144707833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:48.145421 containerd[1453]: time="2025-07-14T22:15:48.145365559Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.0: active requests=0, bytes read=30074507" Jul 14 22:15:48.146667 containerd[1453]: time="2025-07-14T22:15:48.146634140Z" level=info msg="ImageCreate event name:\"sha256:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:48.149336 containerd[1453]: time="2025-07-14T22:15:48.149289100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6679a9970a8b2f18647b33bf02e5e9895d286689256e2f7172481b4096e46a32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:48.150583 containerd[1453]: time="2025-07-14T22:15:48.150544500Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.0\" with image id \"sha256:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6679a9970a8b2f18647b33bf02e5e9895d286689256e2f7172481b4096e46a32\", size \"30071307\" in 11.891796809s" Jul 14 22:15:48.150626 containerd[1453]: time="2025-07-14T22:15:48.150590719Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.0\" returns image reference \"sha256:6ba9545b2183ef722d7e8a7f9e9c2abfaf483cd980bc378480631699413d9cf4\"" Jul 14 22:15:48.151340 containerd[1453]: time="2025-07-14T22:15:48.151314365Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.0\"" Jul 14 22:15:49.275800 containerd[1453]: time="2025-07-14T22:15:49.275736535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:49.276577 containerd[1453]: time="2025-07-14T22:15:49.276521292Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.0: active requests=0, bytes read=26007510" Jul 14 22:15:49.277742 containerd[1453]: time="2025-07-14T22:15:49.277709990Z" level=info msg="ImageCreate event name:\"sha256:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:49.280996 containerd[1453]: time="2025-07-14T22:15:49.280633829Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:f0b32ab11fd06504608cdb9084f7284106b4f5f07f35eb8823e70ea0eaaf252a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:49.283359 containerd[1453]: time="2025-07-14T22:15:49.283323217Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.0\" with image id \"sha256:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0b32ab11fd06504608cdb9084f7284106b4f5f07f35eb8823e70ea0eaaf252a\", size \"27635030\" in 1.1319737s" Jul 14 22:15:49.283359 containerd[1453]: time="2025-07-14T22:15:49.283359871Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.0\" returns image reference \"sha256:1d579cb6d696709ea7c8613023cbc1204ac2af295477fe577c8fa741a76efa02\"" Jul 14 22:15:49.283934 containerd[1453]: time="2025-07-14T22:15:49.283857443Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.0\"" Jul 14 22:15:50.479199 containerd[1453]: time="2025-07-14T22:15:50.479130020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:50.480034 containerd[1453]: time="2025-07-14T22:15:50.479997038Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.0: active requests=0, bytes read=20148946" Jul 14 22:15:50.481322 containerd[1453]: time="2025-07-14T22:15:50.481277542Z" level=info msg="ImageCreate event name:\"sha256:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:50.483720 containerd[1453]: time="2025-07-14T22:15:50.483682847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8dd2fbeb7f711da53a89ded239e54133f34110d98de887a39a9021e651b51f1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:50.484763 containerd[1453]: time="2025-07-14T22:15:50.484725358Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.0\" with image id \"sha256:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8dd2fbeb7f711da53a89ded239e54133f34110d98de887a39a9021e651b51f1f\", size \"21776484\" in 1.200813826s" Jul 14 22:15:50.484763 containerd[1453]: time="2025-07-14T22:15:50.484759080Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.0\" returns image reference \"sha256:8d72586a76469984dc4c5c7c36b24fbe4baed63056998c682f07b591d5e0aba4\"" Jul 14 22:15:50.485303 containerd[1453]: time="2025-07-14T22:15:50.485239713Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.0\"" Jul 14 22:15:50.909501 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 14 22:15:50.918981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:15:51.085949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 22:15:51.090308 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:15:51.128380 kubelet[1901]: E0714 22:15:51.128321 1901 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:15:51.132641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:15:51.132871 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:15:52.019408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3944522631.mount: Deactivated successfully. Jul 14 22:15:52.565559 containerd[1453]: time="2025-07-14T22:15:52.565506810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:52.566207 containerd[1453]: time="2025-07-14T22:15:52.566167806Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.0: active requests=0, bytes read=31888707" Jul 14 22:15:52.567340 containerd[1453]: time="2025-07-14T22:15:52.567312286Z" level=info msg="ImageCreate event name:\"sha256:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:52.569369 containerd[1453]: time="2025-07-14T22:15:52.569325565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:05f8984642d05b1b1a6c37605a4a566e46e7290f9291d17885f096c36861095b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:52.570020 containerd[1453]: time="2025-07-14T22:15:52.569955011Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.0\" with image id \"sha256:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68\", repo tag \"registry.k8s.io/kube-proxy:v1.33.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:05f8984642d05b1b1a6c37605a4a566e46e7290f9291d17885f096c36861095b\", size \"31887726\" in 2.08468881s" Jul 14 22:15:52.570020 containerd[1453]: time="2025-07-14T22:15:52.570003562Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.0\" returns image reference \"sha256:f1184a0bd7fe53a4c7098147f250b1f8b287a0e4f8a4e1509ef1d06893267c68\"" Jul 14 22:15:52.570620 containerd[1453]: time="2025-07-14T22:15:52.570553978Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 14 22:15:53.862476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291704902.mount: Deactivated successfully. 
Jul 14 22:15:54.861637 containerd[1453]: time="2025-07-14T22:15:54.861573924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:54.862364 containerd[1453]: time="2025-07-14T22:15:54.862330394Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 14 22:15:54.863521 containerd[1453]: time="2025-07-14T22:15:54.863488268Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:54.866574 containerd[1453]: time="2025-07-14T22:15:54.866543471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:54.867693 containerd[1453]: time="2025-07-14T22:15:54.867663461Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.297057031s" Jul 14 22:15:54.867756 containerd[1453]: time="2025-07-14T22:15:54.867709127Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 14 22:15:54.868173 containerd[1453]: time="2025-07-14T22:15:54.868152619Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 22:15:55.286720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3783886561.mount: Deactivated successfully. 
Jul 14 22:15:55.292190 containerd[1453]: time="2025-07-14T22:15:55.292148675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:55.293043 containerd[1453]: time="2025-07-14T22:15:55.292986965Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 14 22:15:55.293938 containerd[1453]: time="2025-07-14T22:15:55.293900770Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:55.296008 containerd[1453]: time="2025-07-14T22:15:55.295973459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:55.296673 containerd[1453]: time="2025-07-14T22:15:55.296638137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 428.460298ms" Jul 14 22:15:55.296673 containerd[1453]: time="2025-07-14T22:15:55.296663927Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 14 22:15:55.297211 containerd[1453]: time="2025-07-14T22:15:55.297186679Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 14 22:15:57.554078 containerd[1453]: time="2025-07-14T22:15:57.554018594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:57.554857 containerd[1453]: time="2025-07-14T22:15:57.554766774Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739" Jul 14 22:15:57.556083 containerd[1453]: time="2025-07-14T22:15:57.556039365Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:57.559063 containerd[1453]: time="2025-07-14T22:15:57.559017100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:15:57.560136 containerd[1453]: time="2025-07-14T22:15:57.560076143Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.262848184s" Jul 14 22:15:57.560136 containerd[1453]: time="2025-07-14T22:15:57.560128039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 14 22:16:01.159522 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 14 22:16:01.170022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
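By this point kubelet.service has failed and been rescheduled four times at roughly ten-second intervals (22:15:30, 22:15:40, 22:15:50, 22:16:01), each failure caused by the missing /var/lib/kubelet/config.yaml. That cadence is what a restart policy with about a ten-second delay produces; the unit file itself is not shown in this log, but the behaviour corresponds to settings along these lines (a sketch, not the shipped unit):

  # hypothetical drop-in, e.g. /etc/systemd/system/kubelet.service.d/10-restart.conf
  [Service]
  Restart=always
  RestartSec=10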
Jul 14 22:16:01.342535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:16:01.349514 (kubelet)[2001]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:16:01.391396 kubelet[2001]: E0714 22:16:01.391259 2001 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:16:01.396195 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:16:01.396448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:16:03.979707 update_engine[1438]: I20250714 22:16:03.979597 1438 update_attempter.cc:509] Updating boot flags... Jul 14 22:16:04.004891 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2018) Jul 14 22:16:04.029856 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2021) Jul 14 22:16:04.055810 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2021) Jul 14 22:16:10.915457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:16:10.929224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:16:10.959048 systemd[1]: Reloading requested from client PID 2050 ('systemctl') (unit session-7.scope)... Jul 14 22:16:10.959065 systemd[1]: Reloading... Jul 14 22:16:11.043902 zram_generator::config[2090]: No configuration found. Jul 14 22:16:11.368093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:16:11.448161 systemd[1]: Reloading finished in 488 ms. Jul 14 22:16:11.498051 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:16:11.501021 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:16:11.501270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:16:11.502979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:16:11.670283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:16:11.676061 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:16:11.712908 kubelet[2139]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:16:11.712908 kubelet[2139]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 22:16:11.712908 kubelet[2139]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
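After the reload at 22:16:11 the kubelet (PID 2139 here) finally stays up, which implies /var/lib/kubelet/config.yaml now exists; every earlier start died with "open /var/lib/kubelet/config.yaml: no such file or directory". The deprecation warnings above point at the same file: flags such as --container-runtime-endpoint and --volume-plugin-dir are meant to move into that KubeletConfiguration. A minimal sketch of what such a file typically contains, with values chosen to match what this log itself reports (systemd cgroup driver, containerd socket, static pods under /etc/kubernetes/manifests); the file actually written on this host is not shown:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock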
Jul 14 22:16:11.713291 kubelet[2139]: I0714 22:16:11.712960 2139 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:16:11.987892 kubelet[2139]: I0714 22:16:11.987748 2139 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 14 22:16:11.987892 kubelet[2139]: I0714 22:16:11.987783 2139 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:16:11.988044 kubelet[2139]: I0714 22:16:11.988025 2139 server.go:956] "Client rotation is on, will bootstrap in background" Jul 14 22:16:12.008899 kubelet[2139]: I0714 22:16:12.008836 2139 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:16:12.009086 kubelet[2139]: E0714 22:16:12.009024 2139 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 14 22:16:12.015327 kubelet[2139]: E0714 22:16:12.015270 2139 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:16:12.015327 kubelet[2139]: I0714 22:16:12.015312 2139 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:16:12.020709 kubelet[2139]: I0714 22:16:12.020678 2139 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:16:12.020977 kubelet[2139]: I0714 22:16:12.020942 2139 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:16:12.021170 kubelet[2139]: I0714 22:16:12.020964 2139 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 22:16:12.021170 kubelet[2139]: I0714 22:16:12.021165 2139 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:16:12.021170 kubelet[2139]: I0714 22:16:12.021173 2139 container_manager_linux.go:303] "Creating device plugin manager" Jul 14 22:16:12.021314 kubelet[2139]: I0714 22:16:12.021298 2139 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:16:12.023428 kubelet[2139]: I0714 22:16:12.023402 2139 kubelet.go:480] "Attempting to sync node with API server" Jul 14 22:16:12.023428 kubelet[2139]: I0714 22:16:12.023422 2139 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:16:12.023493 kubelet[2139]: I0714 22:16:12.023456 2139 kubelet.go:386] "Adding apiserver pod source" Jul 14 22:16:12.023493 kubelet[2139]: I0714 22:16:12.023475 2139 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:16:12.030293 kubelet[2139]: I0714 22:16:12.029556 2139 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:16:12.030293 kubelet[2139]: I0714 22:16:12.030125 2139 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 14 22:16:12.031703 kubelet[2139]: W0714 22:16:12.031660 2139 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 14 22:16:12.032920 kubelet[2139]: E0714 22:16:12.032639 2139 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 14 22:16:12.033041 kubelet[2139]: E0714 22:16:12.033008 2139 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 14 22:16:12.034520 kubelet[2139]: I0714 22:16:12.034503 2139 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 22:16:12.034563 kubelet[2139]: I0714 22:16:12.034553 2139 server.go:1289] "Started kubelet" Jul 14 22:16:12.034995 kubelet[2139]: I0714 22:16:12.034864 2139 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:16:12.035263 kubelet[2139]: I0714 22:16:12.035240 2139 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:16:12.036603 kubelet[2139]: I0714 22:16:12.035293 2139 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:16:12.036603 kubelet[2139]: I0714 22:16:12.035930 2139 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:16:12.036603 kubelet[2139]: I0714 22:16:12.036149 2139 server.go:317] "Adding debug handlers to kubelet server" Jul 14 22:16:12.036938 kubelet[2139]: I0714 22:16:12.036915 2139 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:16:12.039320 kubelet[2139]: E0714 22:16:12.038666 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:12.039320 kubelet[2139]: I0714 22:16:12.038699 2139 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 22:16:12.039320 kubelet[2139]: I0714 22:16:12.038872 2139 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 22:16:12.039320 kubelet[2139]: I0714 22:16:12.038941 2139 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:16:12.039320 kubelet[2139]: E0714 22:16:12.039199 2139 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 14 22:16:12.039523 kubelet[2139]: I0714 22:16:12.039407 2139 factory.go:223] Registration of the systemd container factory successfully Jul 14 22:16:12.039523 kubelet[2139]: I0714 22:16:12.039477 2139 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:16:12.040135 kubelet[2139]: E0714 22:16:12.040102 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.96:6443: connect: connection refused" interval="200ms" Jul 14 22:16:12.040188 kubelet[2139]: E0714 22:16:12.037735 2139 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.96:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.96:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523dff5fd88691 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:16:12.034524817 +0000 UTC m=+0.354412497,LastTimestamp:2025-07-14 22:16:12.034524817 +0000 UTC m=+0.354412497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 22:16:12.041867 kubelet[2139]: I0714 22:16:12.041840 2139 factory.go:223] Registration of the containerd container factory successfully Jul 14 22:16:12.043151 kubelet[2139]: E0714 22:16:12.043117 2139 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:16:12.059958 kubelet[2139]: I0714 22:16:12.059912 2139 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 14 22:16:12.061686 kubelet[2139]: I0714 22:16:12.061649 2139 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 14 22:16:12.061767 kubelet[2139]: I0714 22:16:12.061750 2139 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 14 22:16:12.061811 kubelet[2139]: I0714 22:16:12.061779 2139 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
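The lease controller above cannot reach the API server and logs "will retry" with interval="200ms"; later entries in this same log show the interval growing to 400ms, 800ms, 1.6s and finally 3.2s. A tiny sketch of that observed doubling pattern (an illustration of the intervals seen here, not the kubelet's actual retry code):

```python
from datetime import timedelta

def lease_retry_intervals(start=timedelta(milliseconds=200), steps=5):
    """Yield the doubling retry intervals observed in the log above
    (200ms -> 400ms -> 800ms -> 1.6s -> 3.2s)."""
    interval = start
    for _ in range(steps):
        yield interval
        interval *= 2

print([f"{i.total_seconds():g}s" for i in lease_retry_intervals()])
# ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s']
```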
Jul 14 22:16:12.061868 kubelet[2139]: I0714 22:16:12.061852 2139 kubelet.go:2436] "Starting kubelet main sync loop" Jul 14 22:16:12.062088 kubelet[2139]: E0714 22:16:12.061953 2139 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:16:12.062139 kubelet[2139]: I0714 22:16:12.062115 2139 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 22:16:12.062139 kubelet[2139]: I0714 22:16:12.062128 2139 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 22:16:12.062180 kubelet[2139]: I0714 22:16:12.062146 2139 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:16:12.062543 kubelet[2139]: E0714 22:16:12.062505 2139 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 14 22:16:12.139543 kubelet[2139]: E0714 22:16:12.139493 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:12.162838 kubelet[2139]: E0714 22:16:12.162777 2139 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:16:12.240048 kubelet[2139]: E0714 22:16:12.239969 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:12.241557 kubelet[2139]: E0714 22:16:12.241519 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="400ms" Jul 14 22:16:12.340931 kubelet[2139]: E0714 22:16:12.340861 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:12.362989 kubelet[2139]: E0714 22:16:12.362917 2139 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:16:12.441492 kubelet[2139]: E0714 22:16:12.441416 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:12.541751 kubelet[2139]: E0714 22:16:12.541592 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:12.642149 kubelet[2139]: E0714 22:16:12.642084 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:12.642658 kubelet[2139]: E0714 22:16:12.642617 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="800ms" Jul 14 22:16:12.739974 kubelet[2139]: I0714 22:16:12.739925 2139 policy_none.go:49] "None policy: Start" Jul 14 22:16:12.739974 kubelet[2139]: I0714 22:16:12.739959 2139 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 22:16:12.739974 kubelet[2139]: I0714 22:16:12.739975 2139 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:16:12.743049 kubelet[2139]: E0714 22:16:12.743019 2139 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:12.763246 kubelet[2139]: E0714 22:16:12.763214 2139 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:16:12.843991 kubelet[2139]: E0714 22:16:12.843853 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:12.917558 kubelet[2139]: E0714 22:16:12.917511 2139 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 14 22:16:12.944279 kubelet[2139]: E0714 22:16:12.944231 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:13.034747 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 14 22:16:13.044481 kubelet[2139]: E0714 22:16:13.044414 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:13.049524 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 14 22:16:13.053440 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 14 22:16:13.067234 kubelet[2139]: E0714 22:16:13.067119 2139 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 14 22:16:13.067392 kubelet[2139]: I0714 22:16:13.067367 2139 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:16:13.067431 kubelet[2139]: I0714 22:16:13.067380 2139 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:16:13.067667 kubelet[2139]: I0714 22:16:13.067605 2139 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:16:13.068554 kubelet[2139]: E0714 22:16:13.068523 2139 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 14 22:16:13.068554 kubelet[2139]: E0714 22:16:13.068564 2139 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 22:16:13.132510 kubelet[2139]: E0714 22:16:13.132348 2139 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 14 22:16:13.169961 kubelet[2139]: I0714 22:16:13.169903 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:16:13.170449 kubelet[2139]: E0714 22:16:13.170392 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Jul 14 22:16:13.207194 kubelet[2139]: E0714 22:16:13.207130 2139 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 14 22:16:13.371862 kubelet[2139]: I0714 22:16:13.371812 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:16:13.372163 kubelet[2139]: E0714 22:16:13.372130 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Jul 14 22:16:13.443921 kubelet[2139]: E0714 22:16:13.443879 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="1.6s" Jul 14 22:16:13.573674 systemd[1]: Created slice kubepods-burstable-pod2626f392a41443150ef6fb957db06b3a.slice - libcontainer container kubepods-burstable-pod2626f392a41443150ef6fb957db06b3a.slice. Jul 14 22:16:13.596160 kubelet[2139]: E0714 22:16:13.596134 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:16:13.597850 systemd[1]: Created slice kubepods-burstable-pod7883c2a360afc5b3c9b064549b9b0c8d.slice - libcontainer container kubepods-burstable-pod7883c2a360afc5b3c9b064549b9b0c8d.slice. Jul 14 22:16:13.612940 kubelet[2139]: E0714 22:16:13.612894 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:16:13.615770 systemd[1]: Created slice kubepods-burstable-podfad475d3be2e7026903cdccc200d075f.slice - libcontainer container kubepods-burstable-podfad475d3be2e7026903cdccc200d075f.slice. 
Jul 14 22:16:13.617480 kubelet[2139]: E0714 22:16:13.617456 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:16:13.647991 kubelet[2139]: I0714 22:16:13.647948 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2626f392a41443150ef6fb957db06b3a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2626f392a41443150ef6fb957db06b3a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:16:13.647991 kubelet[2139]: I0714 22:16:13.647983 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2626f392a41443150ef6fb957db06b3a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2626f392a41443150ef6fb957db06b3a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:16:13.648090 kubelet[2139]: I0714 22:16:13.648002 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:16:13.648090 kubelet[2139]: I0714 22:16:13.648020 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:16:13.648090 kubelet[2139]: I0714 22:16:13.648040 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:16:13.648090 kubelet[2139]: I0714 22:16:13.648055 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2626f392a41443150ef6fb957db06b3a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2626f392a41443150ef6fb957db06b3a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:16:13.648090 kubelet[2139]: I0714 22:16:13.648077 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:16:13.648253 kubelet[2139]: I0714 22:16:13.648092 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:16:13.648253 kubelet[2139]: I0714 22:16:13.648107 2139 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fad475d3be2e7026903cdccc200d075f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fad475d3be2e7026903cdccc200d075f\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:16:13.659506 kubelet[2139]: E0714 22:16:13.659474 2139 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 14 22:16:13.773580 kubelet[2139]: I0714 22:16:13.773463 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:16:13.773995 kubelet[2139]: E0714 22:16:13.773814 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Jul 14 22:16:13.896848 kubelet[2139]: E0714 22:16:13.896795 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:13.897609 containerd[1453]: time="2025-07-14T22:16:13.897553730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2626f392a41443150ef6fb957db06b3a,Namespace:kube-system,Attempt:0,}" Jul 14 22:16:13.913939 kubelet[2139]: E0714 22:16:13.913881 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:13.914415 containerd[1453]: time="2025-07-14T22:16:13.914371612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7883c2a360afc5b3c9b064549b9b0c8d,Namespace:kube-system,Attempt:0,}" Jul 14 22:16:13.918750 kubelet[2139]: E0714 22:16:13.918721 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:13.919083 containerd[1453]: time="2025-07-14T22:16:13.919033208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fad475d3be2e7026903cdccc200d075f,Namespace:kube-system,Attempt:0,}" Jul 14 22:16:14.168428 kubelet[2139]: E0714 22:16:14.168383 2139 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 14 22:16:14.575540 kubelet[2139]: I0714 22:16:14.575417 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:16:14.575893 kubelet[2139]: E0714 22:16:14.575786 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Jul 14 22:16:14.710728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3865727334.mount: Deactivated successfully. 
Jul 14 22:16:14.723039 containerd[1453]: time="2025-07-14T22:16:14.722979353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:16:14.724031 containerd[1453]: time="2025-07-14T22:16:14.723996271Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:16:14.724812 containerd[1453]: time="2025-07-14T22:16:14.724774524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 22:16:14.725794 containerd[1453]: time="2025-07-14T22:16:14.725744704Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:16:14.726638 containerd[1453]: time="2025-07-14T22:16:14.726600612Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 14 22:16:14.727377 containerd[1453]: time="2025-07-14T22:16:14.727346286Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 22:16:14.728313 containerd[1453]: time="2025-07-14T22:16:14.728277002Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:16:14.735129 containerd[1453]: time="2025-07-14T22:16:14.735066371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:16:14.736039 containerd[1453]: time="2025-07-14T22:16:14.735981374Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 838.352212ms" Jul 14 22:16:14.736492 containerd[1453]: time="2025-07-14T22:16:14.736461212Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 817.386946ms" Jul 14 22:16:14.739600 containerd[1453]: time="2025-07-14T22:16:14.739551566Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 825.123532ms" Jul 14 22:16:14.861445 containerd[1453]: time="2025-07-14T22:16:14.860733730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:16:14.861445 containerd[1453]: time="2025-07-14T22:16:14.860902007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:16:14.861445 containerd[1453]: time="2025-07-14T22:16:14.860921168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:16:14.861445 containerd[1453]: time="2025-07-14T22:16:14.861250607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:16:14.862191 containerd[1453]: time="2025-07-14T22:16:14.861960014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:16:14.862191 containerd[1453]: time="2025-07-14T22:16:14.862001151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:16:14.862191 containerd[1453]: time="2025-07-14T22:16:14.862014590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:16:14.862191 containerd[1453]: time="2025-07-14T22:16:14.862071591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:16:14.862374 containerd[1453]: time="2025-07-14T22:16:14.862267227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:16:14.862374 containerd[1453]: time="2025-07-14T22:16:14.862319367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:16:14.862374 containerd[1453]: time="2025-07-14T22:16:14.862333427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:16:14.862972 containerd[1453]: time="2025-07-14T22:16:14.862411663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:16:14.889995 systemd[1]: Started cri-containerd-0e745b40690d050c7645f7c3f3f433e0b9554062e0b3bac2eb56a5b35b73759f.scope - libcontainer container 0e745b40690d050c7645f7c3f3f433e0b9554062e0b3bac2eb56a5b35b73759f. Jul 14 22:16:14.893817 systemd[1]: Started cri-containerd-3919e033f90828bc5130927ca5602572eaea398c28e2abda65b866ae1a7530b0.scope - libcontainer container 3919e033f90828bc5130927ca5602572eaea398c28e2abda65b866ae1a7530b0. Jul 14 22:16:14.895690 systemd[1]: Started cri-containerd-4fffa35f5bd8098748f702802eeb4c9388e3fb4d303c5d8d541149a35e24431f.scope - libcontainer container 4fffa35f5bd8098748f702802eeb4c9388e3fb4d303c5d8d541149a35e24431f. 
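The containerd entries above resolve the registry.k8s.io/pause:3.8 sandbox image three times, once per control-plane static pod, each reporting a "Pulled image ... in N ms" duration. A small sketch that extracts those durations from abridged copies of the lines above:

```python
import re

# Abridged copies of the containerd "Pulled image" entries shown above.
log_lines = [
    'Pulled image "registry.k8s.io/pause:3.8" ... in 838.352212ms',
    'Pulled image "registry.k8s.io/pause:3.8" ... in 817.386946ms',
    'Pulled image "registry.k8s.io/pause:3.8" ... in 825.123532ms',
]

pattern = re.compile(r'Pulled image "([^"]+)".* in ([\d.]+)(ms|s)')

for line in log_lines:
    m = pattern.search(line)
    if m:
        image, value, unit = m.groups()
        millis = float(value) * (1000 if unit == "s" else 1)
        print(f"{image}: {millis:.1f} ms")
```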
Jul 14 22:16:14.930303 kubelet[2139]: E0714 22:16:14.930242 2139 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 14 22:16:14.930813 containerd[1453]: time="2025-07-14T22:16:14.930339136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2626f392a41443150ef6fb957db06b3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e745b40690d050c7645f7c3f3f433e0b9554062e0b3bac2eb56a5b35b73759f\"" Jul 14 22:16:14.931627 kubelet[2139]: E0714 22:16:14.931599 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:14.932578 containerd[1453]: time="2025-07-14T22:16:14.932538065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7883c2a360afc5b3c9b064549b9b0c8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3919e033f90828bc5130927ca5602572eaea398c28e2abda65b866ae1a7530b0\"" Jul 14 22:16:14.935789 kubelet[2139]: E0714 22:16:14.935766 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:14.937883 containerd[1453]: time="2025-07-14T22:16:14.937222633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fad475d3be2e7026903cdccc200d075f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fffa35f5bd8098748f702802eeb4c9388e3fb4d303c5d8d541149a35e24431f\"" Jul 14 22:16:14.940092 kubelet[2139]: E0714 22:16:14.940066 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:14.940550 containerd[1453]: time="2025-07-14T22:16:14.940492366Z" level=info msg="CreateContainer within sandbox \"0e745b40690d050c7645f7c3f3f433e0b9554062e0b3bac2eb56a5b35b73759f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 22:16:14.943059 containerd[1453]: time="2025-07-14T22:16:14.943020205Z" level=info msg="CreateContainer within sandbox \"3919e033f90828bc5130927ca5602572eaea398c28e2abda65b866ae1a7530b0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 22:16:14.945677 containerd[1453]: time="2025-07-14T22:16:14.945649328Z" level=info msg="CreateContainer within sandbox \"4fffa35f5bd8098748f702802eeb4c9388e3fb4d303c5d8d541149a35e24431f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 22:16:14.961886 containerd[1453]: time="2025-07-14T22:16:14.961704343Z" level=info msg="CreateContainer within sandbox \"0e745b40690d050c7645f7c3f3f433e0b9554062e0b3bac2eb56a5b35b73759f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"da114613f6c5259d343b874b2b9d41448519df07515ed916a73013b1b0b65948\"" Jul 14 22:16:14.962246 containerd[1453]: time="2025-07-14T22:16:14.962209025Z" level=info msg="StartContainer for \"da114613f6c5259d343b874b2b9d41448519df07515ed916a73013b1b0b65948\"" Jul 14 22:16:14.966865 containerd[1453]: time="2025-07-14T22:16:14.966811448Z" level=info msg="CreateContainer within sandbox 
\"4fffa35f5bd8098748f702802eeb4c9388e3fb4d303c5d8d541149a35e24431f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5fd84f8744afde49d7f59433adbc51d6306edbbffac1734758fe4cddb66a7ed1\"" Jul 14 22:16:14.967319 containerd[1453]: time="2025-07-14T22:16:14.967294244Z" level=info msg="StartContainer for \"5fd84f8744afde49d7f59433adbc51d6306edbbffac1734758fe4cddb66a7ed1\"" Jul 14 22:16:14.967757 containerd[1453]: time="2025-07-14T22:16:14.967519572Z" level=info msg="CreateContainer within sandbox \"3919e033f90828bc5130927ca5602572eaea398c28e2abda65b866ae1a7530b0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"813ca494a3e7028c0a1e739c4d016d41579c74367f5a23c17a1f1ba3bd430e5b\"" Jul 14 22:16:14.968139 containerd[1453]: time="2025-07-14T22:16:14.968101569Z" level=info msg="StartContainer for \"813ca494a3e7028c0a1e739c4d016d41579c74367f5a23c17a1f1ba3bd430e5b\"" Jul 14 22:16:14.990946 systemd[1]: Started cri-containerd-da114613f6c5259d343b874b2b9d41448519df07515ed916a73013b1b0b65948.scope - libcontainer container da114613f6c5259d343b874b2b9d41448519df07515ed916a73013b1b0b65948. Jul 14 22:16:14.994504 systemd[1]: Started cri-containerd-5fd84f8744afde49d7f59433adbc51d6306edbbffac1734758fe4cddb66a7ed1.scope - libcontainer container 5fd84f8744afde49d7f59433adbc51d6306edbbffac1734758fe4cddb66a7ed1. Jul 14 22:16:14.996672 systemd[1]: Started cri-containerd-813ca494a3e7028c0a1e739c4d016d41579c74367f5a23c17a1f1ba3bd430e5b.scope - libcontainer container 813ca494a3e7028c0a1e739c4d016d41579c74367f5a23c17a1f1ba3bd430e5b. Jul 14 22:16:15.038727 containerd[1453]: time="2025-07-14T22:16:15.038668781Z" level=info msg="StartContainer for \"5fd84f8744afde49d7f59433adbc51d6306edbbffac1734758fe4cddb66a7ed1\" returns successfully" Jul 14 22:16:15.038890 containerd[1453]: time="2025-07-14T22:16:15.038777890Z" level=info msg="StartContainer for \"da114613f6c5259d343b874b2b9d41448519df07515ed916a73013b1b0b65948\" returns successfully" Jul 14 22:16:15.038890 containerd[1453]: time="2025-07-14T22:16:15.038804677Z" level=info msg="StartContainer for \"813ca494a3e7028c0a1e739c4d016d41579c74367f5a23c17a1f1ba3bd430e5b\" returns successfully" Jul 14 22:16:15.045473 kubelet[2139]: E0714 22:16:15.045398 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="3.2s" Jul 14 22:16:15.071055 kubelet[2139]: E0714 22:16:15.070944 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:16:15.071055 kubelet[2139]: E0714 22:16:15.071054 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:15.073112 kubelet[2139]: E0714 22:16:15.072983 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:16:15.073112 kubelet[2139]: E0714 22:16:15.073064 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:15.075092 kubelet[2139]: E0714 22:16:15.074990 2139 kubelet.go:3305] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:16:15.075220 kubelet[2139]: E0714 22:16:15.075184 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:16.078554 kubelet[2139]: E0714 22:16:16.078510 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:16:16.078999 kubelet[2139]: E0714 22:16:16.078617 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:16.079137 kubelet[2139]: E0714 22:16:16.079110 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:16:16.079204 kubelet[2139]: E0714 22:16:16.079189 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:16.177357 kubelet[2139]: I0714 22:16:16.177316 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:16:16.186805 kubelet[2139]: I0714 22:16:16.186764 2139 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 22:16:16.186805 kubelet[2139]: E0714 22:16:16.186807 2139 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 22:16:16.194099 kubelet[2139]: E0714 22:16:16.194051 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:16.295010 kubelet[2139]: E0714 22:16:16.294967 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:16.395624 kubelet[2139]: E0714 22:16:16.395508 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:16.496202 kubelet[2139]: E0714 22:16:16.496155 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:16.596525 kubelet[2139]: E0714 22:16:16.596451 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:16.697166 kubelet[2139]: E0714 22:16:16.697124 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:16.800791 kubelet[2139]: E0714 22:16:16.800745 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:16.901624 kubelet[2139]: E0714 22:16:16.901566 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:17.002415 kubelet[2139]: E0714 22:16:17.002284 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:17.079155 kubelet[2139]: E0714 22:16:17.079128 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:16:17.079552 kubelet[2139]: E0714 22:16:17.079259 2139 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:17.103305 kubelet[2139]: E0714 22:16:17.103257 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:17.205956 kubelet[2139]: E0714 22:16:17.205905 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:17.306923 kubelet[2139]: E0714 22:16:17.306758 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:17.394145 kubelet[2139]: E0714 22:16:17.394109 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:16:17.394258 kubelet[2139]: E0714 22:16:17.394223 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:17.407293 kubelet[2139]: E0714 22:16:17.407263 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:17.508114 kubelet[2139]: E0714 22:16:17.508088 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:17.609113 kubelet[2139]: E0714 22:16:17.609014 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:17.709708 kubelet[2139]: E0714 22:16:17.709654 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:17.783185 kubelet[2139]: E0714 22:16:17.783148 2139 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:16:17.783299 kubelet[2139]: E0714 22:16:17.783275 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:17.810258 kubelet[2139]: E0714 22:16:17.810223 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:17.910989 kubelet[2139]: E0714 22:16:17.910951 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:18.011957 kubelet[2139]: E0714 22:16:18.011916 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:18.112611 kubelet[2139]: E0714 22:16:18.112572 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:18.115647 systemd[1]: Reloading requested from client PID 2432 ('systemctl') (unit session-7.scope)... Jul 14 22:16:18.115663 systemd[1]: Reloading... Jul 14 22:16:18.175866 zram_generator::config[2477]: No configuration found. 
Jul 14 22:16:18.213463 kubelet[2139]: E0714 22:16:18.213412 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:18.279373 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:16:18.313622 kubelet[2139]: E0714 22:16:18.313566 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:18.368851 systemd[1]: Reloading finished in 252 ms. Jul 14 22:16:18.412867 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:16:18.439104 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:16:18.439377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:16:18.452012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:16:18.617455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:16:18.623422 (kubelet)[2516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:16:18.664192 kubelet[2516]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:16:18.664192 kubelet[2516]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 22:16:18.664192 kubelet[2516]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:16:18.664609 kubelet[2516]: I0714 22:16:18.664229 2516 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:16:18.671865 kubelet[2516]: I0714 22:16:18.671831 2516 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 14 22:16:18.671976 kubelet[2516]: I0714 22:16:18.671885 2516 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:16:18.672099 kubelet[2516]: I0714 22:16:18.672075 2516 server.go:956] "Client rotation is on, will bootstrap in background" Jul 14 22:16:18.673192 kubelet[2516]: I0714 22:16:18.673169 2516 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 14 22:16:18.676403 kubelet[2516]: I0714 22:16:18.676359 2516 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:16:18.679637 kubelet[2516]: E0714 22:16:18.679589 2516 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:16:18.679637 kubelet[2516]: I0714 22:16:18.679633 2516 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
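The restarted kubelet (PID 2516) logs in the usual klog text format: a severity letter, MMDD date, wall-clock time, PID, source file:line, then the message. A small sketch that splits that header off a line copied from the entries above; the regex is an assumption based only on the format visible in this log:

```python
import re

# klog-style header: severity, MMDD, time, PID, file:line, "] ", message.
KLOG = re.compile(
    r'(?P<sev>[IWEF])(?P<date>\d{4}) (?P<time>[\d:.]+)\s+'
    r'(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)'
)

line = ('I0714 22:16:18.664229 2516 server.go:212] "--pod-infra-container-image will not be '
        'pruned by the image garbage collector in kubelet and should also be set in the remote runtime"')
m = KLOG.match(line)
if m:
    print(m.group("sev"), m.group("src"), "->", m.group("msg")[:40])
```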
Jul 14 22:16:18.685073 kubelet[2516]: I0714 22:16:18.685033 2516 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 22:16:18.685392 kubelet[2516]: I0714 22:16:18.685358 2516 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:16:18.685578 kubelet[2516]: I0714 22:16:18.685393 2516 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 22:16:18.685656 kubelet[2516]: I0714 22:16:18.685580 2516 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:16:18.685656 kubelet[2516]: I0714 22:16:18.685593 2516 container_manager_linux.go:303] "Creating device plugin manager" Jul 14 22:16:18.686350 kubelet[2516]: I0714 22:16:18.686325 2516 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:16:18.688186 kubelet[2516]: I0714 22:16:18.688147 2516 kubelet.go:480] "Attempting to sync node with API server" Jul 14 22:16:18.688272 kubelet[2516]: I0714 22:16:18.688206 2516 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:16:18.688272 kubelet[2516]: I0714 22:16:18.688235 2516 kubelet.go:386] "Adding apiserver pod source" Jul 14 22:16:18.688272 kubelet[2516]: I0714 22:16:18.688252 2516 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:16:18.689602 kubelet[2516]: I0714 22:16:18.689579 2516 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:16:18.690157 kubelet[2516]: I0714 22:16:18.690129 2516 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 14 22:16:18.693005 kubelet[2516]: I0714 22:16:18.692976 2516 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 22:16:18.693060 kubelet[2516]: I0714 22:16:18.693021 2516 server.go:1289] "Started kubelet" Jul 14 22:16:18.693274 kubelet[2516]: 
I0714 22:16:18.693241 2516 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:16:18.693341 kubelet[2516]: I0714 22:16:18.693297 2516 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:16:18.694994 kubelet[2516]: I0714 22:16:18.693570 2516 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:16:18.694994 kubelet[2516]: I0714 22:16:18.694143 2516 server.go:317] "Adding debug handlers to kubelet server" Jul 14 22:16:18.694994 kubelet[2516]: I0714 22:16:18.694772 2516 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:16:18.695229 kubelet[2516]: I0714 22:16:18.695189 2516 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:16:18.697847 kubelet[2516]: E0714 22:16:18.697826 2516 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:16:18.697901 kubelet[2516]: I0714 22:16:18.697862 2516 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 22:16:18.698061 kubelet[2516]: I0714 22:16:18.698042 2516 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 22:16:18.698154 kubelet[2516]: I0714 22:16:18.698141 2516 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:16:18.701042 kubelet[2516]: I0714 22:16:18.701015 2516 factory.go:223] Registration of the containerd container factory successfully Jul 14 22:16:18.701042 kubelet[2516]: I0714 22:16:18.701038 2516 factory.go:223] Registration of the systemd container factory successfully Jul 14 22:16:18.701153 kubelet[2516]: I0714 22:16:18.701130 2516 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:16:18.702413 kubelet[2516]: I0714 22:16:18.702389 2516 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 14 22:16:18.703227 kubelet[2516]: E0714 22:16:18.703189 2516 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:16:18.717973 kubelet[2516]: I0714 22:16:18.717938 2516 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 14 22:16:18.717973 kubelet[2516]: I0714 22:16:18.717965 2516 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 14 22:16:18.717973 kubelet[2516]: I0714 22:16:18.717984 2516 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
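As on the first start, the restarted kubelet registers the systemd and containerd cadvisor factories but fails to register a cri-o factory because /var/run/crio/crio.sock does not exist, which is expected on a containerd-only node. A hypothetical check for which CRI sockets are present; the crio path comes from the error above, while the containerd path is the conventional default and is an assumption this log does not confirm:

```python
from pathlib import Path

# crio path taken from the factory.go error above; containerd path is an assumed default.
CRI_SOCKETS = {
    "crio": Path("/var/run/crio/crio.sock"),
    "containerd": Path("/run/containerd/containerd.sock"),
}

for name, sock in CRI_SOCKETS.items():
    state = "present" if sock.exists() else "absent"
    print(f"{name}: {sock} ({state})")
```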
Jul 14 22:16:18.718136 kubelet[2516]: I0714 22:16:18.717992 2516 kubelet.go:2436] "Starting kubelet main sync loop" Jul 14 22:16:18.718136 kubelet[2516]: E0714 22:16:18.718038 2516 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:16:18.740631 kubelet[2516]: I0714 22:16:18.740600 2516 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 22:16:18.740631 kubelet[2516]: I0714 22:16:18.740618 2516 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 22:16:18.740778 kubelet[2516]: I0714 22:16:18.740647 2516 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:16:18.740873 kubelet[2516]: I0714 22:16:18.740846 2516 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 22:16:18.740898 kubelet[2516]: I0714 22:16:18.740868 2516 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 22:16:18.740898 kubelet[2516]: I0714 22:16:18.740888 2516 policy_none.go:49] "None policy: Start" Jul 14 22:16:18.740898 kubelet[2516]: I0714 22:16:18.740898 2516 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 22:16:18.740963 kubelet[2516]: I0714 22:16:18.740911 2516 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:16:18.741015 kubelet[2516]: I0714 22:16:18.741000 2516 state_mem.go:75] "Updated machine memory state" Jul 14 22:16:18.744936 kubelet[2516]: E0714 22:16:18.744913 2516 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 14 22:16:18.745286 kubelet[2516]: I0714 22:16:18.745105 2516 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:16:18.745286 kubelet[2516]: I0714 22:16:18.745121 2516 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:16:18.745286 kubelet[2516]: I0714 22:16:18.745274 2516 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:16:18.746376 kubelet[2516]: E0714 22:16:18.746341 2516 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 14 22:16:18.819748 kubelet[2516]: I0714 22:16:18.819678 2516 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 22:16:18.819748 kubelet[2516]: I0714 22:16:18.819759 2516 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 22:16:18.819984 kubelet[2516]: I0714 22:16:18.819808 2516 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 22:16:18.850520 kubelet[2516]: I0714 22:16:18.850489 2516 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:16:18.858843 kubelet[2516]: I0714 22:16:18.856759 2516 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 14 22:16:18.858843 kubelet[2516]: I0714 22:16:18.856865 2516 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 22:16:18.999210 kubelet[2516]: I0714 22:16:18.999174 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:16:18.999210 kubelet[2516]: I0714 22:16:18.999209 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:16:18.999394 kubelet[2516]: I0714 22:16:18.999231 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fad475d3be2e7026903cdccc200d075f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fad475d3be2e7026903cdccc200d075f\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:16:18.999394 kubelet[2516]: I0714 22:16:18.999246 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2626f392a41443150ef6fb957db06b3a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2626f392a41443150ef6fb957db06b3a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:16:18.999394 kubelet[2516]: I0714 22:16:18.999274 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:16:18.999394 kubelet[2516]: I0714 22:16:18.999288 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2626f392a41443150ef6fb957db06b3a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2626f392a41443150ef6fb957db06b3a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:16:18.999394 kubelet[2516]: I0714 22:16:18.999336 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/2626f392a41443150ef6fb957db06b3a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2626f392a41443150ef6fb957db06b3a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:16:18.999501 kubelet[2516]: I0714 22:16:18.999392 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:16:18.999501 kubelet[2516]: I0714 22:16:18.999413 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7883c2a360afc5b3c9b064549b9b0c8d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7883c2a360afc5b3c9b064549b9b0c8d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:16:19.129747 kubelet[2516]: E0714 22:16:19.129713 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:19.129849 kubelet[2516]: E0714 22:16:19.129753 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:19.129891 kubelet[2516]: E0714 22:16:19.129880 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:19.689271 kubelet[2516]: I0714 22:16:19.689232 2516 apiserver.go:52] "Watching apiserver" Jul 14 22:16:19.698398 kubelet[2516]: I0714 22:16:19.698378 2516 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 22:16:19.732058 kubelet[2516]: I0714 22:16:19.732029 2516 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 22:16:19.732277 kubelet[2516]: E0714 22:16:19.732244 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:19.732433 kubelet[2516]: E0714 22:16:19.732414 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:19.737210 kubelet[2516]: E0714 22:16:19.737180 2516 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 22:16:19.737358 kubelet[2516]: E0714 22:16:19.737295 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:19.748780 kubelet[2516]: I0714 22:16:19.748697 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.74866573 podStartE2EDuration="1.74866573s" podCreationTimestamp="2025-07-14 22:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:16:19.748324298 +0000 UTC m=+1.120877224" 
watchObservedRunningTime="2025-07-14 22:16:19.74866573 +0000 UTC m=+1.121218626" Jul 14 22:16:19.761362 kubelet[2516]: I0714 22:16:19.761309 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7612921099999999 podStartE2EDuration="1.76129211s" podCreationTimestamp="2025-07-14 22:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:16:19.756074646 +0000 UTC m=+1.128627562" watchObservedRunningTime="2025-07-14 22:16:19.76129211 +0000 UTC m=+1.133845006" Jul 14 22:16:19.767070 kubelet[2516]: I0714 22:16:19.767018 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.767005833 podStartE2EDuration="1.767005833s" podCreationTimestamp="2025-07-14 22:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:16:19.761372675 +0000 UTC m=+1.133925571" watchObservedRunningTime="2025-07-14 22:16:19.767005833 +0000 UTC m=+1.139558729" Jul 14 22:16:20.733726 kubelet[2516]: E0714 22:16:20.733699 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:20.734260 kubelet[2516]: E0714 22:16:20.733699 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:20.734260 kubelet[2516]: E0714 22:16:20.733974 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:22.209589 kubelet[2516]: E0714 22:16:22.209542 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:24.893461 kubelet[2516]: I0714 22:16:24.893423 2516 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 22:16:24.894005 kubelet[2516]: I0714 22:16:24.893978 2516 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 22:16:24.894035 containerd[1453]: time="2025-07-14T22:16:24.893760217Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 14 22:16:25.840050 systemd[1]: Created slice kubepods-besteffort-podfd7ece6a_073f_4e3e_b40d_aac04a712e4a.slice - libcontainer container kubepods-besteffort-podfd7ece6a_073f_4e3e_b40d_aac04a712e4a.slice. 
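The recurring dns.go "Nameserver limits exceeded" entries mean the host's resolv.conf lists more than the three nameservers the resolver (and hence the kubelet) will honour; only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied and the remainder are silently dropped. A minimal Go sketch of that truncation, assuming a hypothetical fourth entry since the omitted server is not visible in the log:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the glibc/kubelet limit of three resolvers;
// anything beyond it is dropped and reported as omitted.
const maxNameservers = 3

func applyNameserverLimit(resolvConf string) (applied, omitted []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(applied) < maxNameservers {
				applied = append(applied, fields[1])
			} else {
				omitted = append(omitted, fields[1])
			}
		}
	}
	return applied, omitted
}

func main() {
	// Hypothetical host resolv.conf with more than three entries; the real
	// fourth entry on this node is not shown in the log.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	applied, omitted := applyNameserverLimit(conf)
	fmt.Printf("the applied nameserver line is: %s (omitted: %v)\n",
		strings.Join(applied, " "), omitted)
}
```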
Jul 14 22:16:25.845589 kubelet[2516]: I0714 22:16:25.845551 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd7ece6a-073f-4e3e-b40d-aac04a712e4a-xtables-lock\") pod \"kube-proxy-xd5cg\" (UID: \"fd7ece6a-073f-4e3e-b40d-aac04a712e4a\") " pod="kube-system/kube-proxy-xd5cg" Jul 14 22:16:25.845589 kubelet[2516]: I0714 22:16:25.845587 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnrvb\" (UniqueName: \"kubernetes.io/projected/fd7ece6a-073f-4e3e-b40d-aac04a712e4a-kube-api-access-fnrvb\") pod \"kube-proxy-xd5cg\" (UID: \"fd7ece6a-073f-4e3e-b40d-aac04a712e4a\") " pod="kube-system/kube-proxy-xd5cg" Jul 14 22:16:25.845701 kubelet[2516]: I0714 22:16:25.845606 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd7ece6a-073f-4e3e-b40d-aac04a712e4a-kube-proxy\") pod \"kube-proxy-xd5cg\" (UID: \"fd7ece6a-073f-4e3e-b40d-aac04a712e4a\") " pod="kube-system/kube-proxy-xd5cg" Jul 14 22:16:25.845701 kubelet[2516]: I0714 22:16:25.845627 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd7ece6a-073f-4e3e-b40d-aac04a712e4a-lib-modules\") pod \"kube-proxy-xd5cg\" (UID: \"fd7ece6a-073f-4e3e-b40d-aac04a712e4a\") " pod="kube-system/kube-proxy-xd5cg" Jul 14 22:16:26.148344 kubelet[2516]: E0714 22:16:26.148221 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:26.148970 containerd[1453]: time="2025-07-14T22:16:26.148933490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xd5cg,Uid:fd7ece6a-073f-4e3e-b40d-aac04a712e4a,Namespace:kube-system,Attempt:0,}" Jul 14 22:16:26.400234 containerd[1453]: time="2025-07-14T22:16:26.400047430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:16:26.400234 containerd[1453]: time="2025-07-14T22:16:26.400105849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:16:26.400234 containerd[1453]: time="2025-07-14T22:16:26.400115628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:16:26.400401 containerd[1453]: time="2025-07-14T22:16:26.400236982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:16:26.427045 systemd[1]: Started cri-containerd-7baa302265608fde0a89bb5174efd15cf1cb8a45b0af89cad9e521275700be00.scope - libcontainer container 7baa302265608fde0a89bb5174efd15cf1cb8a45b0af89cad9e521275700be00. 
Jul 14 22:16:26.447656 containerd[1453]: time="2025-07-14T22:16:26.447585225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xd5cg,Uid:fd7ece6a-073f-4e3e-b40d-aac04a712e4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7baa302265608fde0a89bb5174efd15cf1cb8a45b0af89cad9e521275700be00\"" Jul 14 22:16:26.448454 kubelet[2516]: E0714 22:16:26.448421 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:26.477196 containerd[1453]: time="2025-07-14T22:16:26.477139997Z" level=info msg="CreateContainer within sandbox \"7baa302265608fde0a89bb5174efd15cf1cb8a45b0af89cad9e521275700be00\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 22:16:26.494430 containerd[1453]: time="2025-07-14T22:16:26.494389730Z" level=info msg="CreateContainer within sandbox \"7baa302265608fde0a89bb5174efd15cf1cb8a45b0af89cad9e521275700be00\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"566a2768b2d6c4d2cca0172c245220ae8b6d106ba32379fd30410918ca3c077b\"" Jul 14 22:16:26.495039 containerd[1453]: time="2025-07-14T22:16:26.495015413Z" level=info msg="StartContainer for \"566a2768b2d6c4d2cca0172c245220ae8b6d106ba32379fd30410918ca3c077b\"" Jul 14 22:16:26.523981 systemd[1]: Started cri-containerd-566a2768b2d6c4d2cca0172c245220ae8b6d106ba32379fd30410918ca3c077b.scope - libcontainer container 566a2768b2d6c4d2cca0172c245220ae8b6d106ba32379fd30410918ca3c077b. Jul 14 22:16:26.553322 containerd[1453]: time="2025-07-14T22:16:26.553268941Z" level=info msg="StartContainer for \"566a2768b2d6c4d2cca0172c245220ae8b6d106ba32379fd30410918ca3c077b\" returns successfully" Jul 14 22:16:26.742595 kubelet[2516]: E0714 22:16:26.742561 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:26.751584 kubelet[2516]: I0714 22:16:26.751525 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xd5cg" podStartSLOduration=1.7515038330000001 podStartE2EDuration="1.751503833s" podCreationTimestamp="2025-07-14 22:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:16:26.750965708 +0000 UTC m=+8.123518624" watchObservedRunningTime="2025-07-14 22:16:26.751503833 +0000 UTC m=+8.124056729" Jul 14 22:16:29.325709 kubelet[2516]: E0714 22:16:29.325656 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:29.756250 systemd[1]: Created slice kubepods-besteffort-podfe21253e_6566_4218_929a_c352f29a6c79.slice - libcontainer container kubepods-besteffort-podfe21253e_6566_4218_929a_c352f29a6c79.slice. 
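The sequence above for kube-proxy-xd5cg — RunPodSandbox returning a sandbox id, CreateContainer inside that sandbox, then StartContainer — is the kubelet driving containerd's CRI plugin over its gRPC socket. A rough sketch of the same three calls against the CRI runtime v1 API; the socket path and the kube-proxy image reference are assumptions, since neither appears in the log:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The kubelet talks to containerd's CRI plugin over a local socket;
	// this path is the containerd default and may differ on other hosts.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: the metadata mirrors the fields printed in the log line.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-xd5cg",
			Uid:       "fd7ece6a-073f-4e3e-b40d-aac04a712e4a",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside the returned sandbox id.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			// Placeholder image reference; the actual kube-proxy image is not shown in the log.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:placeholder"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, which corresponds to the "returns successfully" entry above.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s, container %s started", sb.PodSandboxId, cc.ContainerId)
}
```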
Jul 14 22:16:29.768973 kubelet[2516]: I0714 22:16:29.768924 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fe21253e-6566-4218-929a-c352f29a6c79-var-lib-calico\") pod \"tigera-operator-747864d56d-45h9l\" (UID: \"fe21253e-6566-4218-929a-c352f29a6c79\") " pod="tigera-operator/tigera-operator-747864d56d-45h9l" Jul 14 22:16:29.768973 kubelet[2516]: I0714 22:16:29.768965 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4zfc\" (UniqueName: \"kubernetes.io/projected/fe21253e-6566-4218-929a-c352f29a6c79-kube-api-access-l4zfc\") pod \"tigera-operator-747864d56d-45h9l\" (UID: \"fe21253e-6566-4218-929a-c352f29a6c79\") " pod="tigera-operator/tigera-operator-747864d56d-45h9l" Jul 14 22:16:30.062911 containerd[1453]: time="2025-07-14T22:16:30.062745094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-45h9l,Uid:fe21253e-6566-4218-929a-c352f29a6c79,Namespace:tigera-operator,Attempt:0,}" Jul 14 22:16:30.088767 containerd[1453]: time="2025-07-14T22:16:30.087975150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:16:30.088767 containerd[1453]: time="2025-07-14T22:16:30.088706143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:16:30.088767 containerd[1453]: time="2025-07-14T22:16:30.088722225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:16:30.089076 containerd[1453]: time="2025-07-14T22:16:30.088858970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:16:30.114961 systemd[1]: Started cri-containerd-903b9536f0eb73301381bca0a691dc5d38b7a6da20a596dab7b21829850f17d3.scope - libcontainer container 903b9536f0eb73301381bca0a691dc5d38b7a6da20a596dab7b21829850f17d3. Jul 14 22:16:30.149626 containerd[1453]: time="2025-07-14T22:16:30.149567320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-45h9l,Uid:fe21253e-6566-4218-929a-c352f29a6c79,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"903b9536f0eb73301381bca0a691dc5d38b7a6da20a596dab7b21829850f17d3\"" Jul 14 22:16:30.153960 containerd[1453]: time="2025-07-14T22:16:30.153915646Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 14 22:16:30.376880 kubelet[2516]: E0714 22:16:30.376744 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:31.499785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4216564179.mount: Deactivated successfully. 
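The PullImage line for quay.io/tigera/operator:v1.38.3 is the matching call on the CRI image service; its response carries the resolved image reference (a sha256 id) that the later "returns image reference" entry reports. A small sketch, assuming the same containerd socket as in the sandbox example above:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd socket path, as in the earlier sketch.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.3"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// ImageRef is the resolved image id, which the "returns image reference"
	// log line reports once the pull completes.
	fmt.Println("pulled:", resp.ImageRef)
}
```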
Jul 14 22:16:32.214064 kubelet[2516]: E0714 22:16:32.213958 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:32.756065 kubelet[2516]: E0714 22:16:32.756037 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:32.873518 containerd[1453]: time="2025-07-14T22:16:32.873460785Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:16:32.874368 containerd[1453]: time="2025-07-14T22:16:32.874328046Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 14 22:16:32.875597 containerd[1453]: time="2025-07-14T22:16:32.875540084Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:16:32.877905 containerd[1453]: time="2025-07-14T22:16:32.877874703Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:16:32.878519 containerd[1453]: time="2025-07-14T22:16:32.878483538Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.724522982s" Jul 14 22:16:32.878550 containerd[1453]: time="2025-07-14T22:16:32.878519961Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 14 22:16:32.884118 containerd[1453]: time="2025-07-14T22:16:32.884086198Z" level=info msg="CreateContainer within sandbox \"903b9536f0eb73301381bca0a691dc5d38b7a6da20a596dab7b21829850f17d3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 14 22:16:32.894260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2864704514.mount: Deactivated successfully. Jul 14 22:16:32.895744 containerd[1453]: time="2025-07-14T22:16:32.895712258Z" level=info msg="CreateContainer within sandbox \"903b9536f0eb73301381bca0a691dc5d38b7a6da20a596dab7b21829850f17d3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e9131a6bbf0e42af1a265de2f517ace4a0a5086725fb62f67eed2841dda248d0\"" Jul 14 22:16:32.896161 containerd[1453]: time="2025-07-14T22:16:32.896133518Z" level=info msg="StartContainer for \"e9131a6bbf0e42af1a265de2f517ace4a0a5086725fb62f67eed2841dda248d0\"" Jul 14 22:16:32.925080 systemd[1]: Started cri-containerd-e9131a6bbf0e42af1a265de2f517ace4a0a5086725fb62f67eed2841dda248d0.scope - libcontainer container e9131a6bbf0e42af1a265de2f517ace4a0a5086725fb62f67eed2841dda248d0. 
Jul 14 22:16:32.949574 containerd[1453]: time="2025-07-14T22:16:32.949527614Z" level=info msg="StartContainer for \"e9131a6bbf0e42af1a265de2f517ace4a0a5086725fb62f67eed2841dda248d0\" returns successfully" Jul 14 22:16:33.765465 kubelet[2516]: I0714 22:16:33.765394 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-45h9l" podStartSLOduration=2.039414748 podStartE2EDuration="4.765379411s" podCreationTimestamp="2025-07-14 22:16:29 +0000 UTC" firstStartedPulling="2025-07-14 22:16:30.153290284 +0000 UTC m=+11.525843180" lastFinishedPulling="2025-07-14 22:16:32.879254947 +0000 UTC m=+14.251807843" observedRunningTime="2025-07-14 22:16:33.765219472 +0000 UTC m=+15.137772368" watchObservedRunningTime="2025-07-14 22:16:33.765379411 +0000 UTC m=+15.137932297" Jul 14 22:16:38.399852 sudo[1632]: pam_unix(sudo:session): session closed for user root Jul 14 22:16:38.402547 sshd[1629]: pam_unix(sshd:session): session closed for user core Jul 14 22:16:38.406944 systemd-logind[1434]: Session 7 logged out. Waiting for processes to exit. Jul 14 22:16:38.409126 systemd[1]: sshd@6-10.0.0.96:22-10.0.0.1:57142.service: Deactivated successfully. Jul 14 22:16:38.412716 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 22:16:38.415904 systemd[1]: session-7.scope: Consumed 5.687s CPU time, 164.9M memory peak, 0B memory swap peak. Jul 14 22:16:38.418197 systemd-logind[1434]: Removed session 7. Jul 14 22:16:40.734266 systemd[1]: Created slice kubepods-besteffort-pod8c29779b_55ef_47d5_a0f8_433580b9ca0d.slice - libcontainer container kubepods-besteffort-pod8c29779b_55ef_47d5_a0f8_433580b9ca0d.slice. Jul 14 22:16:40.746881 kubelet[2516]: I0714 22:16:40.746804 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c29779b-55ef-47d5-a0f8-433580b9ca0d-tigera-ca-bundle\") pod \"calico-typha-677df65775-wv9qr\" (UID: \"8c29779b-55ef-47d5-a0f8-433580b9ca0d\") " pod="calico-system/calico-typha-677df65775-wv9qr" Jul 14 22:16:40.746881 kubelet[2516]: I0714 22:16:40.746883 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h8w2\" (UniqueName: \"kubernetes.io/projected/8c29779b-55ef-47d5-a0f8-433580b9ca0d-kube-api-access-8h8w2\") pod \"calico-typha-677df65775-wv9qr\" (UID: \"8c29779b-55ef-47d5-a0f8-433580b9ca0d\") " pod="calico-system/calico-typha-677df65775-wv9qr" Jul 14 22:16:40.747307 kubelet[2516]: I0714 22:16:40.746907 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8c29779b-55ef-47d5-a0f8-433580b9ca0d-typha-certs\") pod \"calico-typha-677df65775-wv9qr\" (UID: \"8c29779b-55ef-47d5-a0f8-433580b9ca0d\") " pod="calico-system/calico-typha-677df65775-wv9qr" Jul 14 22:16:40.779300 systemd[1]: Created slice kubepods-besteffort-podf63336ea_c1f9_468f_8457_7e2419b62b2a.slice - libcontainer container kubepods-besteffort-podf63336ea_c1f9_468f_8457_7e2419b62b2a.slice. 
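In the pod_startup_latency_tracker entries, podStartSLOduration is podStartE2EDuration minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling); for the static pods and kube-proxy above, the pull timestamps are zero, so the two durations coincide. Reproducing the arithmetic for the tigera-operator pod with the timestamps copied from the entry above (not the tracker's own code, just the same subtraction):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps printed by the latency tracker.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Values copied from the tigera-operator pod_startup_latency_tracker entry.
	firstStartedPulling := parse("2025-07-14 22:16:30.153290284 +0000 UTC")
	lastFinishedPulling := parse("2025-07-14 22:16:32.879254947 +0000 UTC")
	podStartE2E := 4765379411 * time.Nanosecond // podStartE2EDuration=4.765379411s

	pullTime := lastFinishedPulling.Sub(firstStartedPulling)
	sloDuration := podStartE2E - pullTime

	fmt.Println("image pull time:    ", pullTime)    // 2.725964663s
	fmt.Println("podStartSLOduration:", sloDuration) // 2.039414748s, as logged
}
```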
Jul 14 22:16:40.847977 kubelet[2516]: I0714 22:16:40.847920 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f63336ea-c1f9-468f-8457-7e2419b62b2a-cni-bin-dir\") pod \"calico-node-tgkc4\" (UID: \"f63336ea-c1f9-468f-8457-7e2419b62b2a\") " pod="calico-system/calico-node-tgkc4" Jul 14 22:16:40.848122 kubelet[2516]: I0714 22:16:40.848098 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f63336ea-c1f9-468f-8457-7e2419b62b2a-lib-modules\") pod \"calico-node-tgkc4\" (UID: \"f63336ea-c1f9-468f-8457-7e2419b62b2a\") " pod="calico-system/calico-node-tgkc4" Jul 14 22:16:40.848255 kubelet[2516]: I0714 22:16:40.848219 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f63336ea-c1f9-468f-8457-7e2419b62b2a-cni-net-dir\") pod \"calico-node-tgkc4\" (UID: \"f63336ea-c1f9-468f-8457-7e2419b62b2a\") " pod="calico-system/calico-node-tgkc4" Jul 14 22:16:40.848316 kubelet[2516]: I0714 22:16:40.848293 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f63336ea-c1f9-468f-8457-7e2419b62b2a-flexvol-driver-host\") pod \"calico-node-tgkc4\" (UID: \"f63336ea-c1f9-468f-8457-7e2419b62b2a\") " pod="calico-system/calico-node-tgkc4" Jul 14 22:16:40.848345 kubelet[2516]: I0714 22:16:40.848320 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f63336ea-c1f9-468f-8457-7e2419b62b2a-node-certs\") pod \"calico-node-tgkc4\" (UID: \"f63336ea-c1f9-468f-8457-7e2419b62b2a\") " pod="calico-system/calico-node-tgkc4" Jul 14 22:16:40.848345 kubelet[2516]: I0714 22:16:40.848340 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f63336ea-c1f9-468f-8457-7e2419b62b2a-policysync\") pod \"calico-node-tgkc4\" (UID: \"f63336ea-c1f9-468f-8457-7e2419b62b2a\") " pod="calico-system/calico-node-tgkc4" Jul 14 22:16:40.848398 kubelet[2516]: I0714 22:16:40.848363 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f63336ea-c1f9-468f-8457-7e2419b62b2a-tigera-ca-bundle\") pod \"calico-node-tgkc4\" (UID: \"f63336ea-c1f9-468f-8457-7e2419b62b2a\") " pod="calico-system/calico-node-tgkc4" Jul 14 22:16:40.848398 kubelet[2516]: I0714 22:16:40.848379 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f63336ea-c1f9-468f-8457-7e2419b62b2a-var-lib-calico\") pod \"calico-node-tgkc4\" (UID: \"f63336ea-c1f9-468f-8457-7e2419b62b2a\") " pod="calico-system/calico-node-tgkc4" Jul 14 22:16:40.848398 kubelet[2516]: I0714 22:16:40.848395 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxrn8\" (UniqueName: \"kubernetes.io/projected/f63336ea-c1f9-468f-8457-7e2419b62b2a-kube-api-access-dxrn8\") pod \"calico-node-tgkc4\" (UID: \"f63336ea-c1f9-468f-8457-7e2419b62b2a\") " pod="calico-system/calico-node-tgkc4" Jul 14 22:16:40.848468 kubelet[2516]: I0714 22:16:40.848422 2516 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f63336ea-c1f9-468f-8457-7e2419b62b2a-cni-log-dir\") pod \"calico-node-tgkc4\" (UID: \"f63336ea-c1f9-468f-8457-7e2419b62b2a\") " pod="calico-system/calico-node-tgkc4" Jul 14 22:16:40.848468 kubelet[2516]: I0714 22:16:40.848437 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f63336ea-c1f9-468f-8457-7e2419b62b2a-var-run-calico\") pod \"calico-node-tgkc4\" (UID: \"f63336ea-c1f9-468f-8457-7e2419b62b2a\") " pod="calico-system/calico-node-tgkc4" Jul 14 22:16:40.848468 kubelet[2516]: I0714 22:16:40.848453 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f63336ea-c1f9-468f-8457-7e2419b62b2a-xtables-lock\") pod \"calico-node-tgkc4\" (UID: \"f63336ea-c1f9-468f-8457-7e2419b62b2a\") " pod="calico-system/calico-node-tgkc4" Jul 14 22:16:40.951412 kubelet[2516]: E0714 22:16:40.951363 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:40.951412 kubelet[2516]: W0714 22:16:40.951398 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:40.951412 kubelet[2516]: E0714 22:16:40.951419 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:40.954486 kubelet[2516]: E0714 22:16:40.954410 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:40.954486 kubelet[2516]: W0714 22:16:40.954427 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:40.954486 kubelet[2516]: E0714 22:16:40.954459 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:40.956901 kubelet[2516]: E0714 22:16:40.956865 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:40.956901 kubelet[2516]: W0714 22:16:40.956888 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:40.956901 kubelet[2516]: E0714 22:16:40.956908 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.005767 kubelet[2516]: E0714 22:16:41.005321 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wq2kw" podUID="eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5" Jul 14 22:16:41.032090 kubelet[2516]: E0714 22:16:41.032055 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.032090 kubelet[2516]: W0714 22:16:41.032075 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.032090 kubelet[2516]: E0714 22:16:41.032096 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.032381 kubelet[2516]: E0714 22:16:41.032370 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.032381 kubelet[2516]: W0714 22:16:41.032380 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.032430 kubelet[2516]: E0714 22:16:41.032391 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.032597 kubelet[2516]: E0714 22:16:41.032580 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.032597 kubelet[2516]: W0714 22:16:41.032589 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.032597 kubelet[2516]: E0714 22:16:41.032597 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.032841 kubelet[2516]: E0714 22:16:41.032815 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.032841 kubelet[2516]: W0714 22:16:41.032840 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.032891 kubelet[2516]: E0714 22:16:41.032848 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.033062 kubelet[2516]: E0714 22:16:41.033052 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.033062 kubelet[2516]: W0714 22:16:41.033060 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.033108 kubelet[2516]: E0714 22:16:41.033067 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.033281 kubelet[2516]: E0714 22:16:41.033271 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.033281 kubelet[2516]: W0714 22:16:41.033279 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.033324 kubelet[2516]: E0714 22:16:41.033286 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.033485 kubelet[2516]: E0714 22:16:41.033475 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.033485 kubelet[2516]: W0714 22:16:41.033483 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.033531 kubelet[2516]: E0714 22:16:41.033489 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.033685 kubelet[2516]: E0714 22:16:41.033675 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.033685 kubelet[2516]: W0714 22:16:41.033683 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.033733 kubelet[2516]: E0714 22:16:41.033690 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.033927 kubelet[2516]: E0714 22:16:41.033915 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.033927 kubelet[2516]: W0714 22:16:41.033924 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.033975 kubelet[2516]: E0714 22:16:41.033931 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.034132 kubelet[2516]: E0714 22:16:41.034122 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.034132 kubelet[2516]: W0714 22:16:41.034130 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.034173 kubelet[2516]: E0714 22:16:41.034137 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.034347 kubelet[2516]: E0714 22:16:41.034336 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.034347 kubelet[2516]: W0714 22:16:41.034344 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.034393 kubelet[2516]: E0714 22:16:41.034351 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.034546 kubelet[2516]: E0714 22:16:41.034536 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.034546 kubelet[2516]: W0714 22:16:41.034544 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.034592 kubelet[2516]: E0714 22:16:41.034551 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.034751 kubelet[2516]: E0714 22:16:41.034741 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.034751 kubelet[2516]: W0714 22:16:41.034749 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.034796 kubelet[2516]: E0714 22:16:41.034757 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.034964 kubelet[2516]: E0714 22:16:41.034954 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.034964 kubelet[2516]: W0714 22:16:41.034962 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.035009 kubelet[2516]: E0714 22:16:41.034969 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.035164 kubelet[2516]: E0714 22:16:41.035154 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.035164 kubelet[2516]: W0714 22:16:41.035162 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.035210 kubelet[2516]: E0714 22:16:41.035169 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.035374 kubelet[2516]: E0714 22:16:41.035364 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.035374 kubelet[2516]: W0714 22:16:41.035372 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.035420 kubelet[2516]: E0714 22:16:41.035380 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.035582 kubelet[2516]: E0714 22:16:41.035573 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.035582 kubelet[2516]: W0714 22:16:41.035581 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.035624 kubelet[2516]: E0714 22:16:41.035588 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.035778 kubelet[2516]: E0714 22:16:41.035768 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.035778 kubelet[2516]: W0714 22:16:41.035776 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.035874 kubelet[2516]: E0714 22:16:41.035783 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.036026 kubelet[2516]: E0714 22:16:41.036014 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.036026 kubelet[2516]: W0714 22:16:41.036023 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.036081 kubelet[2516]: E0714 22:16:41.036030 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.036231 kubelet[2516]: E0714 22:16:41.036221 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.036231 kubelet[2516]: W0714 22:16:41.036228 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.036290 kubelet[2516]: E0714 22:16:41.036236 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.039485 kubelet[2516]: E0714 22:16:41.039445 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:41.040346 containerd[1453]: time="2025-07-14T22:16:41.040272673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-677df65775-wv9qr,Uid:8c29779b-55ef-47d5-a0f8-433580b9ca0d,Namespace:calico-system,Attempt:0,}" Jul 14 22:16:41.049785 kubelet[2516]: E0714 22:16:41.049766 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.049785 kubelet[2516]: W0714 22:16:41.049782 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.049899 kubelet[2516]: E0714 22:16:41.049802 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.049899 kubelet[2516]: I0714 22:16:41.049851 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5-varrun\") pod \"csi-node-driver-wq2kw\" (UID: \"eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5\") " pod="calico-system/csi-node-driver-wq2kw" Jul 14 22:16:41.050218 kubelet[2516]: E0714 22:16:41.050183 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.050218 kubelet[2516]: W0714 22:16:41.050204 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.050218 kubelet[2516]: E0714 22:16:41.050222 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.050430 kubelet[2516]: I0714 22:16:41.050253 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c4vr\" (UniqueName: \"kubernetes.io/projected/eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5-kube-api-access-4c4vr\") pod \"csi-node-driver-wq2kw\" (UID: \"eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5\") " pod="calico-system/csi-node-driver-wq2kw" Jul 14 22:16:41.050621 kubelet[2516]: E0714 22:16:41.050596 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.050621 kubelet[2516]: W0714 22:16:41.050615 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.050694 kubelet[2516]: E0714 22:16:41.050629 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.050976 kubelet[2516]: E0714 22:16:41.050946 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.051019 kubelet[2516]: W0714 22:16:41.050973 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.051019 kubelet[2516]: E0714 22:16:41.051000 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.051362 kubelet[2516]: E0714 22:16:41.051307 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.051362 kubelet[2516]: W0714 22:16:41.051321 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.051362 kubelet[2516]: E0714 22:16:41.051330 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.051362 kubelet[2516]: I0714 22:16:41.051361 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5-kubelet-dir\") pod \"csi-node-driver-wq2kw\" (UID: \"eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5\") " pod="calico-system/csi-node-driver-wq2kw" Jul 14 22:16:41.051871 kubelet[2516]: E0714 22:16:41.051721 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.051871 kubelet[2516]: W0714 22:16:41.051738 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.051871 kubelet[2516]: E0714 22:16:41.051748 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.051871 kubelet[2516]: I0714 22:16:41.051869 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5-registration-dir\") pod \"csi-node-driver-wq2kw\" (UID: \"eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5\") " pod="calico-system/csi-node-driver-wq2kw" Jul 14 22:16:41.052670 kubelet[2516]: E0714 22:16:41.052046 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.052670 kubelet[2516]: W0714 22:16:41.052054 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.052670 kubelet[2516]: E0714 22:16:41.052063 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.052670 kubelet[2516]: E0714 22:16:41.052246 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.052670 kubelet[2516]: W0714 22:16:41.052253 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.052670 kubelet[2516]: E0714 22:16:41.052272 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.052670 kubelet[2516]: E0714 22:16:41.052586 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.052670 kubelet[2516]: W0714 22:16:41.052594 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.052670 kubelet[2516]: E0714 22:16:41.052603 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.052998 kubelet[2516]: I0714 22:16:41.052626 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5-socket-dir\") pod \"csi-node-driver-wq2kw\" (UID: \"eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5\") " pod="calico-system/csi-node-driver-wq2kw" Jul 14 22:16:41.052998 kubelet[2516]: E0714 22:16:41.052918 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.052998 kubelet[2516]: W0714 22:16:41.052928 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.052998 kubelet[2516]: E0714 22:16:41.052961 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.053276 kubelet[2516]: E0714 22:16:41.053236 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.053276 kubelet[2516]: W0714 22:16:41.053253 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.053276 kubelet[2516]: E0714 22:16:41.053278 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.053601 kubelet[2516]: E0714 22:16:41.053584 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.053601 kubelet[2516]: W0714 22:16:41.053598 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.053650 kubelet[2516]: E0714 22:16:41.053611 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.053896 kubelet[2516]: E0714 22:16:41.053877 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.053896 kubelet[2516]: W0714 22:16:41.053891 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.054040 kubelet[2516]: E0714 22:16:41.053903 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.054172 kubelet[2516]: E0714 22:16:41.054148 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.054172 kubelet[2516]: W0714 22:16:41.054165 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.054352 kubelet[2516]: E0714 22:16:41.054175 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.054484 kubelet[2516]: E0714 22:16:41.054470 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.054552 kubelet[2516]: W0714 22:16:41.054540 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.054624 kubelet[2516]: E0714 22:16:41.054613 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.070430 containerd[1453]: time="2025-07-14T22:16:41.070173676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:16:41.070430 containerd[1453]: time="2025-07-14T22:16:41.070233455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:16:41.070430 containerd[1453]: time="2025-07-14T22:16:41.070246360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:16:41.070430 containerd[1453]: time="2025-07-14T22:16:41.070354232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:16:41.082735 containerd[1453]: time="2025-07-14T22:16:41.082656397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tgkc4,Uid:f63336ea-c1f9-468f-8457-7e2419b62b2a,Namespace:calico-system,Attempt:0,}" Jul 14 22:16:41.095959 systemd[1]: Started cri-containerd-5e2c0d34c50b9e537254f3413d19ff9aca2a862d301cfe1ef205868dae86d114.scope - libcontainer container 5e2c0d34c50b9e537254f3413d19ff9aca2a862d301cfe1ef205868dae86d114. Jul 14 22:16:41.110405 containerd[1453]: time="2025-07-14T22:16:41.110110551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:16:41.110405 containerd[1453]: time="2025-07-14T22:16:41.110187883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:16:41.110405 containerd[1453]: time="2025-07-14T22:16:41.110271868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:16:41.111722 containerd[1453]: time="2025-07-14T22:16:41.111639532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:16:41.132134 systemd[1]: Started cri-containerd-80ba2c69852e4d735f507877d56a8438148fd9c91070e9c6db9a48a82f4f166b.scope - libcontainer container 80ba2c69852e4d735f507877d56a8438148fd9c91070e9c6db9a48a82f4f166b. Jul 14 22:16:41.148510 containerd[1453]: time="2025-07-14T22:16:41.148438926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-677df65775-wv9qr,Uid:8c29779b-55ef-47d5-a0f8-433580b9ca0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e2c0d34c50b9e537254f3413d19ff9aca2a862d301cfe1ef205868dae86d114\"" Jul 14 22:16:41.157375 kubelet[2516]: E0714 22:16:41.157121 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.157375 kubelet[2516]: W0714 22:16:41.157316 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.157375 kubelet[2516]: E0714 22:16:41.157336 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.159387 kubelet[2516]: E0714 22:16:41.158902 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:41.163471 kubelet[2516]: E0714 22:16:41.163438 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.163637 kubelet[2516]: W0714 22:16:41.163615 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.163800 kubelet[2516]: E0714 22:16:41.163689 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.167238 kubelet[2516]: E0714 22:16:41.167208 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.167379 kubelet[2516]: W0714 22:16:41.167350 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.167455 kubelet[2516]: E0714 22:16:41.167439 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.167743 containerd[1453]: time="2025-07-14T22:16:41.167702514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 14 22:16:41.168954 kubelet[2516]: E0714 22:16:41.168938 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.169039 kubelet[2516]: W0714 22:16:41.169025 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.169164 kubelet[2516]: E0714 22:16:41.169136 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.170788 kubelet[2516]: E0714 22:16:41.170294 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.170788 kubelet[2516]: W0714 22:16:41.170321 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.170788 kubelet[2516]: E0714 22:16:41.170537 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.173599 kubelet[2516]: E0714 22:16:41.173567 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.173599 kubelet[2516]: W0714 22:16:41.173588 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.173854 kubelet[2516]: E0714 22:16:41.173778 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.175152 kubelet[2516]: E0714 22:16:41.175129 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.175152 kubelet[2516]: W0714 22:16:41.175150 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.175227 kubelet[2516]: E0714 22:16:41.175165 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.176177 kubelet[2516]: E0714 22:16:41.176049 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.176489 kubelet[2516]: W0714 22:16:41.176346 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.176489 kubelet[2516]: E0714 22:16:41.176368 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.178133 kubelet[2516]: E0714 22:16:41.178047 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.178263 kubelet[2516]: W0714 22:16:41.178207 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.178508 kubelet[2516]: E0714 22:16:41.178475 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.179170 kubelet[2516]: E0714 22:16:41.179043 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.180097 kubelet[2516]: W0714 22:16:41.180080 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.180259 kubelet[2516]: E0714 22:16:41.180206 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.181630 kubelet[2516]: E0714 22:16:41.181174 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.181630 kubelet[2516]: W0714 22:16:41.181185 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.181630 kubelet[2516]: E0714 22:16:41.181194 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.181914 kubelet[2516]: E0714 22:16:41.181761 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.181914 kubelet[2516]: W0714 22:16:41.181771 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.181914 kubelet[2516]: E0714 22:16:41.181780 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.183838 kubelet[2516]: E0714 22:16:41.182212 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.183838 kubelet[2516]: W0714 22:16:41.182223 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.183838 kubelet[2516]: E0714 22:16:41.182232 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.185080 kubelet[2516]: E0714 22:16:41.184946 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.185080 kubelet[2516]: W0714 22:16:41.184958 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.185080 kubelet[2516]: E0714 22:16:41.184970 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.186114 kubelet[2516]: E0714 22:16:41.185996 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.186114 kubelet[2516]: W0714 22:16:41.186007 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.186114 kubelet[2516]: E0714 22:16:41.186017 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.186315 kubelet[2516]: E0714 22:16:41.186284 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.186362 kubelet[2516]: W0714 22:16:41.186313 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.186362 kubelet[2516]: E0714 22:16:41.186336 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.189024 kubelet[2516]: E0714 22:16:41.188995 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.189024 kubelet[2516]: W0714 22:16:41.189016 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.189105 kubelet[2516]: E0714 22:16:41.189028 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.189893 kubelet[2516]: E0714 22:16:41.189283 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.189893 kubelet[2516]: W0714 22:16:41.189298 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.189893 kubelet[2516]: E0714 22:16:41.189309 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.189893 kubelet[2516]: E0714 22:16:41.189606 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.189893 kubelet[2516]: W0714 22:16:41.189616 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.189893 kubelet[2516]: E0714 22:16:41.189627 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.189893 kubelet[2516]: E0714 22:16:41.189862 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.189893 kubelet[2516]: W0714 22:16:41.189871 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.189893 kubelet[2516]: E0714 22:16:41.189882 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.190402 kubelet[2516]: E0714 22:16:41.190103 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.190402 kubelet[2516]: W0714 22:16:41.190115 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.190402 kubelet[2516]: E0714 22:16:41.190123 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.191418 kubelet[2516]: E0714 22:16:41.191281 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.191418 kubelet[2516]: W0714 22:16:41.191296 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.191418 kubelet[2516]: E0714 22:16:41.191307 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.192954 kubelet[2516]: E0714 22:16:41.192889 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.192954 kubelet[2516]: W0714 22:16:41.192909 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.192954 kubelet[2516]: E0714 22:16:41.192922 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.193991 kubelet[2516]: E0714 22:16:41.193968 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.193991 kubelet[2516]: W0714 22:16:41.193986 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.194061 kubelet[2516]: E0714 22:16:41.193997 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:41.196360 kubelet[2516]: E0714 22:16:41.196330 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.196360 kubelet[2516]: W0714 22:16:41.196351 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.196360 kubelet[2516]: E0714 22:16:41.196364 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:41.211097 containerd[1453]: time="2025-07-14T22:16:41.211060878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tgkc4,Uid:f63336ea-c1f9-468f-8457-7e2419b62b2a,Namespace:calico-system,Attempt:0,} returns sandbox id \"80ba2c69852e4d735f507877d56a8438148fd9c91070e9c6db9a48a82f4f166b\"" Jul 14 22:16:41.216115 kubelet[2516]: E0714 22:16:41.216064 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:41.216115 kubelet[2516]: W0714 22:16:41.216094 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:41.216115 kubelet[2516]: E0714 22:16:41.216117 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:42.719415 kubelet[2516]: E0714 22:16:42.719360 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wq2kw" podUID="eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5" Jul 14 22:16:43.137156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount996827419.mount: Deactivated successfully. Jul 14 22:16:43.979834 containerd[1453]: time="2025-07-14T22:16:43.979762513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:16:43.980669 containerd[1453]: time="2025-07-14T22:16:43.980624849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 14 22:16:43.982066 containerd[1453]: time="2025-07-14T22:16:43.982040100Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:16:43.984520 containerd[1453]: time="2025-07-14T22:16:43.984450477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:16:43.985216 containerd[1453]: time="2025-07-14T22:16:43.985179960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.817434972s" Jul 14 22:16:43.985259 containerd[1453]: time="2025-07-14T22:16:43.985216602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 14 22:16:43.986507 containerd[1453]: time="2025-07-14T22:16:43.986453182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 14 22:16:43.997959 containerd[1453]: time="2025-07-14T22:16:43.997911771Z" level=info msg="CreateContainer within sandbox \"5e2c0d34c50b9e537254f3413d19ff9aca2a862d301cfe1ef205868dae86d114\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 14 22:16:44.017884 containerd[1453]: time="2025-07-14T22:16:44.017830186Z" level=info msg="CreateContainer within sandbox \"5e2c0d34c50b9e537254f3413d19ff9aca2a862d301cfe1ef205868dae86d114\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fb77bd298eb16d5d16e003c0c25679836f9169c4cb59511c279c91da9241c625\"" Jul 14 22:16:44.018422 containerd[1453]: time="2025-07-14T22:16:44.018397200Z" level=info msg="StartContainer for \"fb77bd298eb16d5d16e003c0c25679836f9169c4cb59511c279c91da9241c625\"" Jul 14 22:16:44.052967 systemd[1]: Started cri-containerd-fb77bd298eb16d5d16e003c0c25679836f9169c4cb59511c279c91da9241c625.scope - libcontainer container fb77bd298eb16d5d16e003c0c25679836f9169c4cb59511c279c91da9241c625. Jul 14 22:16:44.098558 containerd[1453]: time="2025-07-14T22:16:44.098478103Z" level=info msg="StartContainer for \"fb77bd298eb16d5d16e003c0c25679836f9169c4cb59511c279c91da9241c625\" returns successfully" Jul 14 22:16:44.718672 kubelet[2516]: E0714 22:16:44.718614 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wq2kw" podUID="eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5" Jul 14 22:16:44.783724 kubelet[2516]: E0714 22:16:44.783694 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:44.858236 kubelet[2516]: E0714 22:16:44.858206 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.858236 kubelet[2516]: W0714 22:16:44.858228 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.858374 kubelet[2516]: E0714 22:16:44.858248 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.858468 kubelet[2516]: E0714 22:16:44.858453 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.858468 kubelet[2516]: W0714 22:16:44.858464 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.858539 kubelet[2516]: E0714 22:16:44.858473 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:44.858721 kubelet[2516]: E0714 22:16:44.858701 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.858754 kubelet[2516]: W0714 22:16:44.858719 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.858754 kubelet[2516]: E0714 22:16:44.858739 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.859020 kubelet[2516]: E0714 22:16:44.858987 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.859020 kubelet[2516]: W0714 22:16:44.859001 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.859020 kubelet[2516]: E0714 22:16:44.859009 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.859313 kubelet[2516]: E0714 22:16:44.859299 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.859313 kubelet[2516]: W0714 22:16:44.859309 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.859373 kubelet[2516]: E0714 22:16:44.859318 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.859491 kubelet[2516]: E0714 22:16:44.859477 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.859491 kubelet[2516]: W0714 22:16:44.859487 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.859572 kubelet[2516]: E0714 22:16:44.859495 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.859670 kubelet[2516]: E0714 22:16:44.859657 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.859670 kubelet[2516]: W0714 22:16:44.859666 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.859739 kubelet[2516]: E0714 22:16:44.859674 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:44.859861 kubelet[2516]: E0714 22:16:44.859848 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.859861 kubelet[2516]: W0714 22:16:44.859858 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.859921 kubelet[2516]: E0714 22:16:44.859866 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.860120 kubelet[2516]: E0714 22:16:44.860106 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.860120 kubelet[2516]: W0714 22:16:44.860117 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.860186 kubelet[2516]: E0714 22:16:44.860125 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.860305 kubelet[2516]: E0714 22:16:44.860292 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.860305 kubelet[2516]: W0714 22:16:44.860302 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.860353 kubelet[2516]: E0714 22:16:44.860309 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.860520 kubelet[2516]: E0714 22:16:44.860490 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.860520 kubelet[2516]: W0714 22:16:44.860500 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.860520 kubelet[2516]: E0714 22:16:44.860507 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.860700 kubelet[2516]: E0714 22:16:44.860687 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.860700 kubelet[2516]: W0714 22:16:44.860696 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.860766 kubelet[2516]: E0714 22:16:44.860704 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:44.861020 kubelet[2516]: E0714 22:16:44.860990 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.861020 kubelet[2516]: W0714 22:16:44.861008 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.861169 kubelet[2516]: E0714 22:16:44.861031 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.861298 kubelet[2516]: E0714 22:16:44.861275 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.861298 kubelet[2516]: W0714 22:16:44.861284 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.861298 kubelet[2516]: E0714 22:16:44.861292 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.861914 kubelet[2516]: E0714 22:16:44.861520 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.861914 kubelet[2516]: W0714 22:16:44.861533 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.861914 kubelet[2516]: E0714 22:16:44.861564 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.902898 kubelet[2516]: E0714 22:16:44.902868 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.902898 kubelet[2516]: W0714 22:16:44.902882 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.902898 kubelet[2516]: E0714 22:16:44.902894 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.903149 kubelet[2516]: E0714 22:16:44.903125 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.903149 kubelet[2516]: W0714 22:16:44.903136 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.903149 kubelet[2516]: E0714 22:16:44.903144 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:44.903366 kubelet[2516]: E0714 22:16:44.903348 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.903366 kubelet[2516]: W0714 22:16:44.903358 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.903366 kubelet[2516]: E0714 22:16:44.903366 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.903719 kubelet[2516]: E0714 22:16:44.903694 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.903719 kubelet[2516]: W0714 22:16:44.903712 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.903776 kubelet[2516]: E0714 22:16:44.903727 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.903947 kubelet[2516]: E0714 22:16:44.903922 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.903947 kubelet[2516]: W0714 22:16:44.903933 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.903947 kubelet[2516]: E0714 22:16:44.903941 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.904121 kubelet[2516]: E0714 22:16:44.904104 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.904121 kubelet[2516]: W0714 22:16:44.904113 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.904121 kubelet[2516]: E0714 22:16:44.904121 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.904365 kubelet[2516]: E0714 22:16:44.904350 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.904365 kubelet[2516]: W0714 22:16:44.904359 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.904415 kubelet[2516]: E0714 22:16:44.904368 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:44.904557 kubelet[2516]: E0714 22:16:44.904543 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.904557 kubelet[2516]: W0714 22:16:44.904553 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.904603 kubelet[2516]: E0714 22:16:44.904561 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.904765 kubelet[2516]: E0714 22:16:44.904752 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.904765 kubelet[2516]: W0714 22:16:44.904761 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.904813 kubelet[2516]: E0714 22:16:44.904769 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.904978 kubelet[2516]: E0714 22:16:44.904964 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.904978 kubelet[2516]: W0714 22:16:44.904974 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.905026 kubelet[2516]: E0714 22:16:44.904982 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.905187 kubelet[2516]: E0714 22:16:44.905174 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.905187 kubelet[2516]: W0714 22:16:44.905183 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.905241 kubelet[2516]: E0714 22:16:44.905191 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.905387 kubelet[2516]: E0714 22:16:44.905374 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.905387 kubelet[2516]: W0714 22:16:44.905385 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.905434 kubelet[2516]: E0714 22:16:44.905392 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:44.905617 kubelet[2516]: E0714 22:16:44.905602 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.905617 kubelet[2516]: W0714 22:16:44.905611 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.905674 kubelet[2516]: E0714 22:16:44.905619 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.905915 kubelet[2516]: E0714 22:16:44.905899 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.905915 kubelet[2516]: W0714 22:16:44.905911 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.905964 kubelet[2516]: E0714 22:16:44.905924 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.906113 kubelet[2516]: E0714 22:16:44.906099 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.906113 kubelet[2516]: W0714 22:16:44.906109 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.906161 kubelet[2516]: E0714 22:16:44.906117 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.906319 kubelet[2516]: E0714 22:16:44.906306 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.906319 kubelet[2516]: W0714 22:16:44.906316 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.906364 kubelet[2516]: E0714 22:16:44.906325 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:44.906639 kubelet[2516]: E0714 22:16:44.906614 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.906639 kubelet[2516]: W0714 22:16:44.906628 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.906639 kubelet[2516]: E0714 22:16:44.906638 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:16:44.906878 kubelet[2516]: E0714 22:16:44.906864 2516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:16:44.906878 kubelet[2516]: W0714 22:16:44.906874 2516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:16:44.906927 kubelet[2516]: E0714 22:16:44.906883 2516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:16:45.631591 containerd[1453]: time="2025-07-14T22:16:45.631521350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:16:45.632369 containerd[1453]: time="2025-07-14T22:16:45.632316930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 14 22:16:45.633400 containerd[1453]: time="2025-07-14T22:16:45.633353543Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:16:45.635596 containerd[1453]: time="2025-07-14T22:16:45.635547123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:16:45.636253 containerd[1453]: time="2025-07-14T22:16:45.636224782Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.649720419s" Jul 14 22:16:45.636282 containerd[1453]: time="2025-07-14T22:16:45.636253960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 14 22:16:45.642516 containerd[1453]: time="2025-07-14T22:16:45.642462413Z" level=info msg="CreateContainer within sandbox \"80ba2c69852e4d735f507877d56a8438148fd9c91070e9c6db9a48a82f4f166b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 14 22:16:45.657420 containerd[1453]: time="2025-07-14T22:16:45.657375273Z" level=info msg="CreateContainer within sandbox \"80ba2c69852e4d735f507877d56a8438148fd9c91070e9c6db9a48a82f4f166b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b4deb5836e23706abee3ad6acb5c1b7f3cabe97856cdaf75f47f07b77e3f9253\"" Jul 14 22:16:45.658002 containerd[1453]: time="2025-07-14T22:16:45.657957255Z" level=info msg="StartContainer for \"b4deb5836e23706abee3ad6acb5c1b7f3cabe97856cdaf75f47f07b77e3f9253\"" Jul 14 22:16:45.690081 systemd[1]: Started cri-containerd-b4deb5836e23706abee3ad6acb5c1b7f3cabe97856cdaf75f47f07b77e3f9253.scope - libcontainer container b4deb5836e23706abee3ad6acb5c1b7f3cabe97856cdaf75f47f07b77e3f9253. 
Jul 14 22:16:45.719573 containerd[1453]: time="2025-07-14T22:16:45.719498431Z" level=info msg="StartContainer for \"b4deb5836e23706abee3ad6acb5c1b7f3cabe97856cdaf75f47f07b77e3f9253\" returns successfully" Jul 14 22:16:45.731171 systemd[1]: cri-containerd-b4deb5836e23706abee3ad6acb5c1b7f3cabe97856cdaf75f47f07b77e3f9253.scope: Deactivated successfully. Jul 14 22:16:45.786600 kubelet[2516]: I0714 22:16:45.786554 2516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:16:45.829018 kubelet[2516]: E0714 22:16:45.786890 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:46.252137 kubelet[2516]: I0714 22:16:46.251948 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-677df65775-wv9qr" podStartSLOduration=3.431300193 podStartE2EDuration="6.251930526s" podCreationTimestamp="2025-07-14 22:16:40 +0000 UTC" firstStartedPulling="2025-07-14 22:16:41.165490877 +0000 UTC m=+22.538043773" lastFinishedPulling="2025-07-14 22:16:43.986121209 +0000 UTC m=+25.358674106" observedRunningTime="2025-07-14 22:16:44.791571668 +0000 UTC m=+26.164124564" watchObservedRunningTime="2025-07-14 22:16:46.251930526 +0000 UTC m=+27.624483422" Jul 14 22:16:46.254329 containerd[1453]: time="2025-07-14T22:16:46.254269487Z" level=info msg="shim disconnected" id=b4deb5836e23706abee3ad6acb5c1b7f3cabe97856cdaf75f47f07b77e3f9253 namespace=k8s.io Jul 14 22:16:46.255161 containerd[1453]: time="2025-07-14T22:16:46.254948366Z" level=warning msg="cleaning up after shim disconnected" id=b4deb5836e23706abee3ad6acb5c1b7f3cabe97856cdaf75f47f07b77e3f9253 namespace=k8s.io Jul 14 22:16:46.255161 containerd[1453]: time="2025-07-14T22:16:46.254969297Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:16:46.654589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4deb5836e23706abee3ad6acb5c1b7f3cabe97856cdaf75f47f07b77e3f9253-rootfs.mount: Deactivated successfully. 
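
The recurring dns.go:153 "Nameserver limits exceeded" warnings are the kubelet enforcing the resolver limit of three nameserver entries when it builds a pod's resolv.conf: only the first three are applied (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and any further entries are dropped. The host file itself is not shown in the log; a hypothetical /etc/resolv.conf that would produce exactly this message:

    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4   # assumed fourth entry; anything past three triggers the warning
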
Jul 14 22:16:46.719100 kubelet[2516]: E0714 22:16:46.719066 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wq2kw" podUID="eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5" Jul 14 22:16:46.789401 containerd[1453]: time="2025-07-14T22:16:46.789369048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 14 22:16:48.718813 kubelet[2516]: E0714 22:16:48.718687 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wq2kw" podUID="eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5" Jul 14 22:16:50.365935 containerd[1453]: time="2025-07-14T22:16:50.365886107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:16:50.366967 containerd[1453]: time="2025-07-14T22:16:50.366930695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 14 22:16:50.368006 containerd[1453]: time="2025-07-14T22:16:50.367974771Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:16:50.370307 containerd[1453]: time="2025-07-14T22:16:50.370281471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:16:50.370874 containerd[1453]: time="2025-07-14T22:16:50.370842215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.58142365s" Jul 14 22:16:50.370927 containerd[1453]: time="2025-07-14T22:16:50.370874257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 14 22:16:50.375150 containerd[1453]: time="2025-07-14T22:16:50.375112495Z" level=info msg="CreateContainer within sandbox \"80ba2c69852e4d735f507877d56a8438148fd9c91070e9c6db9a48a82f4f166b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 14 22:16:50.391942 containerd[1453]: time="2025-07-14T22:16:50.391890886Z" level=info msg="CreateContainer within sandbox \"80ba2c69852e4d735f507877d56a8438148fd9c91070e9c6db9a48a82f4f166b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"34474646c5c16bd254343f26fbfd82c5c1bf52ad6737a5015349233ba416c634\"" Jul 14 22:16:50.392413 containerd[1453]: time="2025-07-14T22:16:50.392359160Z" level=info msg="StartContainer for \"34474646c5c16bd254343f26fbfd82c5c1bf52ad6737a5015349233ba416c634\"" Jul 14 22:16:50.425956 systemd[1]: Started cri-containerd-34474646c5c16bd254343f26fbfd82c5c1bf52ad6737a5015349233ba416c634.scope - libcontainer container 34474646c5c16bd254343f26fbfd82c5c1bf52ad6737a5015349233ba416c634. 
Jul 14 22:16:50.530412 containerd[1453]: time="2025-07-14T22:16:50.530370158Z" level=info msg="StartContainer for \"34474646c5c16bd254343f26fbfd82c5c1bf52ad6737a5015349233ba416c634\" returns successfully" Jul 14 22:16:50.718633 kubelet[2516]: E0714 22:16:50.718587 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wq2kw" podUID="eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5" Jul 14 22:16:51.532671 systemd[1]: cri-containerd-34474646c5c16bd254343f26fbfd82c5c1bf52ad6737a5015349233ba416c634.scope: Deactivated successfully. Jul 14 22:16:51.551304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34474646c5c16bd254343f26fbfd82c5c1bf52ad6737a5015349233ba416c634-rootfs.mount: Deactivated successfully. Jul 14 22:16:51.604963 kubelet[2516]: I0714 22:16:51.604929 2516 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 14 22:16:51.667400 containerd[1453]: time="2025-07-14T22:16:51.667111974Z" level=info msg="shim disconnected" id=34474646c5c16bd254343f26fbfd82c5c1bf52ad6737a5015349233ba416c634 namespace=k8s.io Jul 14 22:16:51.667400 containerd[1453]: time="2025-07-14T22:16:51.667185808Z" level=warning msg="cleaning up after shim disconnected" id=34474646c5c16bd254343f26fbfd82c5c1bf52ad6737a5015349233ba416c634 namespace=k8s.io Jul 14 22:16:51.667400 containerd[1453]: time="2025-07-14T22:16:51.667195126Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:16:51.699596 systemd[1]: Created slice kubepods-besteffort-pod85e42d64_4a00_42a5_acc0_966557a6a6b9.slice - libcontainer container kubepods-besteffort-pod85e42d64_4a00_42a5_acc0_966557a6a6b9.slice. Jul 14 22:16:51.713513 systemd[1]: Created slice kubepods-burstable-pod2f2c7ddb_17ed_4c7d_97db_2bcad0e280dc.slice - libcontainer container kubepods-burstable-pod2f2c7ddb_17ed_4c7d_97db_2bcad0e280dc.slice. Jul 14 22:16:51.723348 systemd[1]: Created slice kubepods-burstable-pod240cf122_725b_4d48_a6f5_1c05c9f2102a.slice - libcontainer container kubepods-burstable-pod240cf122_725b_4d48_a6f5_1c05c9f2102a.slice. 
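
The install-cni container above runs to completion (hence the scope deactivation and shim cleanup that follow), and the node flips to Ready once a CNI network config appears under /etc/cni/net.d. Roughly, Calico's installer drops a conflist of this shape (an illustrative sketch, not the file from this host):

    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "datastore_type": "kubernetes",
          "ipam": { "type": "calico-ipam" },
          "policy": { "type": "k8s" },
          "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
        },
        { "type": "portmap", "snat": true, "capabilities": { "portMappings": true } }
      ]
    }

Note that the calico CNI binary still needs /var/lib/calico/nodename, which the calico-node container writes at startup; that file's absence is exactly what the sandbox setup errors further below complain about, so pod networking keeps failing briefly even after the node reports Ready.
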
Jul 14 22:16:51.729031 kubelet[2516]: I0714 22:16:51.728999 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/240cf122-725b-4d48-a6f5-1c05c9f2102a-config-volume\") pod \"coredns-674b8bbfcf-cj6qp\" (UID: \"240cf122-725b-4d48-a6f5-1c05c9f2102a\") " pod="kube-system/coredns-674b8bbfcf-cj6qp" Jul 14 22:16:51.730120 kubelet[2516]: I0714 22:16:51.729615 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzp8x\" (UniqueName: \"kubernetes.io/projected/a1809be6-31c3-4ebb-abc9-4c861a7f613e-kube-api-access-dzp8x\") pod \"whisker-f5cc46c99-xrsn7\" (UID: \"a1809be6-31c3-4ebb-abc9-4c861a7f613e\") " pod="calico-system/whisker-f5cc46c99-xrsn7" Jul 14 22:16:51.730120 kubelet[2516]: I0714 22:16:51.729652 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghqb7\" (UniqueName: \"kubernetes.io/projected/57e67ca6-c616-4a8d-8e63-8c17097d1b86-kube-api-access-ghqb7\") pod \"calico-apiserver-55b999998d-8sbft\" (UID: \"57e67ca6-c616-4a8d-8e63-8c17097d1b86\") " pod="calico-apiserver/calico-apiserver-55b999998d-8sbft" Jul 14 22:16:51.730120 kubelet[2516]: I0714 22:16:51.729670 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0b90dbe9-4134-4806-83a7-13f4785a8131-goldmane-key-pair\") pod \"goldmane-768f4c5c69-7c28s\" (UID: \"0b90dbe9-4134-4806-83a7-13f4785a8131\") " pod="calico-system/goldmane-768f4c5c69-7c28s" Jul 14 22:16:51.730120 kubelet[2516]: I0714 22:16:51.729691 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85e42d64-4a00-42a5-acc0-966557a6a6b9-tigera-ca-bundle\") pod \"calico-kube-controllers-74c5db56dc-m6q26\" (UID: \"85e42d64-4a00-42a5-acc0-966557a6a6b9\") " pod="calico-system/calico-kube-controllers-74c5db56dc-m6q26" Jul 14 22:16:51.730120 kubelet[2516]: I0714 22:16:51.729719 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mj75\" (UniqueName: \"kubernetes.io/projected/240cf122-725b-4d48-a6f5-1c05c9f2102a-kube-api-access-8mj75\") pod \"coredns-674b8bbfcf-cj6qp\" (UID: \"240cf122-725b-4d48-a6f5-1c05c9f2102a\") " pod="kube-system/coredns-674b8bbfcf-cj6qp" Jul 14 22:16:51.730321 kubelet[2516]: I0714 22:16:51.729745 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc-config-volume\") pod \"coredns-674b8bbfcf-nd7rw\" (UID: \"2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc\") " pod="kube-system/coredns-674b8bbfcf-nd7rw" Jul 14 22:16:51.730321 kubelet[2516]: I0714 22:16:51.729769 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1809be6-31c3-4ebb-abc9-4c861a7f613e-whisker-ca-bundle\") pod \"whisker-f5cc46c99-xrsn7\" (UID: \"a1809be6-31c3-4ebb-abc9-4c861a7f613e\") " pod="calico-system/whisker-f5cc46c99-xrsn7" Jul 14 22:16:51.730321 kubelet[2516]: I0714 22:16:51.729803 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0b90dbe9-4134-4806-83a7-13f4785a8131-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-7c28s\" (UID: \"0b90dbe9-4134-4806-83a7-13f4785a8131\") " pod="calico-system/goldmane-768f4c5c69-7c28s" Jul 14 22:16:51.730321 kubelet[2516]: I0714 22:16:51.729855 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a1809be6-31c3-4ebb-abc9-4c861a7f613e-whisker-backend-key-pair\") pod \"whisker-f5cc46c99-xrsn7\" (UID: \"a1809be6-31c3-4ebb-abc9-4c861a7f613e\") " pod="calico-system/whisker-f5cc46c99-xrsn7" Jul 14 22:16:51.730321 kubelet[2516]: I0714 22:16:51.729884 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8zwn\" (UniqueName: \"kubernetes.io/projected/0b90dbe9-4134-4806-83a7-13f4785a8131-kube-api-access-j8zwn\") pod \"goldmane-768f4c5c69-7c28s\" (UID: \"0b90dbe9-4134-4806-83a7-13f4785a8131\") " pod="calico-system/goldmane-768f4c5c69-7c28s" Jul 14 22:16:51.730458 kubelet[2516]: I0714 22:16:51.729920 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbvq\" (UniqueName: \"kubernetes.io/projected/85e42d64-4a00-42a5-acc0-966557a6a6b9-kube-api-access-qhbvq\") pod \"calico-kube-controllers-74c5db56dc-m6q26\" (UID: \"85e42d64-4a00-42a5-acc0-966557a6a6b9\") " pod="calico-system/calico-kube-controllers-74c5db56dc-m6q26" Jul 14 22:16:51.730458 kubelet[2516]: I0714 22:16:51.729944 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/57e67ca6-c616-4a8d-8e63-8c17097d1b86-calico-apiserver-certs\") pod \"calico-apiserver-55b999998d-8sbft\" (UID: \"57e67ca6-c616-4a8d-8e63-8c17097d1b86\") " pod="calico-apiserver/calico-apiserver-55b999998d-8sbft" Jul 14 22:16:51.730458 kubelet[2516]: I0714 22:16:51.729961 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c8f1d18-f0df-451d-a336-43d00ca10c65-calico-apiserver-certs\") pod \"calico-apiserver-55b999998d-g2dgd\" (UID: \"9c8f1d18-f0df-451d-a336-43d00ca10c65\") " pod="calico-apiserver/calico-apiserver-55b999998d-g2dgd" Jul 14 22:16:51.730458 kubelet[2516]: I0714 22:16:51.729980 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b90dbe9-4134-4806-83a7-13f4785a8131-config\") pod \"goldmane-768f4c5c69-7c28s\" (UID: \"0b90dbe9-4134-4806-83a7-13f4785a8131\") " pod="calico-system/goldmane-768f4c5c69-7c28s" Jul 14 22:16:51.730458 kubelet[2516]: I0714 22:16:51.729995 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kh4b\" (UniqueName: \"kubernetes.io/projected/2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc-kube-api-access-6kh4b\") pod \"coredns-674b8bbfcf-nd7rw\" (UID: \"2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc\") " pod="kube-system/coredns-674b8bbfcf-nd7rw" Jul 14 22:16:51.730372 systemd[1]: Created slice kubepods-besteffort-pod0b90dbe9_4134_4806_83a7_13f4785a8131.slice - libcontainer container kubepods-besteffort-pod0b90dbe9_4134_4806_83a7_13f4785a8131.slice. 
Jul 14 22:16:51.730700 kubelet[2516]: I0714 22:16:51.730009 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdfhs\" (UniqueName: \"kubernetes.io/projected/9c8f1d18-f0df-451d-a336-43d00ca10c65-kube-api-access-fdfhs\") pod \"calico-apiserver-55b999998d-g2dgd\" (UID: \"9c8f1d18-f0df-451d-a336-43d00ca10c65\") " pod="calico-apiserver/calico-apiserver-55b999998d-g2dgd" Jul 14 22:16:51.736048 systemd[1]: Created slice kubepods-besteffort-pod9c8f1d18_f0df_451d_a336_43d00ca10c65.slice - libcontainer container kubepods-besteffort-pod9c8f1d18_f0df_451d_a336_43d00ca10c65.slice. Jul 14 22:16:51.741075 systemd[1]: Created slice kubepods-besteffort-poda1809be6_31c3_4ebb_abc9_4c861a7f613e.slice - libcontainer container kubepods-besteffort-poda1809be6_31c3_4ebb_abc9_4c861a7f613e.slice. Jul 14 22:16:51.746972 systemd[1]: Created slice kubepods-besteffort-pod57e67ca6_c616_4a8d_8e63_8c17097d1b86.slice - libcontainer container kubepods-besteffort-pod57e67ca6_c616_4a8d_8e63_8c17097d1b86.slice. Jul 14 22:16:51.805702 containerd[1453]: time="2025-07-14T22:16:51.805584782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 14 22:16:52.014069 containerd[1453]: time="2025-07-14T22:16:52.014030794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c5db56dc-m6q26,Uid:85e42d64-4a00-42a5-acc0-966557a6a6b9,Namespace:calico-system,Attempt:0,}" Jul 14 22:16:52.020317 kubelet[2516]: E0714 22:16:52.020274 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:52.020578 containerd[1453]: time="2025-07-14T22:16:52.020555118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nd7rw,Uid:2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc,Namespace:kube-system,Attempt:0,}" Jul 14 22:16:52.026492 kubelet[2516]: E0714 22:16:52.026466 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:52.027032 containerd[1453]: time="2025-07-14T22:16:52.026996171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cj6qp,Uid:240cf122-725b-4d48-a6f5-1c05c9f2102a,Namespace:kube-system,Attempt:0,}" Jul 14 22:16:52.033545 containerd[1453]: time="2025-07-14T22:16:52.033498222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7c28s,Uid:0b90dbe9-4134-4806-83a7-13f4785a8131,Namespace:calico-system,Attempt:0,}" Jul 14 22:16:52.039239 containerd[1453]: time="2025-07-14T22:16:52.039202869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b999998d-g2dgd,Uid:9c8f1d18-f0df-451d-a336-43d00ca10c65,Namespace:calico-apiserver,Attempt:0,}" Jul 14 22:16:52.045068 containerd[1453]: time="2025-07-14T22:16:52.045015769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f5cc46c99-xrsn7,Uid:a1809be6-31c3-4ebb-abc9-4c861a7f613e,Namespace:calico-system,Attempt:0,}" Jul 14 22:16:52.049962 containerd[1453]: time="2025-07-14T22:16:52.049937251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b999998d-8sbft,Uid:57e67ca6-c616-4a8d-8e63-8c17097d1b86,Namespace:calico-apiserver,Attempt:0,}" Jul 14 22:16:52.195995 containerd[1453]: time="2025-07-14T22:16:52.195949678Z" level=error msg="Failed to destroy network for sandbox 
\"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.202192 containerd[1453]: time="2025-07-14T22:16:52.202129302Z" level=error msg="encountered an error cleaning up failed sandbox \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.202434 containerd[1453]: time="2025-07-14T22:16:52.202317037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nd7rw,Uid:2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.202840 kubelet[2516]: E0714 22:16:52.202759 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.202897 kubelet[2516]: E0714 22:16:52.202841 2516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nd7rw" Jul 14 22:16:52.202897 kubelet[2516]: E0714 22:16:52.202862 2516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nd7rw" Jul 14 22:16:52.203539 kubelet[2516]: E0714 22:16:52.203505 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nd7rw_kube-system(2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nd7rw_kube-system(2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nd7rw" podUID="2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc" Jul 14 22:16:52.205263 containerd[1453]: time="2025-07-14T22:16:52.205047294Z" 
level=error msg="Failed to destroy network for sandbox \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.206073 containerd[1453]: time="2025-07-14T22:16:52.206044445Z" level=error msg="encountered an error cleaning up failed sandbox \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.207480 containerd[1453]: time="2025-07-14T22:16:52.207330650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cj6qp,Uid:240cf122-725b-4d48-a6f5-1c05c9f2102a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.207757 kubelet[2516]: E0714 22:16:52.207645 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.207757 kubelet[2516]: E0714 22:16:52.207701 2516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cj6qp" Jul 14 22:16:52.207757 kubelet[2516]: E0714 22:16:52.207725 2516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cj6qp" Jul 14 22:16:52.208696 containerd[1453]: time="2025-07-14T22:16:52.207722253Z" level=error msg="Failed to destroy network for sandbox \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.208696 containerd[1453]: time="2025-07-14T22:16:52.208175074Z" level=error msg="encountered an error cleaning up failed sandbox \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 14 22:16:52.208696 containerd[1453]: time="2025-07-14T22:16:52.208206275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c5db56dc-m6q26,Uid:85e42d64-4a00-42a5-acc0-966557a6a6b9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.208875 kubelet[2516]: E0714 22:16:52.208047 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cj6qp_kube-system(240cf122-725b-4d48-a6f5-1c05c9f2102a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cj6qp_kube-system(240cf122-725b-4d48-a6f5-1c05c9f2102a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cj6qp" podUID="240cf122-725b-4d48-a6f5-1c05c9f2102a" Jul 14 22:16:52.212008 kubelet[2516]: E0714 22:16:52.211264 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.212008 kubelet[2516]: E0714 22:16:52.211298 2516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c5db56dc-m6q26" Jul 14 22:16:52.212008 kubelet[2516]: E0714 22:16:52.211316 2516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74c5db56dc-m6q26" Jul 14 22:16:52.212132 kubelet[2516]: E0714 22:16:52.211352 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74c5db56dc-m6q26_calico-system(85e42d64-4a00-42a5-acc0-966557a6a6b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74c5db56dc-m6q26_calico-system(85e42d64-4a00-42a5-acc0-966557a6a6b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74c5db56dc-m6q26" podUID="85e42d64-4a00-42a5-acc0-966557a6a6b9" Jul 14 22:16:52.222984 containerd[1453]: time="2025-07-14T22:16:52.222940706Z" level=error msg="Failed to destroy network for sandbox \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.224248 containerd[1453]: time="2025-07-14T22:16:52.224119892Z" level=error msg="encountered an error cleaning up failed sandbox \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.224248 containerd[1453]: time="2025-07-14T22:16:52.224167164Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7c28s,Uid:0b90dbe9-4134-4806-83a7-13f4785a8131,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.225208 kubelet[2516]: E0714 22:16:52.224445 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.225208 kubelet[2516]: E0714 22:16:52.224515 2516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-7c28s" Jul 14 22:16:52.225208 kubelet[2516]: E0714 22:16:52.224533 2516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-7c28s" Jul 14 22:16:52.225319 kubelet[2516]: E0714 22:16:52.224586 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-7c28s_calico-system(0b90dbe9-4134-4806-83a7-13f4785a8131)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-7c28s_calico-system(0b90dbe9-4134-4806-83a7-13f4785a8131)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-7c28s" podUID="0b90dbe9-4134-4806-83a7-13f4785a8131" Jul 14 22:16:52.228750 containerd[1453]: time="2025-07-14T22:16:52.228681915Z" level=error msg="Failed to destroy network for sandbox \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.229063 containerd[1453]: time="2025-07-14T22:16:52.229036115Z" level=error msg="encountered an error cleaning up failed sandbox \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.229161 containerd[1453]: time="2025-07-14T22:16:52.229141971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b999998d-g2dgd,Uid:9c8f1d18-f0df-451d-a336-43d00ca10c65,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.229960 containerd[1453]: time="2025-07-14T22:16:52.229146740Z" level=error msg="Failed to destroy network for sandbox \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.229960 containerd[1453]: time="2025-07-14T22:16:52.229728914Z" level=error msg="encountered an error cleaning up failed sandbox \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.229960 containerd[1453]: time="2025-07-14T22:16:52.229802026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f5cc46c99-xrsn7,Uid:a1809be6-31c3-4ebb-abc9-4c861a7f613e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.230131 kubelet[2516]: E0714 22:16:52.229350 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.230131 kubelet[2516]: E0714 22:16:52.229393 2516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55b999998d-g2dgd" Jul 14 22:16:52.230131 kubelet[2516]: E0714 22:16:52.229411 2516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55b999998d-g2dgd" Jul 14 22:16:52.230367 kubelet[2516]: E0714 22:16:52.229448 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55b999998d-g2dgd_calico-apiserver(9c8f1d18-f0df-451d-a336-43d00ca10c65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55b999998d-g2dgd_calico-apiserver(9c8f1d18-f0df-451d-a336-43d00ca10c65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55b999998d-g2dgd" podUID="9c8f1d18-f0df-451d-a336-43d00ca10c65" Jul 14 22:16:52.230367 kubelet[2516]: E0714 22:16:52.230033 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.230367 kubelet[2516]: E0714 22:16:52.230059 2516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f5cc46c99-xrsn7" Jul 14 22:16:52.230523 kubelet[2516]: E0714 22:16:52.230074 2516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f5cc46c99-xrsn7" Jul 14 22:16:52.230523 kubelet[2516]: E0714 22:16:52.230105 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f5cc46c99-xrsn7_calico-system(a1809be6-31c3-4ebb-abc9-4c861a7f613e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f5cc46c99-xrsn7_calico-system(a1809be6-31c3-4ebb-abc9-4c861a7f613e)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f5cc46c99-xrsn7" podUID="a1809be6-31c3-4ebb-abc9-4c861a7f613e" Jul 14 22:16:52.252524 containerd[1453]: time="2025-07-14T22:16:52.252464636Z" level=error msg="Failed to destroy network for sandbox \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.252880 containerd[1453]: time="2025-07-14T22:16:52.252856338Z" level=error msg="encountered an error cleaning up failed sandbox \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.252924 containerd[1453]: time="2025-07-14T22:16:52.252900745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b999998d-8sbft,Uid:57e67ca6-c616-4a8d-8e63-8c17097d1b86,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.253170 kubelet[2516]: E0714 22:16:52.253122 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.253217 kubelet[2516]: E0714 22:16:52.253184 2516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55b999998d-8sbft" Jul 14 22:16:52.253217 kubelet[2516]: E0714 22:16:52.253205 2516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55b999998d-8sbft" Jul 14 22:16:52.253277 kubelet[2516]: E0714 22:16:52.253250 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55b999998d-8sbft_calico-apiserver(57e67ca6-c616-4a8d-8e63-8c17097d1b86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-55b999998d-8sbft_calico-apiserver(57e67ca6-c616-4a8d-8e63-8c17097d1b86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55b999998d-8sbft" podUID="57e67ca6-c616-4a8d-8e63-8c17097d1b86" Jul 14 22:16:52.724672 systemd[1]: Created slice kubepods-besteffort-podeedcfc2c_c8c7_40fa_a5d9_5e29e588b0a5.slice - libcontainer container kubepods-besteffort-podeedcfc2c_c8c7_40fa_a5d9_5e29e588b0a5.slice. Jul 14 22:16:52.726639 containerd[1453]: time="2025-07-14T22:16:52.726590158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wq2kw,Uid:eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5,Namespace:calico-system,Attempt:0,}" Jul 14 22:16:52.807681 kubelet[2516]: I0714 22:16:52.807521 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Jul 14 22:16:52.810115 kubelet[2516]: I0714 22:16:52.809138 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Jul 14 22:16:52.811509 kubelet[2516]: I0714 22:16:52.811479 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Jul 14 22:16:52.811706 containerd[1453]: time="2025-07-14T22:16:52.811653575Z" level=error msg="Failed to destroy network for sandbox \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.812767 containerd[1453]: time="2025-07-14T22:16:52.812728779Z" level=error msg="encountered an error cleaning up failed sandbox \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.814737 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e-shm.mount: Deactivated successfully. 
Jul 14 22:16:52.815368 containerd[1453]: time="2025-07-14T22:16:52.815331116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wq2kw,Uid:eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.815671 kubelet[2516]: E0714 22:16:52.815482 2516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.815671 kubelet[2516]: E0714 22:16:52.815530 2516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wq2kw" Jul 14 22:16:52.815671 kubelet[2516]: E0714 22:16:52.815553 2516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wq2kw" Jul 14 22:16:52.815839 kubelet[2516]: E0714 22:16:52.815591 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wq2kw_calico-system(eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wq2kw_calico-system(eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wq2kw" podUID="eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5" Jul 14 22:16:52.816485 kubelet[2516]: I0714 22:16:52.816456 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Jul 14 22:16:52.820805 containerd[1453]: time="2025-07-14T22:16:52.820768093Z" level=info msg="StopPodSandbox for \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\"" Jul 14 22:16:52.820981 containerd[1453]: time="2025-07-14T22:16:52.820893366Z" level=info msg="StopPodSandbox for \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\"" Jul 14 22:16:52.823237 containerd[1453]: time="2025-07-14T22:16:52.822995570Z" level=info msg="StopPodSandbox for \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\"" Jul 14 22:16:52.826282 
containerd[1453]: time="2025-07-14T22:16:52.826253213Z" level=info msg="StopPodSandbox for \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\"" Jul 14 22:16:52.827391 containerd[1453]: time="2025-07-14T22:16:52.827140902Z" level=info msg="Ensure that sandbox 45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246 in task-service has been cleanup successfully" Jul 14 22:16:52.827391 containerd[1453]: time="2025-07-14T22:16:52.827157604Z" level=info msg="Ensure that sandbox 59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf in task-service has been cleanup successfully" Jul 14 22:16:52.827521 kubelet[2516]: I0714 22:16:52.827488 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Jul 14 22:16:52.828225 containerd[1453]: time="2025-07-14T22:16:52.827162384Z" level=info msg="Ensure that sandbox 4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587 in task-service has been cleanup successfully" Jul 14 22:16:52.830062 containerd[1453]: time="2025-07-14T22:16:52.830028956Z" level=info msg="StopPodSandbox for \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\"" Jul 14 22:16:52.830216 containerd[1453]: time="2025-07-14T22:16:52.830189327Z" level=info msg="Ensure that sandbox 6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee in task-service has been cleanup successfully" Jul 14 22:16:52.833317 containerd[1453]: time="2025-07-14T22:16:52.833275738Z" level=info msg="Ensure that sandbox d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488 in task-service has been cleanup successfully" Jul 14 22:16:52.836652 kubelet[2516]: I0714 22:16:52.835438 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Jul 14 22:16:52.836747 containerd[1453]: time="2025-07-14T22:16:52.835915358Z" level=info msg="StopPodSandbox for \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\"" Jul 14 22:16:52.836747 containerd[1453]: time="2025-07-14T22:16:52.836090208Z" level=info msg="Ensure that sandbox 1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d in task-service has been cleanup successfully" Jul 14 22:16:52.841601 kubelet[2516]: I0714 22:16:52.841567 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Jul 14 22:16:52.842618 containerd[1453]: time="2025-07-14T22:16:52.842583482Z" level=info msg="StopPodSandbox for \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\"" Jul 14 22:16:52.843023 containerd[1453]: time="2025-07-14T22:16:52.842998179Z" level=info msg="Ensure that sandbox 95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0 in task-service has been cleanup successfully" Jul 14 22:16:52.886510 containerd[1453]: time="2025-07-14T22:16:52.886430050Z" level=error msg="StopPodSandbox for \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\" failed" error="failed to destroy network for sandbox \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.886809 kubelet[2516]: E0714 22:16:52.886759 2516 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Jul 14 22:16:52.886910 kubelet[2516]: E0714 22:16:52.886862 2516 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf"} Jul 14 22:16:52.886968 kubelet[2516]: E0714 22:16:52.886938 2516 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c8f1d18-f0df-451d-a336-43d00ca10c65\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:16:52.887072 kubelet[2516]: E0714 22:16:52.886975 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c8f1d18-f0df-451d-a336-43d00ca10c65\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55b999998d-g2dgd" podUID="9c8f1d18-f0df-451d-a336-43d00ca10c65" Jul 14 22:16:52.887203 containerd[1453]: time="2025-07-14T22:16:52.887165062Z" level=error msg="StopPodSandbox for \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\" failed" error="failed to destroy network for sandbox \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.887495 kubelet[2516]: E0714 22:16:52.887461 2516 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Jul 14 22:16:52.887591 kubelet[2516]: E0714 22:16:52.887572 2516 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d"} Jul 14 22:16:52.887683 kubelet[2516]: E0714 22:16:52.887668 2516 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57e67ca6-c616-4a8d-8e63-8c17097d1b86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:16:52.887788 kubelet[2516]: E0714 22:16:52.887755 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57e67ca6-c616-4a8d-8e63-8c17097d1b86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55b999998d-8sbft" podUID="57e67ca6-c616-4a8d-8e63-8c17097d1b86" Jul 14 22:16:52.888505 containerd[1453]: time="2025-07-14T22:16:52.888456556Z" level=error msg="StopPodSandbox for \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\" failed" error="failed to destroy network for sandbox \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.888648 kubelet[2516]: E0714 22:16:52.888618 2516 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Jul 14 22:16:52.888681 kubelet[2516]: E0714 22:16:52.888653 2516 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488"} Jul 14 22:16:52.888681 kubelet[2516]: E0714 22:16:52.888676 2516 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"240cf122-725b-4d48-a6f5-1c05c9f2102a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:16:52.888780 kubelet[2516]: E0714 22:16:52.888694 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"240cf122-725b-4d48-a6f5-1c05c9f2102a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cj6qp" podUID="240cf122-725b-4d48-a6f5-1c05c9f2102a" Jul 14 22:16:52.890501 containerd[1453]: time="2025-07-14T22:16:52.890462182Z" level=error msg="StopPodSandbox for \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\" failed" error="failed to destroy network for sandbox \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.890747 kubelet[2516]: E0714 22:16:52.890716 2516 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Jul 14 22:16:52.890845 kubelet[2516]: E0714 22:16:52.890814 2516 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587"} Jul 14 22:16:52.891004 kubelet[2516]: E0714 22:16:52.890923 2516 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:16:52.891004 kubelet[2516]: E0714 22:16:52.890970 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nd7rw" podUID="2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc" Jul 14 22:16:52.892865 containerd[1453]: time="2025-07-14T22:16:52.892760708Z" level=error msg="StopPodSandbox for \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\" failed" error="failed to destroy network for sandbox \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.893054 kubelet[2516]: E0714 22:16:52.892991 2516 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Jul 14 22:16:52.893054 kubelet[2516]: E0714 22:16:52.893017 2516 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246"} Jul 14 22:16:52.893054 kubelet[2516]: E0714 22:16:52.893037 2516 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a1809be6-31c3-4ebb-abc9-4c861a7f613e\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:16:52.893054 kubelet[2516]: E0714 22:16:52.893055 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a1809be6-31c3-4ebb-abc9-4c861a7f613e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f5cc46c99-xrsn7" podUID="a1809be6-31c3-4ebb-abc9-4c861a7f613e" Jul 14 22:16:52.893326 kubelet[2516]: E0714 22:16:52.893126 2516 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Jul 14 22:16:52.893326 kubelet[2516]: E0714 22:16:52.893152 2516 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee"} Jul 14 22:16:52.893326 kubelet[2516]: E0714 22:16:52.893178 2516 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85e42d64-4a00-42a5-acc0-966557a6a6b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:16:52.893326 kubelet[2516]: E0714 22:16:52.893203 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85e42d64-4a00-42a5-acc0-966557a6a6b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74c5db56dc-m6q26" podUID="85e42d64-4a00-42a5-acc0-966557a6a6b9" Jul 14 22:16:52.893450 containerd[1453]: time="2025-07-14T22:16:52.893031866Z" level=error msg="StopPodSandbox for \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\" failed" error="failed to destroy network for sandbox \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.898147 containerd[1453]: time="2025-07-14T22:16:52.898114032Z" level=error 
msg="StopPodSandbox for \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\" failed" error="failed to destroy network for sandbox \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:52.898261 kubelet[2516]: E0714 22:16:52.898221 2516 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Jul 14 22:16:52.898261 kubelet[2516]: E0714 22:16:52.898258 2516 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0"} Jul 14 22:16:52.898309 kubelet[2516]: E0714 22:16:52.898279 2516 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b90dbe9-4134-4806-83a7-13f4785a8131\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:16:52.898309 kubelet[2516]: E0714 22:16:52.898296 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b90dbe9-4134-4806-83a7-13f4785a8131\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-7c28s" podUID="0b90dbe9-4134-4806-83a7-13f4785a8131" Jul 14 22:16:53.844539 kubelet[2516]: I0714 22:16:53.844502 2516 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Jul 14 22:16:53.845193 containerd[1453]: time="2025-07-14T22:16:53.845150256Z" level=info msg="StopPodSandbox for \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\"" Jul 14 22:16:53.845642 containerd[1453]: time="2025-07-14T22:16:53.845360615Z" level=info msg="Ensure that sandbox cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e in task-service has been cleanup successfully" Jul 14 22:16:53.871034 containerd[1453]: time="2025-07-14T22:16:53.870964809Z" level=error msg="StopPodSandbox for \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\" failed" error="failed to destroy network for sandbox \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:16:53.871218 kubelet[2516]: E0714 22:16:53.871173 2516 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Jul 14 22:16:53.871271 kubelet[2516]: E0714 22:16:53.871227 2516 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e"} Jul 14 22:16:53.871271 kubelet[2516]: E0714 22:16:53.871263 2516 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:16:53.871373 kubelet[2516]: E0714 22:16:53.871286 2516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wq2kw" podUID="eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5" Jul 14 22:16:55.212877 kubelet[2516]: I0714 22:16:55.212841 2516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:16:55.213280 kubelet[2516]: E0714 22:16:55.213217 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:16:55.849370 kubelet[2516]: E0714 22:16:55.849328 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:17:01.763519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3644016844.mount: Deactivated successfully. 
Jul 14 22:17:02.621560 containerd[1453]: time="2025-07-14T22:17:02.621493818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:02.622337 containerd[1453]: time="2025-07-14T22:17:02.622270068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 14 22:17:02.623479 containerd[1453]: time="2025-07-14T22:17:02.623447493Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:02.625677 containerd[1453]: time="2025-07-14T22:17:02.625622105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:02.626164 containerd[1453]: time="2025-07-14T22:17:02.626130518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 10.820509385s" Jul 14 22:17:02.626199 containerd[1453]: time="2025-07-14T22:17:02.626162640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 14 22:17:02.635840 containerd[1453]: time="2025-07-14T22:17:02.635773081Z" level=info msg="CreateContainer within sandbox \"80ba2c69852e4d735f507877d56a8438148fd9c91070e9c6db9a48a82f4f166b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 14 22:17:02.657777 containerd[1453]: time="2025-07-14T22:17:02.657729630Z" level=info msg="CreateContainer within sandbox \"80ba2c69852e4d735f507877d56a8438148fd9c91070e9c6db9a48a82f4f166b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"29cbdd2c67da307da2dd2aab4e3fb8d9f6866b152aed315104ea1839b2efcd41\"" Jul 14 22:17:02.658353 containerd[1453]: time="2025-07-14T22:17:02.658164852Z" level=info msg="StartContainer for \"29cbdd2c67da307da2dd2aab4e3fb8d9f6866b152aed315104ea1839b2efcd41\"" Jul 14 22:17:02.716958 systemd[1]: Started cri-containerd-29cbdd2c67da307da2dd2aab4e3fb8d9f6866b152aed315104ea1839b2efcd41.scope - libcontainer container 29cbdd2c67da307da2dd2aab4e3fb8d9f6866b152aed315104ea1839b2efcd41. Jul 14 22:17:02.749747 containerd[1453]: time="2025-07-14T22:17:02.749699824Z" level=info msg="StartContainer for \"29cbdd2c67da307da2dd2aab4e3fb8d9f6866b152aed315104ea1839b2efcd41\" returns successfully" Jul 14 22:17:02.830425 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 14 22:17:02.831099 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 14 22:17:03.197508 kubelet[2516]: I0714 22:17:03.197234 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tgkc4" podStartSLOduration=1.787088319 podStartE2EDuration="23.19717095s" podCreationTimestamp="2025-07-14 22:16:40 +0000 UTC" firstStartedPulling="2025-07-14 22:16:41.216724655 +0000 UTC m=+22.589277551" lastFinishedPulling="2025-07-14 22:17:02.626807286 +0000 UTC m=+43.999360182" observedRunningTime="2025-07-14 22:17:03.197146733 +0000 UTC m=+44.569699639" watchObservedRunningTime="2025-07-14 22:17:03.19717095 +0000 UTC m=+44.569723846" Jul 14 22:17:03.234430 containerd[1453]: time="2025-07-14T22:17:03.234381530Z" level=info msg="StopPodSandbox for \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\"" Jul 14 22:17:03.376388 containerd[1453]: 2025-07-14 22:17:03.298 [INFO][3806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Jul 14 22:17:03.376388 containerd[1453]: 2025-07-14 22:17:03.299 [INFO][3806] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" iface="eth0" netns="/var/run/netns/cni-302ae82b-903a-af5d-32ec-a71d1125b67e" Jul 14 22:17:03.376388 containerd[1453]: 2025-07-14 22:17:03.299 [INFO][3806] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" iface="eth0" netns="/var/run/netns/cni-302ae82b-903a-af5d-32ec-a71d1125b67e" Jul 14 22:17:03.376388 containerd[1453]: 2025-07-14 22:17:03.300 [INFO][3806] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" iface="eth0" netns="/var/run/netns/cni-302ae82b-903a-af5d-32ec-a71d1125b67e" Jul 14 22:17:03.376388 containerd[1453]: 2025-07-14 22:17:03.300 [INFO][3806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Jul 14 22:17:03.376388 containerd[1453]: 2025-07-14 22:17:03.300 [INFO][3806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Jul 14 22:17:03.376388 containerd[1453]: 2025-07-14 22:17:03.360 [INFO][3820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" HandleID="k8s-pod-network.45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Workload="localhost-k8s-whisker--f5cc46c99--xrsn7-eth0" Jul 14 22:17:03.376388 containerd[1453]: 2025-07-14 22:17:03.361 [INFO][3820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:03.376388 containerd[1453]: 2025-07-14 22:17:03.361 [INFO][3820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:03.376388 containerd[1453]: 2025-07-14 22:17:03.368 [WARNING][3820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" HandleID="k8s-pod-network.45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Workload="localhost-k8s-whisker--f5cc46c99--xrsn7-eth0" Jul 14 22:17:03.376388 containerd[1453]: 2025-07-14 22:17:03.368 [INFO][3820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" HandleID="k8s-pod-network.45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Workload="localhost-k8s-whisker--f5cc46c99--xrsn7-eth0" Jul 14 22:17:03.376388 containerd[1453]: 2025-07-14 22:17:03.369 [INFO][3820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:03.376388 containerd[1453]: 2025-07-14 22:17:03.373 [INFO][3806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Jul 14 22:17:03.376776 containerd[1453]: time="2025-07-14T22:17:03.376511166Z" level=info msg="TearDown network for sandbox \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\" successfully" Jul 14 22:17:03.376776 containerd[1453]: time="2025-07-14T22:17:03.376536416Z" level=info msg="StopPodSandbox for \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\" returns successfully" Jul 14 22:17:03.379078 systemd[1]: run-netns-cni\x2d302ae82b\x2d903a\x2daf5d\x2d32ec\x2da71d1125b67e.mount: Deactivated successfully. Jul 14 22:17:03.493478 kubelet[2516]: I0714 22:17:03.493322 2516 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a1809be6-31c3-4ebb-abc9-4c861a7f613e-whisker-backend-key-pair\") pod \"a1809be6-31c3-4ebb-abc9-4c861a7f613e\" (UID: \"a1809be6-31c3-4ebb-abc9-4c861a7f613e\") " Jul 14 22:17:03.493478 kubelet[2516]: I0714 22:17:03.493391 2516 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzp8x\" (UniqueName: \"kubernetes.io/projected/a1809be6-31c3-4ebb-abc9-4c861a7f613e-kube-api-access-dzp8x\") pod \"a1809be6-31c3-4ebb-abc9-4c861a7f613e\" (UID: \"a1809be6-31c3-4ebb-abc9-4c861a7f613e\") " Jul 14 22:17:03.493478 kubelet[2516]: I0714 22:17:03.493425 2516 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1809be6-31c3-4ebb-abc9-4c861a7f613e-whisker-ca-bundle\") pod \"a1809be6-31c3-4ebb-abc9-4c861a7f613e\" (UID: \"a1809be6-31c3-4ebb-abc9-4c861a7f613e\") " Jul 14 22:17:03.494841 kubelet[2516]: I0714 22:17:03.494791 2516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1809be6-31c3-4ebb-abc9-4c861a7f613e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a1809be6-31c3-4ebb-abc9-4c861a7f613e" (UID: "a1809be6-31c3-4ebb-abc9-4c861a7f613e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 22:17:03.498007 kubelet[2516]: I0714 22:17:03.497922 2516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1809be6-31c3-4ebb-abc9-4c861a7f613e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a1809be6-31c3-4ebb-abc9-4c861a7f613e" (UID: "a1809be6-31c3-4ebb-abc9-4c861a7f613e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 22:17:03.498947 kubelet[2516]: I0714 22:17:03.498893 2516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1809be6-31c3-4ebb-abc9-4c861a7f613e-kube-api-access-dzp8x" (OuterVolumeSpecName: "kube-api-access-dzp8x") pod "a1809be6-31c3-4ebb-abc9-4c861a7f613e" (UID: "a1809be6-31c3-4ebb-abc9-4c861a7f613e"). InnerVolumeSpecName "kube-api-access-dzp8x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 22:17:03.499778 systemd[1]: var-lib-kubelet-pods-a1809be6\x2d31c3\x2d4ebb\x2dabc9\x2d4c861a7f613e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddzp8x.mount: Deactivated successfully. Jul 14 22:17:03.499899 systemd[1]: var-lib-kubelet-pods-a1809be6\x2d31c3\x2d4ebb\x2dabc9\x2d4c861a7f613e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 14 22:17:03.594287 kubelet[2516]: I0714 22:17:03.594248 2516 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1809be6-31c3-4ebb-abc9-4c861a7f613e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 14 22:17:03.594287 kubelet[2516]: I0714 22:17:03.594269 2516 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a1809be6-31c3-4ebb-abc9-4c861a7f613e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 14 22:17:03.594287 kubelet[2516]: I0714 22:17:03.594278 2516 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dzp8x\" (UniqueName: \"kubernetes.io/projected/a1809be6-31c3-4ebb-abc9-4c861a7f613e-kube-api-access-dzp8x\") on node \"localhost\" DevicePath \"\"" Jul 14 22:17:03.719998 containerd[1453]: time="2025-07-14T22:17:03.719587927Z" level=info msg="StopPodSandbox for \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\"" Jul 14 22:17:03.719998 containerd[1453]: time="2025-07-14T22:17:03.719628035Z" level=info msg="StopPodSandbox for \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\"" Jul 14 22:17:03.719998 containerd[1453]: time="2025-07-14T22:17:03.719602105Z" level=info msg="StopPodSandbox for \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\"" Jul 14 22:17:03.719998 containerd[1453]: time="2025-07-14T22:17:03.719602105Z" level=info msg="StopPodSandbox for \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\"" Jul 14 22:17:03.828921 containerd[1453]: 2025-07-14 22:17:03.779 [INFO][3893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Jul 14 22:17:03.828921 containerd[1453]: 2025-07-14 22:17:03.779 [INFO][3893] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" iface="eth0" netns="/var/run/netns/cni-51eeab8d-afc4-b793-d324-d27250087f31" Jul 14 22:17:03.828921 containerd[1453]: 2025-07-14 22:17:03.780 [INFO][3893] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" iface="eth0" netns="/var/run/netns/cni-51eeab8d-afc4-b793-d324-d27250087f31" Jul 14 22:17:03.828921 containerd[1453]: 2025-07-14 22:17:03.780 [INFO][3893] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" iface="eth0" netns="/var/run/netns/cni-51eeab8d-afc4-b793-d324-d27250087f31" Jul 14 22:17:03.828921 containerd[1453]: 2025-07-14 22:17:03.780 [INFO][3893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Jul 14 22:17:03.828921 containerd[1453]: 2025-07-14 22:17:03.780 [INFO][3893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Jul 14 22:17:03.828921 containerd[1453]: 2025-07-14 22:17:03.811 [INFO][3924] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" HandleID="k8s-pod-network.1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Workload="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:03.828921 containerd[1453]: 2025-07-14 22:17:03.811 [INFO][3924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:03.828921 containerd[1453]: 2025-07-14 22:17:03.812 [INFO][3924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:03.828921 containerd[1453]: 2025-07-14 22:17:03.819 [WARNING][3924] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" HandleID="k8s-pod-network.1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Workload="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:03.828921 containerd[1453]: 2025-07-14 22:17:03.819 [INFO][3924] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" HandleID="k8s-pod-network.1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Workload="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:03.828921 containerd[1453]: 2025-07-14 22:17:03.820 [INFO][3924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:03.828921 containerd[1453]: 2025-07-14 22:17:03.823 [INFO][3893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Jul 14 22:17:03.829729 containerd[1453]: time="2025-07-14T22:17:03.829293397Z" level=info msg="TearDown network for sandbox \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\" successfully" Jul 14 22:17:03.829729 containerd[1453]: time="2025-07-14T22:17:03.829330659Z" level=info msg="StopPodSandbox for \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\" returns successfully" Jul 14 22:17:03.831585 systemd[1]: run-netns-cni\x2d51eeab8d\x2dafc4\x2db793\x2dd324\x2dd27250087f31.mount: Deactivated successfully. Jul 14 22:17:03.833688 containerd[1453]: time="2025-07-14T22:17:03.833084619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b999998d-8sbft,Uid:57e67ca6-c616-4a8d-8e63-8c17097d1b86,Namespace:calico-apiserver,Attempt:1,}" Jul 14 22:17:03.838218 containerd[1453]: 2025-07-14 22:17:03.786 [INFO][3888] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Jul 14 22:17:03.838218 containerd[1453]: 2025-07-14 22:17:03.786 [INFO][3888] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" iface="eth0" netns="/var/run/netns/cni-8417eb4d-db17-9edd-982b-0ef7d9a3bc33" Jul 14 22:17:03.838218 containerd[1453]: 2025-07-14 22:17:03.786 [INFO][3888] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" iface="eth0" netns="/var/run/netns/cni-8417eb4d-db17-9edd-982b-0ef7d9a3bc33" Jul 14 22:17:03.838218 containerd[1453]: 2025-07-14 22:17:03.789 [INFO][3888] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" iface="eth0" netns="/var/run/netns/cni-8417eb4d-db17-9edd-982b-0ef7d9a3bc33" Jul 14 22:17:03.838218 containerd[1453]: 2025-07-14 22:17:03.789 [INFO][3888] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Jul 14 22:17:03.838218 containerd[1453]: 2025-07-14 22:17:03.789 [INFO][3888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Jul 14 22:17:03.838218 containerd[1453]: 2025-07-14 22:17:03.817 [INFO][3931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" HandleID="k8s-pod-network.6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Workload="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:03.838218 containerd[1453]: 2025-07-14 22:17:03.817 [INFO][3931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:03.838218 containerd[1453]: 2025-07-14 22:17:03.820 [INFO][3931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:03.838218 containerd[1453]: 2025-07-14 22:17:03.827 [WARNING][3931] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" HandleID="k8s-pod-network.6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Workload="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:03.838218 containerd[1453]: 2025-07-14 22:17:03.827 [INFO][3931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" HandleID="k8s-pod-network.6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Workload="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:03.838218 containerd[1453]: 2025-07-14 22:17:03.828 [INFO][3931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:03.838218 containerd[1453]: 2025-07-14 22:17:03.834 [INFO][3888] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Jul 14 22:17:03.840211 containerd[1453]: time="2025-07-14T22:17:03.839006104Z" level=info msg="TearDown network for sandbox \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\" successfully" Jul 14 22:17:03.840211 containerd[1453]: time="2025-07-14T22:17:03.839036102Z" level=info msg="StopPodSandbox for \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\" returns successfully" Jul 14 22:17:03.840211 containerd[1453]: time="2025-07-14T22:17:03.840086000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c5db56dc-m6q26,Uid:85e42d64-4a00-42a5-acc0-966557a6a6b9,Namespace:calico-system,Attempt:1,}" Jul 14 22:17:03.842241 systemd[1]: run-netns-cni\x2d8417eb4d\x2ddb17\x2d9edd\x2d982b\x2d0ef7d9a3bc33.mount: Deactivated successfully. Jul 14 22:17:03.844029 containerd[1453]: 2025-07-14 22:17:03.772 [INFO][3873] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Jul 14 22:17:03.844029 containerd[1453]: 2025-07-14 22:17:03.773 [INFO][3873] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" iface="eth0" netns="/var/run/netns/cni-092e87ad-1d46-e788-fc6d-0658febfbd49" Jul 14 22:17:03.844029 containerd[1453]: 2025-07-14 22:17:03.773 [INFO][3873] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" iface="eth0" netns="/var/run/netns/cni-092e87ad-1d46-e788-fc6d-0658febfbd49" Jul 14 22:17:03.844029 containerd[1453]: 2025-07-14 22:17:03.773 [INFO][3873] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" iface="eth0" netns="/var/run/netns/cni-092e87ad-1d46-e788-fc6d-0658febfbd49" Jul 14 22:17:03.844029 containerd[1453]: 2025-07-14 22:17:03.773 [INFO][3873] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Jul 14 22:17:03.844029 containerd[1453]: 2025-07-14 22:17:03.773 [INFO][3873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Jul 14 22:17:03.844029 containerd[1453]: 2025-07-14 22:17:03.818 [INFO][3915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" HandleID="k8s-pod-network.59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Workload="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:03.844029 containerd[1453]: 2025-07-14 22:17:03.818 [INFO][3915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:03.844029 containerd[1453]: 2025-07-14 22:17:03.828 [INFO][3915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:03.844029 containerd[1453]: 2025-07-14 22:17:03.833 [WARNING][3915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" HandleID="k8s-pod-network.59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Workload="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:03.844029 containerd[1453]: 2025-07-14 22:17:03.834 [INFO][3915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" HandleID="k8s-pod-network.59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Workload="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:03.844029 containerd[1453]: 2025-07-14 22:17:03.835 [INFO][3915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:03.844029 containerd[1453]: 2025-07-14 22:17:03.838 [INFO][3873] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Jul 14 22:17:03.844445 containerd[1453]: time="2025-07-14T22:17:03.844195085Z" level=info msg="TearDown network for sandbox \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\" successfully" Jul 14 22:17:03.844445 containerd[1453]: time="2025-07-14T22:17:03.844219572Z" level=info msg="StopPodSandbox for \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\" returns successfully" Jul 14 22:17:03.844696 containerd[1453]: time="2025-07-14T22:17:03.844668278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b999998d-g2dgd,Uid:9c8f1d18-f0df-451d-a336-43d00ca10c65,Namespace:calico-apiserver,Attempt:1,}" Jul 14 22:17:03.846490 systemd[1]: run-netns-cni\x2d092e87ad\x2d1d46\x2de788\x2dfc6d\x2d0658febfbd49.mount: Deactivated successfully. Jul 14 22:17:03.851075 containerd[1453]: 2025-07-14 22:17:03.769 [INFO][3878] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Jul 14 22:17:03.851075 containerd[1453]: 2025-07-14 22:17:03.769 [INFO][3878] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" iface="eth0" netns="/var/run/netns/cni-dce624d4-3419-1b5c-0040-ad34abf1a513" Jul 14 22:17:03.851075 containerd[1453]: 2025-07-14 22:17:03.771 [INFO][3878] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" iface="eth0" netns="/var/run/netns/cni-dce624d4-3419-1b5c-0040-ad34abf1a513" Jul 14 22:17:03.851075 containerd[1453]: 2025-07-14 22:17:03.772 [INFO][3878] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" iface="eth0" netns="/var/run/netns/cni-dce624d4-3419-1b5c-0040-ad34abf1a513" Jul 14 22:17:03.851075 containerd[1453]: 2025-07-14 22:17:03.772 [INFO][3878] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Jul 14 22:17:03.851075 containerd[1453]: 2025-07-14 22:17:03.772 [INFO][3878] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Jul 14 22:17:03.851075 containerd[1453]: 2025-07-14 22:17:03.817 [INFO][3912] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" HandleID="k8s-pod-network.4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Workload="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:03.851075 containerd[1453]: 2025-07-14 22:17:03.818 [INFO][3912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:03.851075 containerd[1453]: 2025-07-14 22:17:03.835 [INFO][3912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:03.851075 containerd[1453]: 2025-07-14 22:17:03.842 [WARNING][3912] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" HandleID="k8s-pod-network.4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Workload="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:03.851075 containerd[1453]: 2025-07-14 22:17:03.842 [INFO][3912] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" HandleID="k8s-pod-network.4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Workload="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:03.851075 containerd[1453]: 2025-07-14 22:17:03.844 [INFO][3912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:03.851075 containerd[1453]: 2025-07-14 22:17:03.847 [INFO][3878] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Jul 14 22:17:03.851455 containerd[1453]: time="2025-07-14T22:17:03.851429376Z" level=info msg="TearDown network for sandbox \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\" successfully" Jul 14 22:17:03.851455 containerd[1453]: time="2025-07-14T22:17:03.851453482Z" level=info msg="StopPodSandbox for \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\" returns successfully" Jul 14 22:17:03.851775 kubelet[2516]: E0714 22:17:03.851751 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:17:03.852473 containerd[1453]: time="2025-07-14T22:17:03.852306840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nd7rw,Uid:2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc,Namespace:kube-system,Attempt:1,}" Jul 14 22:17:03.853212 systemd[1]: run-netns-cni\x2ddce624d4\x2d3419\x2d1b5c\x2d0040\x2dad34abf1a513.mount: Deactivated successfully. 
Jul 14 22:17:03.868004 kubelet[2516]: I0714 22:17:03.867981 2516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:17:03.873852 systemd[1]: Removed slice kubepods-besteffort-poda1809be6_31c3_4ebb_abc9_4c861a7f613e.slice - libcontainer container kubepods-besteffort-poda1809be6_31c3_4ebb_abc9_4c861a7f613e.slice. Jul 14 22:17:04.309302 systemd[1]: Created slice kubepods-besteffort-poda53e33ad_ef3c_465f_aaf1_2558458185ed.slice - libcontainer container kubepods-besteffort-poda53e33ad_ef3c_465f_aaf1_2558458185ed.slice. Jul 14 22:17:04.400613 kubelet[2516]: I0714 22:17:04.400476 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7v8j\" (UniqueName: \"kubernetes.io/projected/a53e33ad-ef3c-465f-aaf1-2558458185ed-kube-api-access-s7v8j\") pod \"whisker-78b748b8f7-7ljwr\" (UID: \"a53e33ad-ef3c-465f-aaf1-2558458185ed\") " pod="calico-system/whisker-78b748b8f7-7ljwr" Jul 14 22:17:04.400613 kubelet[2516]: I0714 22:17:04.400524 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a53e33ad-ef3c-465f-aaf1-2558458185ed-whisker-ca-bundle\") pod \"whisker-78b748b8f7-7ljwr\" (UID: \"a53e33ad-ef3c-465f-aaf1-2558458185ed\") " pod="calico-system/whisker-78b748b8f7-7ljwr" Jul 14 22:17:04.400613 kubelet[2516]: I0714 22:17:04.400548 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a53e33ad-ef3c-465f-aaf1-2558458185ed-whisker-backend-key-pair\") pod \"whisker-78b748b8f7-7ljwr\" (UID: \"a53e33ad-ef3c-465f-aaf1-2558458185ed\") " pod="calico-system/whisker-78b748b8f7-7ljwr" Jul 14 22:17:04.467044 systemd-networkd[1387]: cali66244ef7a24: Link UP Jul 14 22:17:04.468686 systemd-networkd[1387]: cali66244ef7a24: Gained carrier Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.342 [INFO][3959] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.354 [INFO][3959] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0 calico-apiserver-55b999998d- calico-apiserver 57e67ca6-c616-4a8d-8e63-8c17097d1b86 916 0 2025-07-14 22:16:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55b999998d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55b999998d-8sbft eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali66244ef7a24 [] [] }} ContainerID="a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-8sbft" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--8sbft-" Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.354 [INFO][3959] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-8sbft" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.387 [INFO][4004] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" HandleID="k8s-pod-network.a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" Workload="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.387 [INFO][4004] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" HandleID="k8s-pod-network.a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" Workload="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55b999998d-8sbft", "timestamp":"2025-07-14 22:17:04.387129511 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.387 [INFO][4004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.387 [INFO][4004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.387 [INFO][4004] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.395 [INFO][4004] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" host="localhost" Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.411 [INFO][4004] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.417 [INFO][4004] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.419 [INFO][4004] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.424 [INFO][4004] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.424 [INFO][4004] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" host="localhost" Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.425 [INFO][4004] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856 Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.429 [INFO][4004] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" host="localhost" Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.439 [INFO][4004] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" host="localhost" Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.439 [INFO][4004] ipam/ipam.go 878: Auto-assigned 
1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" host="localhost" Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.439 [INFO][4004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:04.493825 containerd[1453]: 2025-07-14 22:17:04.440 [INFO][4004] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" HandleID="k8s-pod-network.a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" Workload="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:04.494362 containerd[1453]: 2025-07-14 22:17:04.450 [INFO][3959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-8sbft" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0", GenerateName:"calico-apiserver-55b999998d-", Namespace:"calico-apiserver", SelfLink:"", UID:"57e67ca6-c616-4a8d-8e63-8c17097d1b86", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b999998d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55b999998d-8sbft", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66244ef7a24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:04.494362 containerd[1453]: 2025-07-14 22:17:04.450 [INFO][3959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-8sbft" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:04.494362 containerd[1453]: 2025-07-14 22:17:04.450 [INFO][3959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66244ef7a24 ContainerID="a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-8sbft" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:04.494362 containerd[1453]: 2025-07-14 22:17:04.468 [INFO][3959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" Namespace="calico-apiserver" 
Pod="calico-apiserver-55b999998d-8sbft" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:04.494362 containerd[1453]: 2025-07-14 22:17:04.470 [INFO][3959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-8sbft" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0", GenerateName:"calico-apiserver-55b999998d-", Namespace:"calico-apiserver", SelfLink:"", UID:"57e67ca6-c616-4a8d-8e63-8c17097d1b86", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b999998d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856", Pod:"calico-apiserver-55b999998d-8sbft", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66244ef7a24", MAC:"0a:ec:aa:30:36:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:04.494362 containerd[1453]: 2025-07-14 22:17:04.486 [INFO][3959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-8sbft" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:04.579870 containerd[1453]: time="2025-07-14T22:17:04.579712491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:17:04.580110 containerd[1453]: time="2025-07-14T22:17:04.580056796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:17:04.580261 containerd[1453]: time="2025-07-14T22:17:04.580191906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:04.580717 containerd[1453]: time="2025-07-14T22:17:04.580596006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:04.622045 containerd[1453]: time="2025-07-14T22:17:04.620239046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78b748b8f7-7ljwr,Uid:a53e33ad-ef3c-465f-aaf1-2558458185ed,Namespace:calico-system,Attempt:0,}" Jul 14 22:17:04.625200 systemd-networkd[1387]: calif10ec7d4a61: Link UP Jul 14 22:17:04.629423 systemd-networkd[1387]: calif10ec7d4a61: Gained carrier Jul 14 22:17:04.634025 systemd[1]: Started cri-containerd-a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856.scope - libcontainer container a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856. Jul 14 22:17:04.658118 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.344 [INFO][3948] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.356 [INFO][3948] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0 calico-kube-controllers-74c5db56dc- calico-system 85e42d64-4a00-42a5-acc0-966557a6a6b9 917 0 2025-07-14 22:16:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74c5db56dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-74c5db56dc-m6q26 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif10ec7d4a61 [] [] }} ContainerID="fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" Namespace="calico-system" Pod="calico-kube-controllers-74c5db56dc-m6q26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-" Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.356 [INFO][3948] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" Namespace="calico-system" Pod="calico-kube-controllers-74c5db56dc-m6q26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.388 [INFO][4006] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" HandleID="k8s-pod-network.fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" Workload="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.388 [INFO][4006] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" HandleID="k8s-pod-network.fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" Workload="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-74c5db56dc-m6q26", "timestamp":"2025-07-14 22:17:04.388779967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.389 [INFO][4006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.440 [INFO][4006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.441 [INFO][4006] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.510 [INFO][4006] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" host="localhost" Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.529 [INFO][4006] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.550 [INFO][4006] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.557 [INFO][4006] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.566 [INFO][4006] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.566 [INFO][4006] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" host="localhost" Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.568 [INFO][4006] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03 Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.572 [INFO][4006] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" host="localhost" Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.580 [INFO][4006] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" host="localhost" Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.585 [INFO][4006] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" host="localhost" Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.587 [INFO][4006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:17:04.662124 containerd[1453]: 2025-07-14 22:17:04.588 [INFO][4006] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" HandleID="k8s-pod-network.fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" Workload="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:04.662814 containerd[1453]: 2025-07-14 22:17:04.596 [INFO][3948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" Namespace="calico-system" Pod="calico-kube-controllers-74c5db56dc-m6q26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0", GenerateName:"calico-kube-controllers-74c5db56dc-", Namespace:"calico-system", SelfLink:"", UID:"85e42d64-4a00-42a5-acc0-966557a6a6b9", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c5db56dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-74c5db56dc-m6q26", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif10ec7d4a61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:04.662814 containerd[1453]: 2025-07-14 22:17:04.597 [INFO][3948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" Namespace="calico-system" Pod="calico-kube-controllers-74c5db56dc-m6q26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:04.662814 containerd[1453]: 2025-07-14 22:17:04.597 [INFO][3948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif10ec7d4a61 ContainerID="fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" Namespace="calico-system" Pod="calico-kube-controllers-74c5db56dc-m6q26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:04.662814 containerd[1453]: 2025-07-14 22:17:04.634 [INFO][3948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" Namespace="calico-system" Pod="calico-kube-controllers-74c5db56dc-m6q26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:04.662814 containerd[1453]: 2025-07-14 22:17:04.637 [INFO][3948] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" Namespace="calico-system" Pod="calico-kube-controllers-74c5db56dc-m6q26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0", GenerateName:"calico-kube-controllers-74c5db56dc-", Namespace:"calico-system", SelfLink:"", UID:"85e42d64-4a00-42a5-acc0-966557a6a6b9", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c5db56dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03", Pod:"calico-kube-controllers-74c5db56dc-m6q26", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif10ec7d4a61", MAC:"be:1a:7c:d1:b3:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:04.662814 containerd[1453]: 2025-07-14 22:17:04.645 [INFO][3948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03" Namespace="calico-system" Pod="calico-kube-controllers-74c5db56dc-m6q26" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:04.692568 systemd-networkd[1387]: cali2019f0df849: Link UP Jul 14 22:17:04.694758 systemd-networkd[1387]: cali2019f0df849: Gained carrier Jul 14 22:17:04.709916 containerd[1453]: time="2025-07-14T22:17:04.708123110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:17:04.709916 containerd[1453]: time="2025-07-14T22:17:04.708179710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:17:04.709916 containerd[1453]: time="2025-07-14T22:17:04.708298408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:04.709916 containerd[1453]: time="2025-07-14T22:17:04.708651650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:04.715733 kernel: bpftool[4255]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 14 22:17:04.731175 containerd[1453]: time="2025-07-14T22:17:04.731109844Z" level=info msg="StopPodSandbox for \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\"" Jul 14 22:17:04.737171 kubelet[2516]: I0714 22:17:04.737140 2516 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1809be6-31c3-4ebb-abc9-4c861a7f613e" path="/var/lib/kubelet/pods/a1809be6-31c3-4ebb-abc9-4c861a7f613e/volumes" Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.349 [INFO][3977] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.364 [INFO][3977] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0 coredns-674b8bbfcf- kube-system 2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc 914 0 2025-07-14 22:16:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-nd7rw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2019f0df849 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7rw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nd7rw-" Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.364 [INFO][3977] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7rw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.398 [INFO][4017] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" HandleID="k8s-pod-network.3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" Workload="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.398 [INFO][4017] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" HandleID="k8s-pod-network.3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" Workload="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-nd7rw", "timestamp":"2025-07-14 22:17:04.398014279 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.398 [INFO][4017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.585 [INFO][4017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.585 [INFO][4017] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.601 [INFO][4017] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" host="localhost" Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.630 [INFO][4017] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.637 [INFO][4017] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.642 [INFO][4017] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.656 [INFO][4017] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.656 [INFO][4017] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" host="localhost" Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.663 [INFO][4017] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6 Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.670 [INFO][4017] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" host="localhost" Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.684 [INFO][4017] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" host="localhost" Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.685 [INFO][4017] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" host="localhost" Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.685 [INFO][4017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:17:04.738441 containerd[1453]: 2025-07-14 22:17:04.685 [INFO][4017] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" HandleID="k8s-pod-network.3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" Workload="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:04.739080 containerd[1453]: 2025-07-14 22:17:04.689 [INFO][3977] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7rw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-nd7rw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2019f0df849", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:04.739080 containerd[1453]: 2025-07-14 22:17:04.689 [INFO][3977] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7rw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:04.739080 containerd[1453]: 2025-07-14 22:17:04.689 [INFO][3977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2019f0df849 ContainerID="3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7rw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:04.739080 containerd[1453]: 2025-07-14 22:17:04.695 [INFO][3977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7rw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:04.739080 
containerd[1453]: 2025-07-14 22:17:04.696 [INFO][3977] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7rw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6", Pod:"coredns-674b8bbfcf-nd7rw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2019f0df849", MAC:"3a:4b:e9:75:88:01", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:04.739080 containerd[1453]: 2025-07-14 22:17:04.712 [INFO][3977] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6" Namespace="kube-system" Pod="coredns-674b8bbfcf-nd7rw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:04.740684 containerd[1453]: time="2025-07-14T22:17:04.740358614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b999998d-8sbft,Uid:57e67ca6-c616-4a8d-8e63-8c17097d1b86,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856\"" Jul 14 22:17:04.741847 systemd[1]: Started cri-containerd-fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03.scope - libcontainer container fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03. Jul 14 22:17:04.745479 containerd[1453]: time="2025-07-14T22:17:04.745442355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 22:17:04.777452 systemd-networkd[1387]: cali3a8b7d7ab80: Link UP Jul 14 22:17:04.778107 systemd-networkd[1387]: cali3a8b7d7ab80: Gained carrier Jul 14 22:17:04.783276 containerd[1453]: time="2025-07-14T22:17:04.782743175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:17:04.787469 containerd[1453]: time="2025-07-14T22:17:04.785792670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:17:04.787469 containerd[1453]: time="2025-07-14T22:17:04.785841364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:04.787469 containerd[1453]: time="2025-07-14T22:17:04.785938422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.358 [INFO][3969] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.372 [INFO][3969] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0 calico-apiserver-55b999998d- calico-apiserver 9c8f1d18-f0df-451d-a336-43d00ca10c65 915 0 2025-07-14 22:16:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55b999998d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55b999998d-g2dgd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3a8b7d7ab80 [] [] }} ContainerID="0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-g2dgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--g2dgd-" Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.373 [INFO][3969] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-g2dgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.437 [INFO][4023] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" HandleID="k8s-pod-network.0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" Workload="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.437 [INFO][4023] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" HandleID="k8s-pod-network.0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" Workload="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034ced0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55b999998d-g2dgd", "timestamp":"2025-07-14 22:17:04.437118769 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.437 [INFO][4023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.688 [INFO][4023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.688 [INFO][4023] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.700 [INFO][4023] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" host="localhost" Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.734 [INFO][4023] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.741 [INFO][4023] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.744 [INFO][4023] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.746 [INFO][4023] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.746 [INFO][4023] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" host="localhost" Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.750 [INFO][4023] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1 Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.754 [INFO][4023] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" host="localhost" Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.760 [INFO][4023] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" host="localhost" Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.760 [INFO][4023] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" host="localhost" Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.760 [INFO][4023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:17:04.798927 containerd[1453]: 2025-07-14 22:17:04.760 [INFO][4023] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" HandleID="k8s-pod-network.0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" Workload="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:04.799598 containerd[1453]: 2025-07-14 22:17:04.769 [INFO][3969] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-g2dgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0", GenerateName:"calico-apiserver-55b999998d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c8f1d18-f0df-451d-a336-43d00ca10c65", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b999998d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55b999998d-g2dgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a8b7d7ab80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:04.799598 containerd[1453]: 2025-07-14 22:17:04.770 [INFO][3969] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-g2dgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:04.799598 containerd[1453]: 2025-07-14 22:17:04.770 [INFO][3969] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a8b7d7ab80 ContainerID="0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-g2dgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:04.799598 containerd[1453]: 2025-07-14 22:17:04.778 [INFO][3969] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-g2dgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:04.799598 containerd[1453]: 2025-07-14 22:17:04.781 [INFO][3969] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-g2dgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0", GenerateName:"calico-apiserver-55b999998d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c8f1d18-f0df-451d-a336-43d00ca10c65", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b999998d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1", Pod:"calico-apiserver-55b999998d-g2dgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a8b7d7ab80", MAC:"36:7b:e5:04:e1:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:04.799598 containerd[1453]: 2025-07-14 22:17:04.794 [INFO][3969] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1" Namespace="calico-apiserver" Pod="calico-apiserver-55b999998d-g2dgd" WorkloadEndpoint="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:04.805652 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:17:04.810950 systemd[1]: Started cri-containerd-3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6.scope - libcontainer container 3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6. Jul 14 22:17:04.847390 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:17:04.849672 containerd[1453]: time="2025-07-14T22:17:04.848233119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:17:04.849672 containerd[1453]: time="2025-07-14T22:17:04.848302463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:17:04.850364 containerd[1453]: time="2025-07-14T22:17:04.848321429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:04.851059 containerd[1453]: time="2025-07-14T22:17:04.851026370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:04.864309 containerd[1453]: time="2025-07-14T22:17:04.864084632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74c5db56dc-m6q26,Uid:85e42d64-4a00-42a5-acc0-966557a6a6b9,Namespace:calico-system,Attempt:1,} returns sandbox id \"fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03\"" Jul 14 22:17:04.887261 systemd[1]: Started cri-containerd-0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1.scope - libcontainer container 0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1. Jul 14 22:17:04.897488 systemd-networkd[1387]: cali7e2444a031f: Link UP Jul 14 22:17:04.899388 systemd-networkd[1387]: cali7e2444a031f: Gained carrier Jul 14 22:17:04.907134 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:17:04.915703 containerd[1453]: time="2025-07-14T22:17:04.915651474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nd7rw,Uid:2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc,Namespace:kube-system,Attempt:1,} returns sandbox id \"3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6\"" Jul 14 22:17:04.916747 kubelet[2516]: E0714 22:17:04.916715 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.731 [INFO][4208] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--78b748b8f7--7ljwr-eth0 whisker-78b748b8f7- calico-system a53e33ad-ef3c-465f-aaf1-2558458185ed 933 0 2025-07-14 22:17:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78b748b8f7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-78b748b8f7-7ljwr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7e2444a031f [] [] }} ContainerID="40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" Namespace="calico-system" Pod="whisker-78b748b8f7-7ljwr" WorkloadEndpoint="localhost-k8s-whisker--78b748b8f7--7ljwr-" Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.734 [INFO][4208] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" Namespace="calico-system" Pod="whisker-78b748b8f7-7ljwr" WorkloadEndpoint="localhost-k8s-whisker--78b748b8f7--7ljwr-eth0" Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.785 [INFO][4288] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" HandleID="k8s-pod-network.40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" Workload="localhost-k8s-whisker--78b748b8f7--7ljwr-eth0" Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.785 [INFO][4288] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" HandleID="k8s-pod-network.40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" Workload="localhost-k8s-whisker--78b748b8f7--7ljwr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"whisker-78b748b8f7-7ljwr", "timestamp":"2025-07-14 22:17:04.785542417 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.786 [INFO][4288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.786 [INFO][4288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.786 [INFO][4288] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.801 [INFO][4288] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" host="localhost" Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.831 [INFO][4288] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.842 [INFO][4288] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.844 [INFO][4288] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.847 [INFO][4288] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.848 [INFO][4288] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" host="localhost" Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.855 [INFO][4288] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535 Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.864 [INFO][4288] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" host="localhost" Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.876 [INFO][4288] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" host="localhost" Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.876 [INFO][4288] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" host="localhost" Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.876 [INFO][4288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:17:04.922889 containerd[1453]: 2025-07-14 22:17:04.876 [INFO][4288] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" HandleID="k8s-pod-network.40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" Workload="localhost-k8s-whisker--78b748b8f7--7ljwr-eth0" Jul 14 22:17:04.923367 containerd[1453]: 2025-07-14 22:17:04.890 [INFO][4208] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" Namespace="calico-system" Pod="whisker-78b748b8f7-7ljwr" WorkloadEndpoint="localhost-k8s-whisker--78b748b8f7--7ljwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--78b748b8f7--7ljwr-eth0", GenerateName:"whisker-78b748b8f7-", Namespace:"calico-system", SelfLink:"", UID:"a53e33ad-ef3c-465f-aaf1-2558458185ed", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 17, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78b748b8f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-78b748b8f7-7ljwr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7e2444a031f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:04.923367 containerd[1453]: 2025-07-14 22:17:04.892 [INFO][4208] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" Namespace="calico-system" Pod="whisker-78b748b8f7-7ljwr" WorkloadEndpoint="localhost-k8s-whisker--78b748b8f7--7ljwr-eth0" Jul 14 22:17:04.923367 containerd[1453]: 2025-07-14 22:17:04.892 [INFO][4208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e2444a031f ContainerID="40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" Namespace="calico-system" Pod="whisker-78b748b8f7-7ljwr" WorkloadEndpoint="localhost-k8s-whisker--78b748b8f7--7ljwr-eth0" Jul 14 22:17:04.923367 containerd[1453]: 2025-07-14 22:17:04.901 [INFO][4208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" Namespace="calico-system" Pod="whisker-78b748b8f7-7ljwr" WorkloadEndpoint="localhost-k8s-whisker--78b748b8f7--7ljwr-eth0" Jul 14 22:17:04.923367 containerd[1453]: 2025-07-14 22:17:04.902 [INFO][4208] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" Namespace="calico-system" Pod="whisker-78b748b8f7-7ljwr" WorkloadEndpoint="localhost-k8s-whisker--78b748b8f7--7ljwr-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--78b748b8f7--7ljwr-eth0", GenerateName:"whisker-78b748b8f7-", Namespace:"calico-system", SelfLink:"", UID:"a53e33ad-ef3c-465f-aaf1-2558458185ed", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 17, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78b748b8f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535", Pod:"whisker-78b748b8f7-7ljwr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7e2444a031f", MAC:"5a:03:ef:05:7a:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:04.923367 containerd[1453]: 2025-07-14 22:17:04.913 [INFO][4208] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535" Namespace="calico-system" Pod="whisker-78b748b8f7-7ljwr" WorkloadEndpoint="localhost-k8s-whisker--78b748b8f7--7ljwr-eth0" Jul 14 22:17:04.924900 containerd[1453]: time="2025-07-14T22:17:04.924864115Z" level=info msg="CreateContainer within sandbox \"3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:17:04.926646 containerd[1453]: 2025-07-14 22:17:04.809 [INFO][4292] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Jul 14 22:17:04.926646 containerd[1453]: 2025-07-14 22:17:04.810 [INFO][4292] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" iface="eth0" netns="/var/run/netns/cni-6abab25e-e28a-8a2d-adff-9e6d12090e16" Jul 14 22:17:04.926646 containerd[1453]: 2025-07-14 22:17:04.810 [INFO][4292] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" iface="eth0" netns="/var/run/netns/cni-6abab25e-e28a-8a2d-adff-9e6d12090e16" Jul 14 22:17:04.926646 containerd[1453]: 2025-07-14 22:17:04.810 [INFO][4292] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" iface="eth0" netns="/var/run/netns/cni-6abab25e-e28a-8a2d-adff-9e6d12090e16" Jul 14 22:17:04.926646 containerd[1453]: 2025-07-14 22:17:04.810 [INFO][4292] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Jul 14 22:17:04.926646 containerd[1453]: 2025-07-14 22:17:04.811 [INFO][4292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Jul 14 22:17:04.926646 containerd[1453]: 2025-07-14 22:17:04.898 [INFO][4346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" HandleID="k8s-pod-network.d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Workload="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:04.926646 containerd[1453]: 2025-07-14 22:17:04.899 [INFO][4346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:04.926646 containerd[1453]: 2025-07-14 22:17:04.899 [INFO][4346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:04.926646 containerd[1453]: 2025-07-14 22:17:04.913 [WARNING][4346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" HandleID="k8s-pod-network.d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Workload="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:04.926646 containerd[1453]: 2025-07-14 22:17:04.913 [INFO][4346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" HandleID="k8s-pod-network.d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Workload="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:04.926646 containerd[1453]: 2025-07-14 22:17:04.914 [INFO][4346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:04.926646 containerd[1453]: 2025-07-14 22:17:04.921 [INFO][4292] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Jul 14 22:17:04.927398 containerd[1453]: time="2025-07-14T22:17:04.927371423Z" level=info msg="TearDown network for sandbox \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\" successfully" Jul 14 22:17:04.927470 containerd[1453]: time="2025-07-14T22:17:04.927457299Z" level=info msg="StopPodSandbox for \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\" returns successfully" Jul 14 22:17:04.928972 kubelet[2516]: E0714 22:17:04.928548 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:17:04.930094 containerd[1453]: time="2025-07-14T22:17:04.930057557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cj6qp,Uid:240cf122-725b-4d48-a6f5-1c05c9f2102a,Namespace:kube-system,Attempt:1,}" Jul 14 22:17:04.930616 systemd[1]: run-netns-cni\x2d6abab25e\x2de28a\x2d8a2d\x2dadff\x2d9e6d12090e16.mount: Deactivated successfully. 
Jul 14 22:17:04.947124 containerd[1453]: time="2025-07-14T22:17:04.947073448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55b999998d-g2dgd,Uid:9c8f1d18-f0df-451d-a336-43d00ca10c65,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1\"" Jul 14 22:17:04.954900 containerd[1453]: time="2025-07-14T22:17:04.954357335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:17:04.954900 containerd[1453]: time="2025-07-14T22:17:04.954461918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:17:04.954900 containerd[1453]: time="2025-07-14T22:17:04.954473840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:04.954900 containerd[1453]: time="2025-07-14T22:17:04.954553965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:04.955723 containerd[1453]: time="2025-07-14T22:17:04.955691119Z" level=info msg="CreateContainer within sandbox \"3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4249d54d46ff6518d6f231a1b4712cad99f5cc8a58b983049970753bfaa3f880\"" Jul 14 22:17:04.958845 containerd[1453]: time="2025-07-14T22:17:04.956963475Z" level=info msg="StartContainer for \"4249d54d46ff6518d6f231a1b4712cad99f5cc8a58b983049970753bfaa3f880\"" Jul 14 22:17:04.977059 systemd[1]: Started cri-containerd-40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535.scope - libcontainer container 40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535. Jul 14 22:17:04.985253 systemd[1]: Started cri-containerd-4249d54d46ff6518d6f231a1b4712cad99f5cc8a58b983049970753bfaa3f880.scope - libcontainer container 4249d54d46ff6518d6f231a1b4712cad99f5cc8a58b983049970753bfaa3f880. 
Jul 14 22:17:05.002637 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:17:05.024623 containerd[1453]: time="2025-07-14T22:17:05.024563123Z" level=info msg="StartContainer for \"4249d54d46ff6518d6f231a1b4712cad99f5cc8a58b983049970753bfaa3f880\" returns successfully" Jul 14 22:17:05.051001 containerd[1453]: time="2025-07-14T22:17:05.050814567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78b748b8f7-7ljwr,Uid:a53e33ad-ef3c-465f-aaf1-2558458185ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535\"" Jul 14 22:17:05.074606 systemd-networkd[1387]: vxlan.calico: Link UP Jul 14 22:17:05.074878 systemd-networkd[1387]: vxlan.calico: Gained carrier Jul 14 22:17:05.090087 systemd-networkd[1387]: calic6fe5943f1f: Link UP Jul 14 22:17:05.090805 systemd-networkd[1387]: calic6fe5943f1f: Gained carrier Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:04.988 [INFO][4435] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0 coredns-674b8bbfcf- kube-system 240cf122-725b-4d48-a6f5-1c05c9f2102a 951 0 2025-07-14 22:16:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-cj6qp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic6fe5943f1f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-cj6qp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cj6qp-" Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:04.988 [INFO][4435] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-cj6qp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.030 [INFO][4496] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" HandleID="k8s-pod-network.7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" Workload="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.030 [INFO][4496] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" HandleID="k8s-pod-network.7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" Workload="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df010), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-cj6qp", "timestamp":"2025-07-14 22:17:05.030433456 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.030 [INFO][4496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.030 [INFO][4496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.030 [INFO][4496] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.038 [INFO][4496] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" host="localhost" Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.045 [INFO][4496] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.052 [INFO][4496] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.055 [INFO][4496] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.057 [INFO][4496] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.057 [INFO][4496] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" host="localhost" Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.062 [INFO][4496] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.068 [INFO][4496] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" host="localhost" Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.079 [INFO][4496] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" host="localhost" Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.079 [INFO][4496] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" host="localhost" Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.079 [INFO][4496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:17:05.107984 containerd[1453]: 2025-07-14 22:17:05.079 [INFO][4496] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" HandleID="k8s-pod-network.7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" Workload="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:05.108982 containerd[1453]: 2025-07-14 22:17:05.084 [INFO][4435] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-cj6qp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"240cf122-725b-4d48-a6f5-1c05c9f2102a", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-cj6qp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6fe5943f1f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:05.108982 containerd[1453]: 2025-07-14 22:17:05.085 [INFO][4435] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-cj6qp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:05.108982 containerd[1453]: 2025-07-14 22:17:05.085 [INFO][4435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6fe5943f1f ContainerID="7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-cj6qp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:05.108982 containerd[1453]: 2025-07-14 22:17:05.091 [INFO][4435] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-cj6qp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:05.108982 
containerd[1453]: 2025-07-14 22:17:05.091 [INFO][4435] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-cj6qp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"240cf122-725b-4d48-a6f5-1c05c9f2102a", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc", Pod:"coredns-674b8bbfcf-cj6qp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6fe5943f1f", MAC:"72:2c:ec:9e:8b:fc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:05.108982 containerd[1453]: 2025-07-14 22:17:05.103 [INFO][4435] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc" Namespace="kube-system" Pod="coredns-674b8bbfcf-cj6qp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:05.142773 containerd[1453]: time="2025-07-14T22:17:05.141265370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:17:05.142773 containerd[1453]: time="2025-07-14T22:17:05.142584514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:17:05.142773 containerd[1453]: time="2025-07-14T22:17:05.142600976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:05.142773 containerd[1453]: time="2025-07-14T22:17:05.142696440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:05.170955 systemd[1]: Started cri-containerd-7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc.scope - libcontainer container 7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc. Jul 14 22:17:05.182480 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:17:05.207479 containerd[1453]: time="2025-07-14T22:17:05.207449336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cj6qp,Uid:240cf122-725b-4d48-a6f5-1c05c9f2102a,Namespace:kube-system,Attempt:1,} returns sandbox id \"7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc\"" Jul 14 22:17:05.209218 kubelet[2516]: E0714 22:17:05.208307 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:17:05.214739 containerd[1453]: time="2025-07-14T22:17:05.214706915Z" level=info msg="CreateContainer within sandbox \"7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:17:05.230475 containerd[1453]: time="2025-07-14T22:17:05.230453396Z" level=info msg="CreateContainer within sandbox \"7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66c1c1f4c63fe503bcd6a3bf8fe3be65fb579076005fd799bb927fd8bedf9530\"" Jul 14 22:17:05.230982 containerd[1453]: time="2025-07-14T22:17:05.230945115Z" level=info msg="StartContainer for \"66c1c1f4c63fe503bcd6a3bf8fe3be65fb579076005fd799bb927fd8bedf9530\"" Jul 14 22:17:05.257217 systemd[1]: Started cri-containerd-66c1c1f4c63fe503bcd6a3bf8fe3be65fb579076005fd799bb927fd8bedf9530.scope - libcontainer container 66c1c1f4c63fe503bcd6a3bf8fe3be65fb579076005fd799bb927fd8bedf9530. Jul 14 22:17:05.287353 containerd[1453]: time="2025-07-14T22:17:05.287251850Z" level=info msg="StartContainer for \"66c1c1f4c63fe503bcd6a3bf8fe3be65fb579076005fd799bb927fd8bedf9530\" returns successfully" Jul 14 22:17:05.835731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071693183.mount: Deactivated successfully. 
Jul 14 22:17:05.862984 systemd-networkd[1387]: cali2019f0df849: Gained IPv6LL Jul 14 22:17:05.904613 kubelet[2516]: E0714 22:17:05.904450 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:17:05.907787 kubelet[2516]: E0714 22:17:05.906658 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:17:05.918349 kubelet[2516]: I0714 22:17:05.918049 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cj6qp" podStartSLOduration=40.918032617 podStartE2EDuration="40.918032617s" podCreationTimestamp="2025-07-14 22:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:17:05.917783025 +0000 UTC m=+47.290335921" watchObservedRunningTime="2025-07-14 22:17:05.918032617 +0000 UTC m=+47.290585513" Jul 14 22:17:05.939980 kubelet[2516]: I0714 22:17:05.939916 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nd7rw" podStartSLOduration=40.939898533 podStartE2EDuration="40.939898533s" podCreationTimestamp="2025-07-14 22:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:17:05.928687949 +0000 UTC m=+47.301240845" watchObservedRunningTime="2025-07-14 22:17:05.939898533 +0000 UTC m=+47.312451429" Jul 14 22:17:06.247067 systemd-networkd[1387]: calif10ec7d4a61: Gained IPv6LL Jul 14 22:17:06.310946 systemd-networkd[1387]: cali7e2444a031f: Gained IPv6LL Jul 14 22:17:06.374957 systemd-networkd[1387]: cali66244ef7a24: Gained IPv6LL Jul 14 22:17:06.567090 systemd-networkd[1387]: cali3a8b7d7ab80: Gained IPv6LL Jul 14 22:17:06.567587 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Jul 14 22:17:06.721113 containerd[1453]: time="2025-07-14T22:17:06.721069643Z" level=info msg="StopPodSandbox for \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\"" Jul 14 22:17:06.759563 systemd-networkd[1387]: calic6fe5943f1f: Gained IPv6LL Jul 14 22:17:06.813683 containerd[1453]: 2025-07-14 22:17:06.772 [INFO][4717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Jul 14 22:17:06.813683 containerd[1453]: 2025-07-14 22:17:06.773 [INFO][4717] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" iface="eth0" netns="/var/run/netns/cni-3a38a405-7e8d-a12e-1354-b8f295a622a6" Jul 14 22:17:06.813683 containerd[1453]: 2025-07-14 22:17:06.773 [INFO][4717] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" iface="eth0" netns="/var/run/netns/cni-3a38a405-7e8d-a12e-1354-b8f295a622a6" Jul 14 22:17:06.813683 containerd[1453]: 2025-07-14 22:17:06.774 [INFO][4717] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" iface="eth0" netns="/var/run/netns/cni-3a38a405-7e8d-a12e-1354-b8f295a622a6" Jul 14 22:17:06.813683 containerd[1453]: 2025-07-14 22:17:06.774 [INFO][4717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Jul 14 22:17:06.813683 containerd[1453]: 2025-07-14 22:17:06.774 [INFO][4717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Jul 14 22:17:06.813683 containerd[1453]: 2025-07-14 22:17:06.800 [INFO][4726] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" HandleID="k8s-pod-network.cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Workload="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:06.813683 containerd[1453]: 2025-07-14 22:17:06.801 [INFO][4726] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:06.813683 containerd[1453]: 2025-07-14 22:17:06.801 [INFO][4726] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:06.813683 containerd[1453]: 2025-07-14 22:17:06.806 [WARNING][4726] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" HandleID="k8s-pod-network.cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Workload="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:06.813683 containerd[1453]: 2025-07-14 22:17:06.806 [INFO][4726] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" HandleID="k8s-pod-network.cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Workload="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:06.813683 containerd[1453]: 2025-07-14 22:17:06.807 [INFO][4726] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:06.813683 containerd[1453]: 2025-07-14 22:17:06.810 [INFO][4717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Jul 14 22:17:06.814295 containerd[1453]: time="2025-07-14T22:17:06.814167707Z" level=info msg="TearDown network for sandbox \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\" successfully" Jul 14 22:17:06.814295 containerd[1453]: time="2025-07-14T22:17:06.814198967Z" level=info msg="StopPodSandbox for \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\" returns successfully" Jul 14 22:17:06.815279 containerd[1453]: time="2025-07-14T22:17:06.815251847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wq2kw,Uid:eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5,Namespace:calico-system,Attempt:1,}" Jul 14 22:17:06.817546 systemd[1]: run-netns-cni\x2d3a38a405\x2d7e8d\x2da12e\x2d1354\x2db8f295a622a6.mount: Deactivated successfully. 
Jul 14 22:17:06.908600 kubelet[2516]: E0714 22:17:06.908269 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:17:06.909306 kubelet[2516]: E0714 22:17:06.909223 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:17:06.981721 systemd-networkd[1387]: calia37470d634d: Link UP Jul 14 22:17:06.982634 systemd-networkd[1387]: calia37470d634d: Gained carrier Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.916 [INFO][4734] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wq2kw-eth0 csi-node-driver- calico-system eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5 989 0 2025-07-14 22:16:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wq2kw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia37470d634d [] [] }} ContainerID="cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" Namespace="calico-system" Pod="csi-node-driver-wq2kw" WorkloadEndpoint="localhost-k8s-csi--node--driver--wq2kw-" Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.916 [INFO][4734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" Namespace="calico-system" Pod="csi-node-driver-wq2kw" WorkloadEndpoint="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.946 [INFO][4748] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" HandleID="k8s-pod-network.cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" Workload="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.946 [INFO][4748] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" HandleID="k8s-pod-network.cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" Workload="localhost-k8s-csi--node--driver--wq2kw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wq2kw", "timestamp":"2025-07-14 22:17:06.946150047 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.946 [INFO][4748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.946 [INFO][4748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
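The repeated dns.go:153 "Nameserver limits exceeded" errors above come from kubelet capping a pod's resolv.conf at three nameservers and dropping the rest; here the applied line keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8. A small standalone check of the same condition, assuming the host resolv.conf path:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // kubelet's per-pod nameserver limit

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var nameservers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}

	if len(nameservers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d configured, only %v would be applied\n",
			len(nameservers), nameservers[:maxNameservers])
	} else {
		fmt.Printf("within limit: %v\n", nameservers)
	}
}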
Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.946 [INFO][4748] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.953 [INFO][4748] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" host="localhost" Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.957 [INFO][4748] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.961 [INFO][4748] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.963 [INFO][4748] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.966 [INFO][4748] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.966 [INFO][4748] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" host="localhost" Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.967 [INFO][4748] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.970 [INFO][4748] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" host="localhost" Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.975 [INFO][4748] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" host="localhost" Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.975 [INFO][4748] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" host="localhost" Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.975 [INFO][4748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:17:06.995566 containerd[1453]: 2025-07-14 22:17:06.975 [INFO][4748] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" HandleID="k8s-pod-network.cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" Workload="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:06.996305 containerd[1453]: 2025-07-14 22:17:06.979 [INFO][4734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" Namespace="calico-system" Pod="csi-node-driver-wq2kw" WorkloadEndpoint="localhost-k8s-csi--node--driver--wq2kw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wq2kw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wq2kw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia37470d634d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:06.996305 containerd[1453]: 2025-07-14 22:17:06.979 [INFO][4734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" Namespace="calico-system" Pod="csi-node-driver-wq2kw" WorkloadEndpoint="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:06.996305 containerd[1453]: 2025-07-14 22:17:06.979 [INFO][4734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia37470d634d ContainerID="cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" Namespace="calico-system" Pod="csi-node-driver-wq2kw" WorkloadEndpoint="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:06.996305 containerd[1453]: 2025-07-14 22:17:06.982 [INFO][4734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" Namespace="calico-system" Pod="csi-node-driver-wq2kw" WorkloadEndpoint="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:06.996305 containerd[1453]: 2025-07-14 22:17:06.982 [INFO][4734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" Namespace="calico-system" Pod="csi-node-driver-wq2kw" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wq2kw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wq2kw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c", Pod:"csi-node-driver-wq2kw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia37470d634d", MAC:"1e:0b:40:ce:f4:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:06.996305 containerd[1453]: 2025-07-14 22:17:06.992 [INFO][4734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c" Namespace="calico-system" Pod="csi-node-driver-wq2kw" WorkloadEndpoint="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:07.021133 containerd[1453]: time="2025-07-14T22:17:07.021056872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:17:07.021133 containerd[1453]: time="2025-07-14T22:17:07.021106638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:17:07.021133 containerd[1453]: time="2025-07-14T22:17:07.021131576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:07.021308 containerd[1453]: time="2025-07-14T22:17:07.021272227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:07.040934 systemd[1]: run-containerd-runc-k8s.io-cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c-runc.Zdft2R.mount: Deactivated successfully. 
Jul 14 22:17:07.047304 containerd[1453]: time="2025-07-14T22:17:07.047258612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:07.048003 containerd[1453]: time="2025-07-14T22:17:07.047952769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 14 22:17:07.049331 containerd[1453]: time="2025-07-14T22:17:07.049299494Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:07.049957 systemd[1]: Started cri-containerd-cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c.scope - libcontainer container cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c. Jul 14 22:17:07.051759 containerd[1453]: time="2025-07-14T22:17:07.051734656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:07.052511 containerd[1453]: time="2025-07-14T22:17:07.052459973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.30697739s" Jul 14 22:17:07.052511 containerd[1453]: time="2025-07-14T22:17:07.052502825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 22:17:07.054087 containerd[1453]: time="2025-07-14T22:17:07.054058894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 14 22:17:07.058058 containerd[1453]: time="2025-07-14T22:17:07.057883563Z" level=info msg="CreateContainer within sandbox \"a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:17:07.065061 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:17:07.079375 containerd[1453]: time="2025-07-14T22:17:07.076520749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wq2kw,Uid:eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5,Namespace:calico-system,Attempt:1,} returns sandbox id \"cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c\"" Jul 14 22:17:07.091050 containerd[1453]: time="2025-07-14T22:17:07.091004914Z" level=info msg="CreateContainer within sandbox \"a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fcead502c9544adaf61851a8a4f22094c90de0a80758d257bf41cfed1018592e\"" Jul 14 22:17:07.091872 containerd[1453]: time="2025-07-14T22:17:07.091478967Z" level=info msg="StartContainer for \"fcead502c9544adaf61851a8a4f22094c90de0a80758d257bf41cfed1018592e\"" Jul 14 22:17:07.123965 systemd[1]: Started cri-containerd-fcead502c9544adaf61851a8a4f22094c90de0a80758d257bf41cfed1018592e.scope - libcontainer container fcead502c9544adaf61851a8a4f22094c90de0a80758d257bf41cfed1018592e. 
Jul 14 22:17:07.168633 containerd[1453]: time="2025-07-14T22:17:07.168572505Z" level=info msg="StartContainer for \"fcead502c9544adaf61851a8a4f22094c90de0a80758d257bf41cfed1018592e\" returns successfully" Jul 14 22:17:07.912853 kubelet[2516]: E0714 22:17:07.912786 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:17:07.913362 kubelet[2516]: E0714 22:17:07.913081 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:17:08.359251 systemd-networkd[1387]: calia37470d634d: Gained IPv6LL Jul 14 22:17:08.722235 containerd[1453]: time="2025-07-14T22:17:08.722195568Z" level=info msg="StopPodSandbox for \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\"" Jul 14 22:17:08.770028 kubelet[2516]: I0714 22:17:08.769926 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55b999998d-8sbft" podStartSLOduration=28.461368478 podStartE2EDuration="30.769902409s" podCreationTimestamp="2025-07-14 22:16:38 +0000 UTC" firstStartedPulling="2025-07-14 22:17:04.744814343 +0000 UTC m=+46.117367239" lastFinishedPulling="2025-07-14 22:17:07.053348274 +0000 UTC m=+48.425901170" observedRunningTime="2025-07-14 22:17:07.921049313 +0000 UTC m=+49.293602209" watchObservedRunningTime="2025-07-14 22:17:08.769902409 +0000 UTC m=+50.142455305" Jul 14 22:17:08.845753 containerd[1453]: 2025-07-14 22:17:08.771 [INFO][4868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Jul 14 22:17:08.845753 containerd[1453]: 2025-07-14 22:17:08.771 [INFO][4868] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" iface="eth0" netns="/var/run/netns/cni-2de19cb6-c76e-6d59-dc65-d1bda47158eb" Jul 14 22:17:08.845753 containerd[1453]: 2025-07-14 22:17:08.771 [INFO][4868] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" iface="eth0" netns="/var/run/netns/cni-2de19cb6-c76e-6d59-dc65-d1bda47158eb" Jul 14 22:17:08.845753 containerd[1453]: 2025-07-14 22:17:08.774 [INFO][4868] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" iface="eth0" netns="/var/run/netns/cni-2de19cb6-c76e-6d59-dc65-d1bda47158eb" Jul 14 22:17:08.845753 containerd[1453]: 2025-07-14 22:17:08.774 [INFO][4868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Jul 14 22:17:08.845753 containerd[1453]: 2025-07-14 22:17:08.774 [INFO][4868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Jul 14 22:17:08.845753 containerd[1453]: 2025-07-14 22:17:08.828 [INFO][4877] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" HandleID="k8s-pod-network.95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Workload="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:08.845753 containerd[1453]: 2025-07-14 22:17:08.829 [INFO][4877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:08.845753 containerd[1453]: 2025-07-14 22:17:08.829 [INFO][4877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:08.845753 containerd[1453]: 2025-07-14 22:17:08.834 [WARNING][4877] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" HandleID="k8s-pod-network.95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Workload="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:08.845753 containerd[1453]: 2025-07-14 22:17:08.834 [INFO][4877] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" HandleID="k8s-pod-network.95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Workload="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:08.845753 containerd[1453]: 2025-07-14 22:17:08.835 [INFO][4877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:08.845753 containerd[1453]: 2025-07-14 22:17:08.839 [INFO][4868] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Jul 14 22:17:08.845753 containerd[1453]: time="2025-07-14T22:17:08.843742971Z" level=info msg="TearDown network for sandbox \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\" successfully" Jul 14 22:17:08.845753 containerd[1453]: time="2025-07-14T22:17:08.843768611Z" level=info msg="StopPodSandbox for \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\" returns successfully" Jul 14 22:17:08.845753 containerd[1453]: time="2025-07-14T22:17:08.845286744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7c28s,Uid:0b90dbe9-4134-4806-83a7-13f4785a8131,Namespace:calico-system,Attempt:1,}" Jul 14 22:17:08.848660 systemd[1]: run-netns-cni\x2d2de19cb6\x2dc76e\x2d6d59\x2ddc65\x2dd1bda47158eb.mount: Deactivated successfully. 
Jul 14 22:17:08.915743 kubelet[2516]: I0714 22:17:08.915349 2516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:17:08.980984 systemd-networkd[1387]: calid7eb0691f8e: Link UP Jul 14 22:17:08.982543 systemd-networkd[1387]: calid7eb0691f8e: Gained carrier Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.906 [INFO][4887] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--7c28s-eth0 goldmane-768f4c5c69- calico-system 0b90dbe9-4134-4806-83a7-13f4785a8131 1010 0 2025-07-14 22:16:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-7c28s eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid7eb0691f8e [] [] }} ContainerID="c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" Namespace="calico-system" Pod="goldmane-768f4c5c69-7c28s" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7c28s-" Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.907 [INFO][4887] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" Namespace="calico-system" Pod="goldmane-768f4c5c69-7c28s" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.937 [INFO][4902] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" HandleID="k8s-pod-network.c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" Workload="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.937 [INFO][4902] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" HandleID="k8s-pod-network.c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" Workload="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-7c28s", "timestamp":"2025-07-14 22:17:08.937569659 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.937 [INFO][4902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.937 [INFO][4902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.937 [INFO][4902] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.945 [INFO][4902] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" host="localhost" Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.952 [INFO][4902] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.958 [INFO][4902] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.960 [INFO][4902] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.962 [INFO][4902] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.962 [INFO][4902] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" host="localhost" Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.964 [INFO][4902] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732 Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.967 [INFO][4902] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" host="localhost" Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.972 [INFO][4902] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" host="localhost" Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.973 [INFO][4902] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" host="localhost" Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.973 [INFO][4902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:17:08.998317 containerd[1453]: 2025-07-14 22:17:08.973 [INFO][4902] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" HandleID="k8s-pod-network.c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" Workload="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:08.999956 containerd[1453]: 2025-07-14 22:17:08.977 [INFO][4887] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" Namespace="calico-system" Pod="goldmane-768f4c5c69-7c28s" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--7c28s-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"0b90dbe9-4134-4806-83a7-13f4785a8131", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-7c28s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid7eb0691f8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:08.999956 containerd[1453]: 2025-07-14 22:17:08.978 [INFO][4887] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" Namespace="calico-system" Pod="goldmane-768f4c5c69-7c28s" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:08.999956 containerd[1453]: 2025-07-14 22:17:08.978 [INFO][4887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7eb0691f8e ContainerID="c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" Namespace="calico-system" Pod="goldmane-768f4c5c69-7c28s" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:08.999956 containerd[1453]: 2025-07-14 22:17:08.982 [INFO][4887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" Namespace="calico-system" Pod="goldmane-768f4c5c69-7c28s" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:08.999956 containerd[1453]: 2025-07-14 22:17:08.984 [INFO][4887] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" Namespace="calico-system" Pod="goldmane-768f4c5c69-7c28s" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--7c28s-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"0b90dbe9-4134-4806-83a7-13f4785a8131", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732", Pod:"goldmane-768f4c5c69-7c28s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid7eb0691f8e", MAC:"3a:e6:f8:3a:5c:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:08.999956 containerd[1453]: 2025-07-14 22:17:08.994 [INFO][4887] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732" Namespace="calico-system" Pod="goldmane-768f4c5c69-7c28s" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:09.031262 containerd[1453]: time="2025-07-14T22:17:09.031051371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:17:09.031420 containerd[1453]: time="2025-07-14T22:17:09.031273938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:17:09.031420 containerd[1453]: time="2025-07-14T22:17:09.031286092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:09.031969 containerd[1453]: time="2025-07-14T22:17:09.031731991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:17:09.061043 systemd[1]: Started cri-containerd-c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732.scope - libcontainer container c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732. 
Jul 14 22:17:09.073856 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:17:09.099480 containerd[1453]: time="2025-07-14T22:17:09.099421289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-7c28s,Uid:0b90dbe9-4134-4806-83a7-13f4785a8131,Namespace:calico-system,Attempt:1,} returns sandbox id \"c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732\"" Jul 14 22:17:09.465645 containerd[1453]: time="2025-07-14T22:17:09.465571512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:09.466618 containerd[1453]: time="2025-07-14T22:17:09.466517392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 14 22:17:09.467834 containerd[1453]: time="2025-07-14T22:17:09.467789239Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:09.469903 containerd[1453]: time="2025-07-14T22:17:09.469865796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:09.470448 containerd[1453]: time="2025-07-14T22:17:09.470422187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.416332884s" Jul 14 22:17:09.470514 containerd[1453]: time="2025-07-14T22:17:09.470450381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 14 22:17:09.471569 containerd[1453]: time="2025-07-14T22:17:09.471538274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 22:17:09.481938 containerd[1453]: time="2025-07-14T22:17:09.481889095Z" level=info msg="CreateContainer within sandbox \"fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 14 22:17:09.497694 containerd[1453]: time="2025-07-14T22:17:09.497651700Z" level=info msg="CreateContainer within sandbox \"fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a5e5aec2ba6f681f87482740497db87776cf9e202b23343dbfee6b678c37180e\"" Jul 14 22:17:09.498281 containerd[1453]: time="2025-07-14T22:17:09.498221256Z" level=info msg="StartContainer for \"a5e5aec2ba6f681f87482740497db87776cf9e202b23343dbfee6b678c37180e\"" Jul 14 22:17:09.532207 systemd[1]: Started cri-containerd-a5e5aec2ba6f681f87482740497db87776cf9e202b23343dbfee6b678c37180e.scope - libcontainer container a5e5aec2ba6f681f87482740497db87776cf9e202b23343dbfee6b678c37180e. 
Jul 14 22:17:09.573741 containerd[1453]: time="2025-07-14T22:17:09.573696239Z" level=info msg="StartContainer for \"a5e5aec2ba6f681f87482740497db87776cf9e202b23343dbfee6b678c37180e\" returns successfully" Jul 14 22:17:09.825595 containerd[1453]: time="2025-07-14T22:17:09.825436247Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:09.826530 containerd[1453]: time="2025-07-14T22:17:09.826489954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 14 22:17:09.828367 containerd[1453]: time="2025-07-14T22:17:09.828328623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 356.761504ms" Jul 14 22:17:09.828367 containerd[1453]: time="2025-07-14T22:17:09.828363180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 22:17:09.829273 containerd[1453]: time="2025-07-14T22:17:09.829230248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 14 22:17:09.833446 containerd[1453]: time="2025-07-14T22:17:09.833410434Z" level=info msg="CreateContainer within sandbox \"0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:17:09.847604 containerd[1453]: time="2025-07-14T22:17:09.847529977Z" level=info msg="CreateContainer within sandbox \"0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2b02a7b8ee4dc2fafc713b01b27794108800ba3b742bedc59eac09c6acb47ec8\"" Jul 14 22:17:09.848139 containerd[1453]: time="2025-07-14T22:17:09.848091467Z" level=info msg="StartContainer for \"2b02a7b8ee4dc2fafc713b01b27794108800ba3b742bedc59eac09c6acb47ec8\"" Jul 14 22:17:09.878979 systemd[1]: Started cri-containerd-2b02a7b8ee4dc2fafc713b01b27794108800ba3b742bedc59eac09c6acb47ec8.scope - libcontainer container 2b02a7b8ee4dc2fafc713b01b27794108800ba3b742bedc59eac09c6acb47ec8. 
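The second pull of ghcr.io/flatcar/calico/apiserver:v3.30.2 above completes in 356.761504ms with only 77 bytes read, versus roughly 2.3s and ~47 MB the first time: the layers are already in containerd's content store, so only the manifest needs to be re-resolved, which is also why the event is an ImageUpdate rather than an ImageCreate. A hedged sketch of checking for a locally stored image before pulling, using the containerd Go client with the same assumed socket and namespace as earlier:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.2"

	img, err := client.GetImage(ctx, ref)
	switch {
	case err == nil:
		// Image metadata is already present; a pull would mostly re-fetch the manifest.
		fmt.Println("already present:", img.Name())
	case errdefs.IsNotFound(err):
		if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
			log.Fatal(err)
		}
		fmt.Println("pulled:", ref)
	default:
		log.Fatal(err)
	}
}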
Jul 14 22:17:09.928882 containerd[1453]: time="2025-07-14T22:17:09.927899640Z" level=info msg="StartContainer for \"2b02a7b8ee4dc2fafc713b01b27794108800ba3b742bedc59eac09c6acb47ec8\" returns successfully" Jul 14 22:17:09.939447 kubelet[2516]: I0714 22:17:09.939380 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74c5db56dc-m6q26" podStartSLOduration=24.334394704 podStartE2EDuration="28.939359614s" podCreationTimestamp="2025-07-14 22:16:41 +0000 UTC" firstStartedPulling="2025-07-14 22:17:04.866421442 +0000 UTC m=+46.238974338" lastFinishedPulling="2025-07-14 22:17:09.471386352 +0000 UTC m=+50.843939248" observedRunningTime="2025-07-14 22:17:09.93920718 +0000 UTC m=+51.311760106" watchObservedRunningTime="2025-07-14 22:17:09.939359614 +0000 UTC m=+51.311912510" Jul 14 22:17:09.977504 kubelet[2516]: I0714 22:17:09.977466 2516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:17:10.345296 systemd-networkd[1387]: calid7eb0691f8e: Gained IPv6LL Jul 14 22:17:10.944923 kubelet[2516]: I0714 22:17:10.943963 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55b999998d-g2dgd" podStartSLOduration=28.063315226 podStartE2EDuration="32.943943056s" podCreationTimestamp="2025-07-14 22:16:38 +0000 UTC" firstStartedPulling="2025-07-14 22:17:04.948412222 +0000 UTC m=+46.320965108" lastFinishedPulling="2025-07-14 22:17:09.829040042 +0000 UTC m=+51.201592938" observedRunningTime="2025-07-14 22:17:10.943210688 +0000 UTC m=+52.315763594" watchObservedRunningTime="2025-07-14 22:17:10.943943056 +0000 UTC m=+52.316495982" Jul 14 22:17:11.567490 containerd[1453]: time="2025-07-14T22:17:11.567418263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:11.568236 containerd[1453]: time="2025-07-14T22:17:11.568190066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 14 22:17:11.569569 containerd[1453]: time="2025-07-14T22:17:11.569539620Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:11.572838 containerd[1453]: time="2025-07-14T22:17:11.572271422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:11.574500 containerd[1453]: time="2025-07-14T22:17:11.573337323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.74406943s" Jul 14 22:17:11.574500 containerd[1453]: time="2025-07-14T22:17:11.573658931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 14 22:17:11.577810 containerd[1453]: time="2025-07-14T22:17:11.577764523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 14 22:17:11.581923 containerd[1453]: 
time="2025-07-14T22:17:11.581833125Z" level=info msg="CreateContainer within sandbox \"40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 14 22:17:11.600948 containerd[1453]: time="2025-07-14T22:17:11.600899291Z" level=info msg="CreateContainer within sandbox \"40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a51794517db8a1b34b0ef22cb66434a98c9e3e919e1f155636c9eaca505e8133\"" Jul 14 22:17:11.601936 containerd[1453]: time="2025-07-14T22:17:11.601897300Z" level=info msg="StartContainer for \"a51794517db8a1b34b0ef22cb66434a98c9e3e919e1f155636c9eaca505e8133\"" Jul 14 22:17:11.634970 systemd[1]: Started cri-containerd-a51794517db8a1b34b0ef22cb66434a98c9e3e919e1f155636c9eaca505e8133.scope - libcontainer container a51794517db8a1b34b0ef22cb66434a98c9e3e919e1f155636c9eaca505e8133. Jul 14 22:17:11.680142 containerd[1453]: time="2025-07-14T22:17:11.680092887Z" level=info msg="StartContainer for \"a51794517db8a1b34b0ef22cb66434a98c9e3e919e1f155636c9eaca505e8133\" returns successfully" Jul 14 22:17:11.927321 kubelet[2516]: I0714 22:17:11.927292 2516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:17:13.225643 containerd[1453]: time="2025-07-14T22:17:13.225575607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:13.226485 containerd[1453]: time="2025-07-14T22:17:13.226353932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 14 22:17:13.227312 containerd[1453]: time="2025-07-14T22:17:13.227282446Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:13.230121 containerd[1453]: time="2025-07-14T22:17:13.230081141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:13.230580 containerd[1453]: time="2025-07-14T22:17:13.230552095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.652746483s" Jul 14 22:17:13.230659 containerd[1453]: time="2025-07-14T22:17:13.230582484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 14 22:17:13.231987 containerd[1453]: time="2025-07-14T22:17:13.231761779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 14 22:17:13.236008 containerd[1453]: time="2025-07-14T22:17:13.235969612Z" level=info msg="CreateContainer within sandbox \"cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 14 22:17:13.253633 containerd[1453]: time="2025-07-14T22:17:13.253578423Z" level=info msg="CreateContainer within sandbox \"cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dd708e786a1fc6a44816434d49d04a8277f964b4c0e83a281ec8ae1b977bbcfd\"" Jul 14 22:17:13.254131 containerd[1453]: time="2025-07-14T22:17:13.254106367Z" level=info msg="StartContainer for \"dd708e786a1fc6a44816434d49d04a8277f964b4c0e83a281ec8ae1b977bbcfd\"" Jul 14 22:17:13.293999 systemd[1]: Started cri-containerd-dd708e786a1fc6a44816434d49d04a8277f964b4c0e83a281ec8ae1b977bbcfd.scope - libcontainer container dd708e786a1fc6a44816434d49d04a8277f964b4c0e83a281ec8ae1b977bbcfd. Jul 14 22:17:13.327150 containerd[1453]: time="2025-07-14T22:17:13.327068779Z" level=info msg="StartContainer for \"dd708e786a1fc6a44816434d49d04a8277f964b4c0e83a281ec8ae1b977bbcfd\" returns successfully" Jul 14 22:17:17.379421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount762507230.mount: Deactivated successfully. Jul 14 22:17:18.700258 containerd[1453]: time="2025-07-14T22:17:18.700211132Z" level=info msg="StopPodSandbox for \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\"" Jul 14 22:17:18.776501 containerd[1453]: 2025-07-14 22:17:18.735 [WARNING][5226] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0", GenerateName:"calico-apiserver-55b999998d-", Namespace:"calico-apiserver", SelfLink:"", UID:"57e67ca6-c616-4a8d-8e63-8c17097d1b86", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b999998d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856", Pod:"calico-apiserver-55b999998d-8sbft", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66244ef7a24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:18.776501 containerd[1453]: 2025-07-14 22:17:18.735 [INFO][5226] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Jul 14 22:17:18.776501 containerd[1453]: 2025-07-14 22:17:18.735 [INFO][5226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" iface="eth0" netns="" Jul 14 22:17:18.776501 containerd[1453]: 2025-07-14 22:17:18.735 [INFO][5226] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Jul 14 22:17:18.776501 containerd[1453]: 2025-07-14 22:17:18.735 [INFO][5226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Jul 14 22:17:18.776501 containerd[1453]: 2025-07-14 22:17:18.762 [INFO][5237] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" HandleID="k8s-pod-network.1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Workload="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:18.776501 containerd[1453]: 2025-07-14 22:17:18.762 [INFO][5237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:18.776501 containerd[1453]: 2025-07-14 22:17:18.762 [INFO][5237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:18.776501 containerd[1453]: 2025-07-14 22:17:18.767 [WARNING][5237] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" HandleID="k8s-pod-network.1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Workload="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:18.776501 containerd[1453]: 2025-07-14 22:17:18.767 [INFO][5237] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" HandleID="k8s-pod-network.1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Workload="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:18.776501 containerd[1453]: 2025-07-14 22:17:18.769 [INFO][5237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:18.776501 containerd[1453]: 2025-07-14 22:17:18.773 [INFO][5226] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Jul 14 22:17:18.777073 containerd[1453]: time="2025-07-14T22:17:18.776531911Z" level=info msg="TearDown network for sandbox \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\" successfully" Jul 14 22:17:18.777073 containerd[1453]: time="2025-07-14T22:17:18.776556157Z" level=info msg="StopPodSandbox for \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\" returns successfully" Jul 14 22:17:18.777073 containerd[1453]: time="2025-07-14T22:17:18.777050695Z" level=info msg="RemovePodSandbox for \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\"" Jul 14 22:17:18.779318 containerd[1453]: time="2025-07-14T22:17:18.779274147Z" level=info msg="Forcibly stopping sandbox \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\"" Jul 14 22:17:18.999680 containerd[1453]: 2025-07-14 22:17:18.851 [WARNING][5255] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0", GenerateName:"calico-apiserver-55b999998d-", Namespace:"calico-apiserver", SelfLink:"", UID:"57e67ca6-c616-4a8d-8e63-8c17097d1b86", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b999998d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6deca8b1ccb1c04f6d31cb44c60a362e400adf9843cdf21b0201b71c1074856", Pod:"calico-apiserver-55b999998d-8sbft", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66244ef7a24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:18.999680 containerd[1453]: 2025-07-14 22:17:18.851 [INFO][5255] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Jul 14 22:17:18.999680 containerd[1453]: 2025-07-14 22:17:18.851 [INFO][5255] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" iface="eth0" netns="" Jul 14 22:17:18.999680 containerd[1453]: 2025-07-14 22:17:18.851 [INFO][5255] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Jul 14 22:17:18.999680 containerd[1453]: 2025-07-14 22:17:18.851 [INFO][5255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Jul 14 22:17:18.999680 containerd[1453]: 2025-07-14 22:17:18.872 [INFO][5264] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" HandleID="k8s-pod-network.1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Workload="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:18.999680 containerd[1453]: 2025-07-14 22:17:18.872 [INFO][5264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:18.999680 containerd[1453]: 2025-07-14 22:17:18.872 [INFO][5264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:18.999680 containerd[1453]: 2025-07-14 22:17:18.968 [WARNING][5264] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" HandleID="k8s-pod-network.1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Workload="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:18.999680 containerd[1453]: 2025-07-14 22:17:18.969 [INFO][5264] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" HandleID="k8s-pod-network.1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Workload="localhost-k8s-calico--apiserver--55b999998d--8sbft-eth0" Jul 14 22:17:18.999680 containerd[1453]: 2025-07-14 22:17:18.975 [INFO][5264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:18.999680 containerd[1453]: 2025-07-14 22:17:18.986 [INFO][5255] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d" Jul 14 22:17:18.999680 containerd[1453]: time="2025-07-14T22:17:18.999628856Z" level=info msg="TearDown network for sandbox \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\" successfully" Jul 14 22:17:19.157557 containerd[1453]: time="2025-07-14T22:17:19.157351365Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:17:19.158847 containerd[1453]: time="2025-07-14T22:17:19.158705791Z" level=info msg="RemovePodSandbox \"1410ecc07f89bba239375a2e87f6ebb37caac8459e21827b8e7dc2af0667974d\" returns successfully" Jul 14 22:17:19.159976 containerd[1453]: time="2025-07-14T22:17:19.159951768Z" level=info msg="StopPodSandbox for \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\"" Jul 14 22:17:19.250391 containerd[1453]: 2025-07-14 22:17:19.208 [WARNING][5291] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0", GenerateName:"calico-kube-controllers-74c5db56dc-", Namespace:"calico-system", SelfLink:"", UID:"85e42d64-4a00-42a5-acc0-966557a6a6b9", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c5db56dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03", Pod:"calico-kube-controllers-74c5db56dc-m6q26", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif10ec7d4a61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:19.250391 containerd[1453]: 2025-07-14 22:17:19.208 [INFO][5291] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Jul 14 22:17:19.250391 containerd[1453]: 2025-07-14 22:17:19.208 [INFO][5291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" iface="eth0" netns="" Jul 14 22:17:19.250391 containerd[1453]: 2025-07-14 22:17:19.208 [INFO][5291] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Jul 14 22:17:19.250391 containerd[1453]: 2025-07-14 22:17:19.208 [INFO][5291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Jul 14 22:17:19.250391 containerd[1453]: 2025-07-14 22:17:19.232 [INFO][5300] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" HandleID="k8s-pod-network.6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Workload="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:19.250391 containerd[1453]: 2025-07-14 22:17:19.232 [INFO][5300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:19.250391 containerd[1453]: 2025-07-14 22:17:19.232 [INFO][5300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:19.250391 containerd[1453]: 2025-07-14 22:17:19.239 [WARNING][5300] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" HandleID="k8s-pod-network.6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Workload="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:19.250391 containerd[1453]: 2025-07-14 22:17:19.240 [INFO][5300] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" HandleID="k8s-pod-network.6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Workload="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:19.250391 containerd[1453]: 2025-07-14 22:17:19.241 [INFO][5300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:19.250391 containerd[1453]: 2025-07-14 22:17:19.246 [INFO][5291] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Jul 14 22:17:19.250391 containerd[1453]: time="2025-07-14T22:17:19.250233757Z" level=info msg="TearDown network for sandbox \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\" successfully" Jul 14 22:17:19.250391 containerd[1453]: time="2025-07-14T22:17:19.250272261Z" level=info msg="StopPodSandbox for \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\" returns successfully" Jul 14 22:17:19.250964 containerd[1453]: time="2025-07-14T22:17:19.250737072Z" level=info msg="RemovePodSandbox for \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\"" Jul 14 22:17:19.250964 containerd[1453]: time="2025-07-14T22:17:19.250758333Z" level=info msg="Forcibly stopping sandbox \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\"" Jul 14 22:17:19.340561 containerd[1453]: 2025-07-14 22:17:19.294 [WARNING][5317] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0", GenerateName:"calico-kube-controllers-74c5db56dc-", Namespace:"calico-system", SelfLink:"", UID:"85e42d64-4a00-42a5-acc0-966557a6a6b9", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74c5db56dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd9df80e5765f532791e44d6d3a146482080874202205ff196ac6c199bd80e03", Pod:"calico-kube-controllers-74c5db56dc-m6q26", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif10ec7d4a61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:19.340561 containerd[1453]: 2025-07-14 22:17:19.295 [INFO][5317] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Jul 14 22:17:19.340561 containerd[1453]: 2025-07-14 22:17:19.295 [INFO][5317] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" iface="eth0" netns="" Jul 14 22:17:19.340561 containerd[1453]: 2025-07-14 22:17:19.295 [INFO][5317] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Jul 14 22:17:19.340561 containerd[1453]: 2025-07-14 22:17:19.295 [INFO][5317] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Jul 14 22:17:19.340561 containerd[1453]: 2025-07-14 22:17:19.322 [INFO][5326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" HandleID="k8s-pod-network.6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Workload="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:19.340561 containerd[1453]: 2025-07-14 22:17:19.322 [INFO][5326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:19.340561 containerd[1453]: 2025-07-14 22:17:19.322 [INFO][5326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:19.340561 containerd[1453]: 2025-07-14 22:17:19.329 [WARNING][5326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" HandleID="k8s-pod-network.6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Workload="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:19.340561 containerd[1453]: 2025-07-14 22:17:19.330 [INFO][5326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" HandleID="k8s-pod-network.6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Workload="localhost-k8s-calico--kube--controllers--74c5db56dc--m6q26-eth0" Jul 14 22:17:19.340561 containerd[1453]: 2025-07-14 22:17:19.331 [INFO][5326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:19.340561 containerd[1453]: 2025-07-14 22:17:19.335 [INFO][5317] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee" Jul 14 22:17:19.341667 containerd[1453]: time="2025-07-14T22:17:19.341378521Z" level=info msg="TearDown network for sandbox \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\" successfully" Jul 14 22:17:19.909423 containerd[1453]: time="2025-07-14T22:17:19.909357514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:17:19.909879 containerd[1453]: time="2025-07-14T22:17:19.909452657Z" level=info msg="RemovePodSandbox \"6830cf09639e1b574651394a424fd0848b429162023ab6019d232da68a276dee\" returns successfully" Jul 14 22:17:19.910023 containerd[1453]: time="2025-07-14T22:17:19.909992281Z" level=info msg="StopPodSandbox for \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\"" Jul 14 22:17:19.980000 containerd[1453]: 2025-07-14 22:17:19.945 [WARNING][5344] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0", GenerateName:"calico-apiserver-55b999998d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c8f1d18-f0df-451d-a336-43d00ca10c65", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b999998d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1", Pod:"calico-apiserver-55b999998d-g2dgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a8b7d7ab80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:19.980000 containerd[1453]: 2025-07-14 22:17:19.945 [INFO][5344] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Jul 14 22:17:19.980000 containerd[1453]: 2025-07-14 22:17:19.945 [INFO][5344] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" iface="eth0" netns="" Jul 14 22:17:19.980000 containerd[1453]: 2025-07-14 22:17:19.946 [INFO][5344] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Jul 14 22:17:19.980000 containerd[1453]: 2025-07-14 22:17:19.946 [INFO][5344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Jul 14 22:17:19.980000 containerd[1453]: 2025-07-14 22:17:19.966 [INFO][5353] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" HandleID="k8s-pod-network.59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Workload="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:19.980000 containerd[1453]: 2025-07-14 22:17:19.966 [INFO][5353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:19.980000 containerd[1453]: 2025-07-14 22:17:19.967 [INFO][5353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:19.980000 containerd[1453]: 2025-07-14 22:17:19.973 [WARNING][5353] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" HandleID="k8s-pod-network.59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Workload="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:19.980000 containerd[1453]: 2025-07-14 22:17:19.973 [INFO][5353] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" HandleID="k8s-pod-network.59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Workload="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:19.980000 containerd[1453]: 2025-07-14 22:17:19.974 [INFO][5353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:19.980000 containerd[1453]: 2025-07-14 22:17:19.977 [INFO][5344] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Jul 14 22:17:19.980571 containerd[1453]: time="2025-07-14T22:17:19.980017621Z" level=info msg="TearDown network for sandbox \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\" successfully" Jul 14 22:17:19.980571 containerd[1453]: time="2025-07-14T22:17:19.980041316Z" level=info msg="StopPodSandbox for \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\" returns successfully" Jul 14 22:17:19.980571 containerd[1453]: time="2025-07-14T22:17:19.980490617Z" level=info msg="RemovePodSandbox for \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\"" Jul 14 22:17:19.980571 containerd[1453]: time="2025-07-14T22:17:19.980513331Z" level=info msg="Forcibly stopping sandbox \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\"" Jul 14 22:17:20.042794 containerd[1453]: 2025-07-14 22:17:20.012 [WARNING][5371] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0", GenerateName:"calico-apiserver-55b999998d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c8f1d18-f0df-451d-a336-43d00ca10c65", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55b999998d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0326a9134043ae5a5e506bd9f2be9f19f14f9eef157d7d4ff4362f6602f630a1", Pod:"calico-apiserver-55b999998d-g2dgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a8b7d7ab80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:20.042794 containerd[1453]: 2025-07-14 22:17:20.012 [INFO][5371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Jul 14 22:17:20.042794 containerd[1453]: 2025-07-14 22:17:20.012 [INFO][5371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" iface="eth0" netns="" Jul 14 22:17:20.042794 containerd[1453]: 2025-07-14 22:17:20.012 [INFO][5371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Jul 14 22:17:20.042794 containerd[1453]: 2025-07-14 22:17:20.012 [INFO][5371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Jul 14 22:17:20.042794 containerd[1453]: 2025-07-14 22:17:20.031 [INFO][5379] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" HandleID="k8s-pod-network.59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Workload="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:20.042794 containerd[1453]: 2025-07-14 22:17:20.031 [INFO][5379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:20.042794 containerd[1453]: 2025-07-14 22:17:20.031 [INFO][5379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:20.042794 containerd[1453]: 2025-07-14 22:17:20.036 [WARNING][5379] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" HandleID="k8s-pod-network.59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Workload="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:20.042794 containerd[1453]: 2025-07-14 22:17:20.036 [INFO][5379] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" HandleID="k8s-pod-network.59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Workload="localhost-k8s-calico--apiserver--55b999998d--g2dgd-eth0" Jul 14 22:17:20.042794 containerd[1453]: 2025-07-14 22:17:20.037 [INFO][5379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:20.042794 containerd[1453]: 2025-07-14 22:17:20.040 [INFO][5371] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf" Jul 14 22:17:20.043383 containerd[1453]: time="2025-07-14T22:17:20.042860210Z" level=info msg="TearDown network for sandbox \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\" successfully" Jul 14 22:17:20.172215 containerd[1453]: time="2025-07-14T22:17:20.171600990Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:17:20.172215 containerd[1453]: time="2025-07-14T22:17:20.171685963Z" level=info msg="RemovePodSandbox \"59019d48c6c037707ed09f9d43c00436741ae060832922652257beaeb87245cf\" returns successfully" Jul 14 22:17:20.172377 containerd[1453]: time="2025-07-14T22:17:20.172264160Z" level=info msg="StopPodSandbox for \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\"" Jul 14 22:17:20.275635 containerd[1453]: 2025-07-14 22:17:20.208 [WARNING][5396] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" WorkloadEndpoint="localhost-k8s-whisker--f5cc46c99--xrsn7-eth0" Jul 14 22:17:20.275635 containerd[1453]: 2025-07-14 22:17:20.208 [INFO][5396] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Jul 14 22:17:20.275635 containerd[1453]: 2025-07-14 22:17:20.208 [INFO][5396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" iface="eth0" netns="" Jul 14 22:17:20.275635 containerd[1453]: 2025-07-14 22:17:20.208 [INFO][5396] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Jul 14 22:17:20.275635 containerd[1453]: 2025-07-14 22:17:20.208 [INFO][5396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Jul 14 22:17:20.275635 containerd[1453]: 2025-07-14 22:17:20.263 [INFO][5405] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" HandleID="k8s-pod-network.45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Workload="localhost-k8s-whisker--f5cc46c99--xrsn7-eth0" Jul 14 22:17:20.275635 containerd[1453]: 2025-07-14 22:17:20.263 [INFO][5405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:20.275635 containerd[1453]: 2025-07-14 22:17:20.263 [INFO][5405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:20.275635 containerd[1453]: 2025-07-14 22:17:20.269 [WARNING][5405] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" HandleID="k8s-pod-network.45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Workload="localhost-k8s-whisker--f5cc46c99--xrsn7-eth0" Jul 14 22:17:20.275635 containerd[1453]: 2025-07-14 22:17:20.269 [INFO][5405] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" HandleID="k8s-pod-network.45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Workload="localhost-k8s-whisker--f5cc46c99--xrsn7-eth0" Jul 14 22:17:20.275635 containerd[1453]: 2025-07-14 22:17:20.270 [INFO][5405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:20.275635 containerd[1453]: 2025-07-14 22:17:20.272 [INFO][5396] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Jul 14 22:17:20.276702 containerd[1453]: time="2025-07-14T22:17:20.275667144Z" level=info msg="TearDown network for sandbox \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\" successfully" Jul 14 22:17:20.276702 containerd[1453]: time="2025-07-14T22:17:20.275696981Z" level=info msg="StopPodSandbox for \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\" returns successfully" Jul 14 22:17:20.276702 containerd[1453]: time="2025-07-14T22:17:20.276271361Z" level=info msg="RemovePodSandbox for \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\"" Jul 14 22:17:20.276702 containerd[1453]: time="2025-07-14T22:17:20.276307781Z" level=info msg="Forcibly stopping sandbox \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\"" Jul 14 22:17:20.352358 containerd[1453]: 2025-07-14 22:17:20.311 [WARNING][5427] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" WorkloadEndpoint="localhost-k8s-whisker--f5cc46c99--xrsn7-eth0" Jul 14 22:17:20.352358 containerd[1453]: 2025-07-14 22:17:20.311 [INFO][5427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Jul 14 22:17:20.352358 containerd[1453]: 2025-07-14 22:17:20.311 [INFO][5427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" iface="eth0" netns="" Jul 14 22:17:20.352358 containerd[1453]: 2025-07-14 22:17:20.311 [INFO][5427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Jul 14 22:17:20.352358 containerd[1453]: 2025-07-14 22:17:20.311 [INFO][5427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Jul 14 22:17:20.352358 containerd[1453]: 2025-07-14 22:17:20.334 [INFO][5436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" HandleID="k8s-pod-network.45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Workload="localhost-k8s-whisker--f5cc46c99--xrsn7-eth0" Jul 14 22:17:20.352358 containerd[1453]: 2025-07-14 22:17:20.334 [INFO][5436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:20.352358 containerd[1453]: 2025-07-14 22:17:20.335 [INFO][5436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:20.352358 containerd[1453]: 2025-07-14 22:17:20.342 [WARNING][5436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" HandleID="k8s-pod-network.45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Workload="localhost-k8s-whisker--f5cc46c99--xrsn7-eth0" Jul 14 22:17:20.352358 containerd[1453]: 2025-07-14 22:17:20.343 [INFO][5436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" HandleID="k8s-pod-network.45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Workload="localhost-k8s-whisker--f5cc46c99--xrsn7-eth0" Jul 14 22:17:20.352358 containerd[1453]: 2025-07-14 22:17:20.344 [INFO][5436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:20.352358 containerd[1453]: 2025-07-14 22:17:20.348 [INFO][5427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246" Jul 14 22:17:20.352900 containerd[1453]: time="2025-07-14T22:17:20.352376104Z" level=info msg="TearDown network for sandbox \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\" successfully" Jul 14 22:17:20.674360 containerd[1453]: time="2025-07-14T22:17:20.674313919Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:17:20.674512 containerd[1453]: time="2025-07-14T22:17:20.674402038Z" level=info msg="RemovePodSandbox \"45b3b1c86cac12e597fbd76141b672927ec9835988b783cad6cabe5027db7246\" returns successfully" Jul 14 22:17:20.674957 containerd[1453]: time="2025-07-14T22:17:20.674927814Z" level=info msg="StopPodSandbox for \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\"" Jul 14 22:17:20.747020 containerd[1453]: 2025-07-14 22:17:20.712 [WARNING][5454] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6", Pod:"coredns-674b8bbfcf-nd7rw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2019f0df849", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:20.747020 containerd[1453]: 2025-07-14 22:17:20.712 [INFO][5454] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Jul 14 22:17:20.747020 containerd[1453]: 2025-07-14 22:17:20.712 [INFO][5454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" iface="eth0" netns="" Jul 14 22:17:20.747020 containerd[1453]: 2025-07-14 22:17:20.712 [INFO][5454] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Jul 14 22:17:20.747020 containerd[1453]: 2025-07-14 22:17:20.712 [INFO][5454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Jul 14 22:17:20.747020 containerd[1453]: 2025-07-14 22:17:20.734 [INFO][5463] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" HandleID="k8s-pod-network.4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Workload="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:20.747020 containerd[1453]: 2025-07-14 22:17:20.734 [INFO][5463] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:20.747020 containerd[1453]: 2025-07-14 22:17:20.734 [INFO][5463] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:17:20.747020 containerd[1453]: 2025-07-14 22:17:20.740 [WARNING][5463] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" HandleID="k8s-pod-network.4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Workload="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:20.747020 containerd[1453]: 2025-07-14 22:17:20.740 [INFO][5463] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" HandleID="k8s-pod-network.4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Workload="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:20.747020 containerd[1453]: 2025-07-14 22:17:20.741 [INFO][5463] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:20.747020 containerd[1453]: 2025-07-14 22:17:20.743 [INFO][5454] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Jul 14 22:17:20.747579 containerd[1453]: time="2025-07-14T22:17:20.747067805Z" level=info msg="TearDown network for sandbox \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\" successfully" Jul 14 22:17:20.747579 containerd[1453]: time="2025-07-14T22:17:20.747099135Z" level=info msg="StopPodSandbox for \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\" returns successfully" Jul 14 22:17:20.748136 containerd[1453]: time="2025-07-14T22:17:20.747808864Z" level=info msg="RemovePodSandbox for \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\"" Jul 14 22:17:20.748136 containerd[1453]: time="2025-07-14T22:17:20.747866013Z" level=info msg="Forcibly stopping sandbox \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\"" Jul 14 22:17:20.762067 containerd[1453]: time="2025-07-14T22:17:20.761498943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:20.790294 containerd[1453]: time="2025-07-14T22:17:20.790220199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 14 22:17:20.809853 containerd[1453]: time="2025-07-14T22:17:20.809694245Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:20.858185 containerd[1453]: time="2025-07-14T22:17:20.858121040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:20.859056 containerd[1453]: time="2025-07-14T22:17:20.859018800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 7.627221192s" Jul 14 22:17:20.859206 containerd[1453]: time="2025-07-14T22:17:20.859063314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 14 
22:17:20.860193 containerd[1453]: time="2025-07-14T22:17:20.860155256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 14 22:17:20.969887 containerd[1453]: 2025-07-14 22:17:20.780 [WARNING][5481] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2f2c7ddb-17ed-4c7d-97db-2bcad0e280dc", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3fde0d00b9f49ac19a5cc2c24cf93cf8057f13167243b3bbca9b5c29316c9be6", Pod:"coredns-674b8bbfcf-nd7rw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2019f0df849", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:20.969887 containerd[1453]: 2025-07-14 22:17:20.780 [INFO][5481] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Jul 14 22:17:20.969887 containerd[1453]: 2025-07-14 22:17:20.780 [INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" iface="eth0" netns="" Jul 14 22:17:20.969887 containerd[1453]: 2025-07-14 22:17:20.780 [INFO][5481] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Jul 14 22:17:20.969887 containerd[1453]: 2025-07-14 22:17:20.780 [INFO][5481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Jul 14 22:17:20.969887 containerd[1453]: 2025-07-14 22:17:20.804 [INFO][5489] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" HandleID="k8s-pod-network.4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Workload="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:20.969887 containerd[1453]: 2025-07-14 22:17:20.805 [INFO][5489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:20.969887 containerd[1453]: 2025-07-14 22:17:20.805 [INFO][5489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:20.969887 containerd[1453]: 2025-07-14 22:17:20.918 [WARNING][5489] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" HandleID="k8s-pod-network.4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Workload="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:20.969887 containerd[1453]: 2025-07-14 22:17:20.918 [INFO][5489] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" HandleID="k8s-pod-network.4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Workload="localhost-k8s-coredns--674b8bbfcf--nd7rw-eth0" Jul 14 22:17:20.969887 containerd[1453]: 2025-07-14 22:17:20.963 [INFO][5489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:20.969887 containerd[1453]: 2025-07-14 22:17:20.965 [INFO][5481] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587" Jul 14 22:17:20.969887 containerd[1453]: time="2025-07-14T22:17:20.969258006Z" level=info msg="TearDown network for sandbox \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\" successfully" Jul 14 22:17:20.982280 containerd[1453]: time="2025-07-14T22:17:20.982239759Z" level=info msg="CreateContainer within sandbox \"c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 14 22:17:21.119855 containerd[1453]: time="2025-07-14T22:17:21.119640530Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 22:17:21.119855 containerd[1453]: time="2025-07-14T22:17:21.119741434Z" level=info msg="RemovePodSandbox \"4af750a40b74a261042008d1f5da34f6e97a97a8d6851d3390e58f3de311f587\" returns successfully" Jul 14 22:17:21.120473 containerd[1453]: time="2025-07-14T22:17:21.120374025Z" level=info msg="StopPodSandbox for \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\"" Jul 14 22:17:21.199547 containerd[1453]: 2025-07-14 22:17:21.155 [WARNING][5506] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wq2kw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c", Pod:"csi-node-driver-wq2kw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia37470d634d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:21.199547 containerd[1453]: 2025-07-14 22:17:21.155 [INFO][5506] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Jul 14 22:17:21.199547 containerd[1453]: 2025-07-14 22:17:21.155 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" iface="eth0" netns="" Jul 14 22:17:21.199547 containerd[1453]: 2025-07-14 22:17:21.155 [INFO][5506] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Jul 14 22:17:21.199547 containerd[1453]: 2025-07-14 22:17:21.155 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Jul 14 22:17:21.199547 containerd[1453]: 2025-07-14 22:17:21.186 [INFO][5515] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" HandleID="k8s-pod-network.cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Workload="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:21.199547 containerd[1453]: 2025-07-14 22:17:21.186 [INFO][5515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:21.199547 containerd[1453]: 2025-07-14 22:17:21.186 [INFO][5515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:21.199547 containerd[1453]: 2025-07-14 22:17:21.192 [WARNING][5515] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" HandleID="k8s-pod-network.cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Workload="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:21.199547 containerd[1453]: 2025-07-14 22:17:21.192 [INFO][5515] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" HandleID="k8s-pod-network.cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Workload="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:21.199547 containerd[1453]: 2025-07-14 22:17:21.193 [INFO][5515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:21.199547 containerd[1453]: 2025-07-14 22:17:21.196 [INFO][5506] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Jul 14 22:17:21.200023 containerd[1453]: time="2025-07-14T22:17:21.199585923Z" level=info msg="TearDown network for sandbox \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\" successfully" Jul 14 22:17:21.200023 containerd[1453]: time="2025-07-14T22:17:21.199611552Z" level=info msg="StopPodSandbox for \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\" returns successfully" Jul 14 22:17:21.200150 containerd[1453]: time="2025-07-14T22:17:21.200115908Z" level=info msg="RemovePodSandbox for \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\"" Jul 14 22:17:21.200293 containerd[1453]: time="2025-07-14T22:17:21.200152268Z" level=info msg="Forcibly stopping sandbox \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\"" Jul 14 22:17:21.264326 containerd[1453]: 2025-07-14 22:17:21.230 [WARNING][5533] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wq2kw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eedcfc2c-c8c7-40fa-a5d9-5e29e588b0a5", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c", Pod:"csi-node-driver-wq2kw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia37470d634d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:21.264326 containerd[1453]: 2025-07-14 22:17:21.231 [INFO][5533] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Jul 14 22:17:21.264326 containerd[1453]: 2025-07-14 22:17:21.231 [INFO][5533] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" iface="eth0" netns="" Jul 14 22:17:21.264326 containerd[1453]: 2025-07-14 22:17:21.231 [INFO][5533] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Jul 14 22:17:21.264326 containerd[1453]: 2025-07-14 22:17:21.231 [INFO][5533] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Jul 14 22:17:21.264326 containerd[1453]: 2025-07-14 22:17:21.250 [INFO][5542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" HandleID="k8s-pod-network.cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Workload="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:21.264326 containerd[1453]: 2025-07-14 22:17:21.250 [INFO][5542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:21.264326 containerd[1453]: 2025-07-14 22:17:21.250 [INFO][5542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:21.264326 containerd[1453]: 2025-07-14 22:17:21.256 [WARNING][5542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" HandleID="k8s-pod-network.cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Workload="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:21.264326 containerd[1453]: 2025-07-14 22:17:21.256 [INFO][5542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" HandleID="k8s-pod-network.cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Workload="localhost-k8s-csi--node--driver--wq2kw-eth0" Jul 14 22:17:21.264326 containerd[1453]: 2025-07-14 22:17:21.259 [INFO][5542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:21.264326 containerd[1453]: 2025-07-14 22:17:21.261 [INFO][5533] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e" Jul 14 22:17:21.266814 containerd[1453]: time="2025-07-14T22:17:21.264921396Z" level=info msg="TearDown network for sandbox \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\" successfully" Jul 14 22:17:21.281680 containerd[1453]: time="2025-07-14T22:17:21.281633221Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:17:21.281725 containerd[1453]: time="2025-07-14T22:17:21.281712803Z" level=info msg="RemovePodSandbox \"cee6599400c8a1a0e69bfa4bef0f44eb19dcb4d7e77db5ae1bd020556641ac3e\" returns successfully" Jul 14 22:17:21.282319 containerd[1453]: time="2025-07-14T22:17:21.282290969Z" level=info msg="StopPodSandbox for \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\"" Jul 14 22:17:21.394705 containerd[1453]: 2025-07-14 22:17:21.318 [WARNING][5560] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--7c28s-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"0b90dbe9-4134-4806-83a7-13f4785a8131", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732", Pod:"goldmane-768f4c5c69-7c28s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid7eb0691f8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:21.394705 containerd[1453]: 2025-07-14 22:17:21.318 [INFO][5560] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Jul 14 22:17:21.394705 containerd[1453]: 2025-07-14 22:17:21.318 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" iface="eth0" netns="" Jul 14 22:17:21.394705 containerd[1453]: 2025-07-14 22:17:21.318 [INFO][5560] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Jul 14 22:17:21.394705 containerd[1453]: 2025-07-14 22:17:21.318 [INFO][5560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Jul 14 22:17:21.394705 containerd[1453]: 2025-07-14 22:17:21.372 [INFO][5569] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" HandleID="k8s-pod-network.95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Workload="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:21.394705 containerd[1453]: 2025-07-14 22:17:21.373 [INFO][5569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:21.394705 containerd[1453]: 2025-07-14 22:17:21.373 [INFO][5569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:21.394705 containerd[1453]: 2025-07-14 22:17:21.378 [WARNING][5569] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" HandleID="k8s-pod-network.95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Workload="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:21.394705 containerd[1453]: 2025-07-14 22:17:21.378 [INFO][5569] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" HandleID="k8s-pod-network.95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Workload="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:21.394705 containerd[1453]: 2025-07-14 22:17:21.380 [INFO][5569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:21.394705 containerd[1453]: 2025-07-14 22:17:21.382 [INFO][5560] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Jul 14 22:17:21.395157 containerd[1453]: time="2025-07-14T22:17:21.394776288Z" level=info msg="TearDown network for sandbox \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\" successfully" Jul 14 22:17:21.395157 containerd[1453]: time="2025-07-14T22:17:21.394809102Z" level=info msg="StopPodSandbox for \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\" returns successfully" Jul 14 22:17:21.395454 containerd[1453]: time="2025-07-14T22:17:21.395420141Z" level=info msg="RemovePodSandbox for \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\"" Jul 14 22:17:21.395479 containerd[1453]: time="2025-07-14T22:17:21.395455550Z" level=info msg="Forcibly stopping sandbox \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\"" Jul 14 22:17:21.463654 containerd[1453]: time="2025-07-14T22:17:21.463599639Z" level=info msg="CreateContainer within sandbox \"c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fb5b45ae4b8bb5306b14a7bd795005dee56347588c83ca74ae576f3f640bf077\"" Jul 14 22:17:21.464743 containerd[1453]: time="2025-07-14T22:17:21.464552484Z" level=info msg="StartContainer for \"fb5b45ae4b8bb5306b14a7bd795005dee56347588c83ca74ae576f3f640bf077\"" Jul 14 22:17:21.490474 containerd[1453]: 2025-07-14 22:17:21.430 [WARNING][5587] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--7c28s-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"0b90dbe9-4134-4806-83a7-13f4785a8131", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3cc3fa91c78172a1b92eaa88823aa73d69de25ae6cbdb05876d8ad2df03e732", Pod:"goldmane-768f4c5c69-7c28s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid7eb0691f8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:21.490474 containerd[1453]: 2025-07-14 22:17:21.430 [INFO][5587] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Jul 14 22:17:21.490474 containerd[1453]: 2025-07-14 22:17:21.430 [INFO][5587] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" iface="eth0" netns="" Jul 14 22:17:21.490474 containerd[1453]: 2025-07-14 22:17:21.430 [INFO][5587] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Jul 14 22:17:21.490474 containerd[1453]: 2025-07-14 22:17:21.431 [INFO][5587] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Jul 14 22:17:21.490474 containerd[1453]: 2025-07-14 22:17:21.459 [INFO][5596] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" HandleID="k8s-pod-network.95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Workload="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:21.490474 containerd[1453]: 2025-07-14 22:17:21.460 [INFO][5596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:21.490474 containerd[1453]: 2025-07-14 22:17:21.460 [INFO][5596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:17:21.490474 containerd[1453]: 2025-07-14 22:17:21.467 [WARNING][5596] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" HandleID="k8s-pod-network.95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Workload="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:21.490474 containerd[1453]: 2025-07-14 22:17:21.467 [INFO][5596] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" HandleID="k8s-pod-network.95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Workload="localhost-k8s-goldmane--768f4c5c69--7c28s-eth0" Jul 14 22:17:21.490474 containerd[1453]: 2025-07-14 22:17:21.480 [INFO][5596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:21.490474 containerd[1453]: 2025-07-14 22:17:21.485 [INFO][5587] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0" Jul 14 22:17:21.491096 containerd[1453]: time="2025-07-14T22:17:21.490497093Z" level=info msg="TearDown network for sandbox \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\" successfully" Jul 14 22:17:21.504970 systemd[1]: Started cri-containerd-fb5b45ae4b8bb5306b14a7bd795005dee56347588c83ca74ae576f3f640bf077.scope - libcontainer container fb5b45ae4b8bb5306b14a7bd795005dee56347588c83ca74ae576f3f640bf077. Jul 14 22:17:21.510244 containerd[1453]: time="2025-07-14T22:17:21.510182931Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:17:21.510310 containerd[1453]: time="2025-07-14T22:17:21.510273455Z" level=info msg="RemovePodSandbox \"95e299ec26890f905f66e213650252ef9c1da34e94355bf00814c399a32ed7e0\" returns successfully" Jul 14 22:17:21.511019 containerd[1453]: time="2025-07-14T22:17:21.510810712Z" level=info msg="StopPodSandbox for \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\"" Jul 14 22:17:21.557279 containerd[1453]: time="2025-07-14T22:17:21.557170537Z" level=info msg="StartContainer for \"fb5b45ae4b8bb5306b14a7bd795005dee56347588c83ca74ae576f3f640bf077\" returns successfully" Jul 14 22:17:21.591691 containerd[1453]: 2025-07-14 22:17:21.548 [WARNING][5639] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"240cf122-725b-4d48-a6f5-1c05c9f2102a", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc", Pod:"coredns-674b8bbfcf-cj6qp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6fe5943f1f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:21.591691 containerd[1453]: 2025-07-14 22:17:21.549 [INFO][5639] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Jul 14 22:17:21.591691 containerd[1453]: 2025-07-14 22:17:21.549 [INFO][5639] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" iface="eth0" netns="" Jul 14 22:17:21.591691 containerd[1453]: 2025-07-14 22:17:21.549 [INFO][5639] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Jul 14 22:17:21.591691 containerd[1453]: 2025-07-14 22:17:21.549 [INFO][5639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Jul 14 22:17:21.591691 containerd[1453]: 2025-07-14 22:17:21.574 [INFO][5656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" HandleID="k8s-pod-network.d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Workload="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:21.591691 containerd[1453]: 2025-07-14 22:17:21.574 [INFO][5656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:21.591691 containerd[1453]: 2025-07-14 22:17:21.574 [INFO][5656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:17:21.591691 containerd[1453]: 2025-07-14 22:17:21.584 [WARNING][5656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" HandleID="k8s-pod-network.d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Workload="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:21.591691 containerd[1453]: 2025-07-14 22:17:21.584 [INFO][5656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" HandleID="k8s-pod-network.d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Workload="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:21.591691 containerd[1453]: 2025-07-14 22:17:21.586 [INFO][5656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:21.591691 containerd[1453]: 2025-07-14 22:17:21.588 [INFO][5639] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Jul 14 22:17:21.592183 containerd[1453]: time="2025-07-14T22:17:21.591734819Z" level=info msg="TearDown network for sandbox \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\" successfully" Jul 14 22:17:21.592183 containerd[1453]: time="2025-07-14T22:17:21.591768484Z" level=info msg="StopPodSandbox for \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\" returns successfully" Jul 14 22:17:21.592327 containerd[1453]: time="2025-07-14T22:17:21.592296254Z" level=info msg="RemovePodSandbox for \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\"" Jul 14 22:17:21.592358 containerd[1453]: time="2025-07-14T22:17:21.592326301Z" level=info msg="Forcibly stopping sandbox \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\"" Jul 14 22:17:21.679503 containerd[1453]: 2025-07-14 22:17:21.633 [WARNING][5679] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"240cf122-725b-4d48-a6f5-1c05c9f2102a", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b2d261e7eff45cd650a9bd4ba2a081573936fcb2241b0df160363fdc7fd1dcc", Pod:"coredns-674b8bbfcf-cj6qp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6fe5943f1f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:17:21.679503 containerd[1453]: 2025-07-14 22:17:21.633 [INFO][5679] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Jul 14 22:17:21.679503 containerd[1453]: 2025-07-14 22:17:21.633 [INFO][5679] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" iface="eth0" netns="" Jul 14 22:17:21.679503 containerd[1453]: 2025-07-14 22:17:21.633 [INFO][5679] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Jul 14 22:17:21.679503 containerd[1453]: 2025-07-14 22:17:21.633 [INFO][5679] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Jul 14 22:17:21.679503 containerd[1453]: 2025-07-14 22:17:21.665 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" HandleID="k8s-pod-network.d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Workload="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:21.679503 containerd[1453]: 2025-07-14 22:17:21.666 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:17:21.679503 containerd[1453]: 2025-07-14 22:17:21.666 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:17:21.679503 containerd[1453]: 2025-07-14 22:17:21.671 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" HandleID="k8s-pod-network.d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Workload="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:21.679503 containerd[1453]: 2025-07-14 22:17:21.671 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" HandleID="k8s-pod-network.d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Workload="localhost-k8s-coredns--674b8bbfcf--cj6qp-eth0" Jul 14 22:17:21.679503 containerd[1453]: 2025-07-14 22:17:21.673 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:17:21.679503 containerd[1453]: 2025-07-14 22:17:21.676 [INFO][5679] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488" Jul 14 22:17:21.679943 containerd[1453]: time="2025-07-14T22:17:21.679547234Z" level=info msg="TearDown network for sandbox \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\" successfully" Jul 14 22:17:21.748214 containerd[1453]: time="2025-07-14T22:17:21.748160291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:17:21.748335 containerd[1453]: time="2025-07-14T22:17:21.748232780Z" level=info msg="RemovePodSandbox \"d0ee563ac5a7852602b6d8937412ed36b5310902d83d10c11978211aaf581488\" returns successfully" Jul 14 22:17:21.979750 kubelet[2516]: I0714 22:17:21.979660 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-7c28s" podStartSLOduration=29.22081198 podStartE2EDuration="40.979641844s" podCreationTimestamp="2025-07-14 22:16:41 +0000 UTC" firstStartedPulling="2025-07-14 22:17:09.101071095 +0000 UTC m=+50.473623991" lastFinishedPulling="2025-07-14 22:17:20.859900959 +0000 UTC m=+62.232453855" observedRunningTime="2025-07-14 22:17:21.979474875 +0000 UTC m=+63.352027771" watchObservedRunningTime="2025-07-14 22:17:21.979641844 +0000 UTC m=+63.352194741" Jul 14 22:17:24.326964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3506294882.mount: Deactivated successfully. 
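The Calico teardown blocks above all follow one pattern: containerd replays StopPodSandbox/RemovePodSandbox for sandboxes that no longer exist ("Failed to get podSandbox status ... not found"), the CNI plugin sees that the request's container ID no longer matches the WorkloadEndpoint's current ContainerID (the goldmane endpoint now belongs to c3cc3fa91c78...) and logs "don't delete WEP", and the IPAM plugin finds no allocation under the handle and ignores the release, so the removal still returns successfully. A minimal Go sketch of that idempotent CNI DEL behaviour; this is illustrative only, not Calico's actual code, and the function and variable names are made up:

package main

import "fmt"

// shouldDeleteEndpoint mirrors the guard behind "CNI_CONTAINERID does not match
// WorkloadEndpoint ContainerID, don't delete WEP": only the container that
// currently owns the endpoint may remove it during CNI DEL.
func shouldDeleteEndpoint(requestContainerID, endpointContainerID string) bool {
    return requestContainerID == endpointContainerID
}

// releaseHandle mirrors "Asked to release address but it doesn't exist.
// Ignoring": releasing an unknown handle is treated as success, which keeps
// repeated DEL calls for the same sandbox harmless.
func releaseHandle(allocations map[string][]string, handleID string) {
    if _, ok := allocations[handleID]; !ok {
        return // nothing allocated under this handle; nothing to do
    }
    delete(allocations, handleID)
}

func main() {
    // IDs shortened from the log: the old sandbox no longer owns the endpoint.
    fmt.Println(shouldDeleteEndpoint("95e299ec2689...", "c3cc3fa91c78...")) // false -> keep WEP
    alloc := map[string][]string{}
    releaseHandle(alloc, "k8s-pod-network.95e299ec2689...")
    fmt.Println(len(alloc)) // 0; releasing a missing handle is a no-op
}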
Jul 14 22:17:24.341355 containerd[1453]: time="2025-07-14T22:17:24.341316632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:24.342324 containerd[1453]: time="2025-07-14T22:17:24.342259315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 14 22:17:24.344345 containerd[1453]: time="2025-07-14T22:17:24.344269850Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:24.346727 containerd[1453]: time="2025-07-14T22:17:24.346688055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:24.347277 containerd[1453]: time="2025-07-14T22:17:24.347233679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.487041782s" Jul 14 22:17:24.348091 containerd[1453]: time="2025-07-14T22:17:24.347278063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 14 22:17:24.349649 containerd[1453]: time="2025-07-14T22:17:24.348888974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 14 22:17:24.354211 containerd[1453]: time="2025-07-14T22:17:24.354173832Z" level=info msg="CreateContainer within sandbox \"40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 14 22:17:24.364850 containerd[1453]: time="2025-07-14T22:17:24.364781478Z" level=info msg="CreateContainer within sandbox \"40ddff8c457a524aa4dab38c2a16c3367ecb75b0e9238d19d4d4157775902535\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"edb015428f0c76c171b26a934983460eb260fa20fc28eb5384044fbb5fcf3c55\"" Jul 14 22:17:24.366417 containerd[1453]: time="2025-07-14T22:17:24.365333975Z" level=info msg="StartContainer for \"edb015428f0c76c171b26a934983460eb260fa20fc28eb5384044fbb5fcf3c55\"" Jul 14 22:17:24.457074 systemd[1]: Started cri-containerd-edb015428f0c76c171b26a934983460eb260fa20fc28eb5384044fbb5fcf3c55.scope - libcontainer container edb015428f0c76c171b26a934983460eb260fa20fc28eb5384044fbb5fcf3c55. 
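The entries above record the container lifecycle for whisker-backend: the image is pulled (reported as 3.487041782s), CreateContainer within the existing pod sandbox 40ddff8c457a... returns a container ID, StartContainer is issued, and systemd starts a transient cri-containerd-<id>.scope for the runtime process. A rough equivalent of that pull, create and start sequence against the native containerd Go client is sketched below; the kubelet actually drives this through the CRI API rather than this client, and the socket path, IDs and image reference here are taken from the log or assumed, so treat it as a sketch, not the code path in use:

package main

import (
    "context"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/cio"
    "github.com/containerd/containerd/namespaces"
    "github.com/containerd/containerd/oci"
)

func main() {
    // Connect to containerd and work in the namespace the CRI plugin uses.
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    // Pull and unpack the image (the log reports this step taking ~3.49s).
    image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/whisker-backend:v3.30.2", containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }

    // Create a container from the image, then create and start its task.
    container, err := client.NewContainer(ctx, "whisker-backend-demo",
        containerd.WithImage(image),
        containerd.WithNewSnapshot("whisker-backend-demo-snapshot", image),
        containerd.WithNewSpec(oci.WithImageConfig(image)),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer container.Delete(ctx, containerd.WithSnapshotCleanup)

    task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    if err != nil {
        log.Fatal(err)
    }
    defer task.Delete(ctx)

    if err := task.Start(ctx); err != nil {
        log.Fatal(err)
    }
}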
Jul 14 22:17:24.502033 containerd[1453]: time="2025-07-14T22:17:24.501978588Z" level=info msg="StartContainer for \"edb015428f0c76c171b26a934983460eb260fa20fc28eb5384044fbb5fcf3c55\" returns successfully" Jul 14 22:17:24.983156 kubelet[2516]: I0714 22:17:24.982775 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-78b748b8f7-7ljwr" podStartSLOduration=1.6886783950000002 podStartE2EDuration="20.982758408s" podCreationTimestamp="2025-07-14 22:17:04 +0000 UTC" firstStartedPulling="2025-07-14 22:17:05.054197051 +0000 UTC m=+46.426749947" lastFinishedPulling="2025-07-14 22:17:24.348277064 +0000 UTC m=+65.720829960" observedRunningTime="2025-07-14 22:17:24.982076064 +0000 UTC m=+66.354628960" watchObservedRunningTime="2025-07-14 22:17:24.982758408 +0000 UTC m=+66.355311305" Jul 14 22:17:26.924070 containerd[1453]: time="2025-07-14T22:17:26.924016326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:26.925331 containerd[1453]: time="2025-07-14T22:17:26.925029562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 14 22:17:26.926548 containerd[1453]: time="2025-07-14T22:17:26.926520703Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:26.958132 containerd[1453]: time="2025-07-14T22:17:26.958066706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:17:26.959052 containerd[1453]: time="2025-07-14T22:17:26.959016652Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.610099113s" Jul 14 22:17:26.959114 containerd[1453]: time="2025-07-14T22:17:26.959052410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 14 22:17:26.974960 containerd[1453]: time="2025-07-14T22:17:26.974897024Z" level=info msg="CreateContainer within sandbox \"cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 14 22:17:27.011202 containerd[1453]: time="2025-07-14T22:17:27.011110461Z" level=info msg="CreateContainer within sandbox \"cfb2e22ab15aa000b6df817192a2589b9fb1c88e35db8613220bf4a727e7487c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"af318bc0a0ce4fe6df7729fcf1a5a77bc2ef56c289abfce04f281b9586a258c2\"" Jul 14 22:17:27.012042 containerd[1453]: time="2025-07-14T22:17:27.012003919Z" level=info msg="StartContainer for \"af318bc0a0ce4fe6df7729fcf1a5a77bc2ef56c289abfce04f281b9586a258c2\"" Jul 14 22:17:27.073977 systemd[1]: Started cri-containerd-af318bc0a0ce4fe6df7729fcf1a5a77bc2ef56c289abfce04f281b9586a258c2.scope - libcontainer container 
af318bc0a0ce4fe6df7729fcf1a5a77bc2ef56c289abfce04f281b9586a258c2. Jul 14 22:17:27.114697 containerd[1453]: time="2025-07-14T22:17:27.114631791Z" level=info msg="StartContainer for \"af318bc0a0ce4fe6df7729fcf1a5a77bc2ef56c289abfce04f281b9586a258c2\" returns successfully" Jul 14 22:17:27.852099 kubelet[2516]: I0714 22:17:27.852056 2516 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 14 22:17:27.853057 kubelet[2516]: I0714 22:17:27.853035 2516 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 14 22:17:28.011474 kubelet[2516]: I0714 22:17:28.011414 2516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wq2kw" podStartSLOduration=28.128588665 podStartE2EDuration="48.011399486s" podCreationTimestamp="2025-07-14 22:16:40 +0000 UTC" firstStartedPulling="2025-07-14 22:17:07.078138545 +0000 UTC m=+48.450691441" lastFinishedPulling="2025-07-14 22:17:26.960949366 +0000 UTC m=+68.333502262" observedRunningTime="2025-07-14 22:17:28.010551136 +0000 UTC m=+69.383104032" watchObservedRunningTime="2025-07-14 22:17:28.011399486 +0000 UTC m=+69.383952382" Jul 14 22:17:28.129637 systemd[1]: Started sshd@7-10.0.0.96:22-10.0.0.1:60006.service - OpenSSH per-connection server daemon (10.0.0.1:60006). Jul 14 22:17:28.237162 sshd[5863]: Accepted publickey for core from 10.0.0.1 port 60006 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:17:28.239285 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:17:28.244237 systemd-logind[1434]: New session 8 of user core. Jul 14 22:17:28.250976 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 14 22:17:28.610785 sshd[5863]: pam_unix(sshd:session): session closed for user core Jul 14 22:17:28.617212 systemd[1]: sshd@7-10.0.0.96:22-10.0.0.1:60006.service: Deactivated successfully. Jul 14 22:17:28.619566 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 22:17:28.620997 systemd-logind[1434]: Session 8 logged out. Waiting for processes to exit. Jul 14 22:17:28.622029 systemd-logind[1434]: Removed session 8. Jul 14 22:17:31.720975 kubelet[2516]: E0714 22:17:31.720935 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:17:33.623017 systemd[1]: Started sshd@8-10.0.0.96:22-10.0.0.1:52172.service - OpenSSH per-connection server daemon (10.0.0.1:52172). Jul 14 22:17:33.657618 sshd[5883]: Accepted publickey for core from 10.0.0.1 port 52172 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:17:33.659264 sshd[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:17:33.663329 systemd-logind[1434]: New session 9 of user core. Jul 14 22:17:33.671981 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 14 22:17:33.792357 sshd[5883]: pam_unix(sshd:session): session closed for user core Jul 14 22:17:33.796152 systemd[1]: sshd@8-10.0.0.96:22-10.0.0.1:52172.service: Deactivated successfully. Jul 14 22:17:33.798329 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 22:17:33.799839 systemd-logind[1434]: Session 9 logged out. Waiting for processes to exit. 
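The csi_plugin.go lines above show the Tigera CSI driver registering with the kubelet over /var/lib/kubelet/plugins/csi.tigera.io/csi.sock, and the "Observed pod startup duration" entries for goldmane-768f4c5c69-7c28s, whisker-78b748b8f7-7ljwr and csi-node-driver-wq2kw print two durations that can be checked directly against the timestamps in the same line: podStartE2EDuration is the watch-observed running time minus the pod creation timestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling). For csi-node-driver-wq2kw: 48.011399486s - (22:17:26.960949366 - 22:17:07.078138545) = 48.011399486s - 19.882810821s = 28.128588665s, which matches the logged value; the same relation holds for the other two pods. The subtraction in Go, with the timestamps copied from the log:

package main

import (
    "fmt"
    "time"
)

func main() {
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    parse := func(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    // Timestamps from the csi-node-driver-wq2kw entry above.
    created := parse("2025-07-14 22:16:40 +0000 UTC")
    firstPull := parse("2025-07-14 22:17:07.078138545 +0000 UTC")
    lastPull := parse("2025-07-14 22:17:26.960949366 +0000 UTC")
    running := parse("2025-07-14 22:17:28.011399486 +0000 UTC") // watchObservedRunningTime

    e2e := running.Sub(created)          // podStartE2EDuration
    slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: E2E minus image-pull time
    fmt.Println(e2e, slo)                // 48.011399486s 28.128588665s
}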
Jul 14 22:17:33.800861 systemd-logind[1434]: Removed session 9. Jul 14 22:17:38.802895 systemd[1]: Started sshd@9-10.0.0.96:22-10.0.0.1:52174.service - OpenSSH per-connection server daemon (10.0.0.1:52174). Jul 14 22:17:38.838405 sshd[5900]: Accepted publickey for core from 10.0.0.1 port 52174 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:17:38.839873 sshd[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:17:38.843307 systemd-logind[1434]: New session 10 of user core. Jul 14 22:17:38.852965 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 14 22:17:38.975445 sshd[5900]: pam_unix(sshd:session): session closed for user core Jul 14 22:17:38.979497 systemd-logind[1434]: Session 10 logged out. Waiting for processes to exit. Jul 14 22:17:38.979609 systemd[1]: sshd@9-10.0.0.96:22-10.0.0.1:52174.service: Deactivated successfully. Jul 14 22:17:38.982480 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 22:17:38.984798 systemd-logind[1434]: Removed session 10. Jul 14 22:17:40.126956 systemd[1]: run-containerd-runc-k8s.io-29cbdd2c67da307da2dd2aab4e3fb8d9f6866b152aed315104ea1839b2efcd41-runc.PG8DjM.mount: Deactivated successfully. Jul 14 22:17:43.719569 kubelet[2516]: E0714 22:17:43.719523 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:17:43.990051 systemd[1]: Started sshd@10-10.0.0.96:22-10.0.0.1:56198.service - OpenSSH per-connection server daemon (10.0.0.1:56198). Jul 14 22:17:44.031557 sshd[5958]: Accepted publickey for core from 10.0.0.1 port 56198 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:17:44.033594 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:17:44.037493 systemd-logind[1434]: New session 11 of user core. Jul 14 22:17:44.047000 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 14 22:17:44.184612 sshd[5958]: pam_unix(sshd:session): session closed for user core Jul 14 22:17:44.188476 systemd[1]: sshd@10-10.0.0.96:22-10.0.0.1:56198.service: Deactivated successfully. Jul 14 22:17:44.190395 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 22:17:44.191094 systemd-logind[1434]: Session 11 logged out. Waiting for processes to exit. Jul 14 22:17:44.192035 systemd-logind[1434]: Removed session 11. Jul 14 22:17:48.606623 kubelet[2516]: I0714 22:17:48.606577 2516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:17:49.199899 systemd[1]: Started sshd@11-10.0.0.96:22-10.0.0.1:51858.service - OpenSSH per-connection server daemon (10.0.0.1:51858). Jul 14 22:17:49.236530 sshd[5985]: Accepted publickey for core from 10.0.0.1 port 51858 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:17:49.238072 sshd[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:17:49.242275 systemd-logind[1434]: New session 12 of user core. Jul 14 22:17:49.252035 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 14 22:17:49.373383 sshd[5985]: pam_unix(sshd:session): session closed for user core Jul 14 22:17:49.377798 systemd[1]: sshd@11-10.0.0.96:22-10.0.0.1:51858.service: Deactivated successfully. Jul 14 22:17:49.379899 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 22:17:49.380545 systemd-logind[1434]: Session 12 logged out. Waiting for processes to exit. 
Jul 14 22:17:49.381453 systemd-logind[1434]: Removed session 12. Jul 14 22:17:49.719485 kubelet[2516]: E0714 22:17:49.719436 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:17:49.776849 kubelet[2516]: I0714 22:17:49.774993 2516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:17:54.385308 systemd[1]: Started sshd@12-10.0.0.96:22-10.0.0.1:51868.service - OpenSSH per-connection server daemon (10.0.0.1:51868). Jul 14 22:17:54.455163 sshd[6025]: Accepted publickey for core from 10.0.0.1 port 51868 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:17:54.457242 sshd[6025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:17:54.461677 systemd-logind[1434]: New session 13 of user core. Jul 14 22:17:54.468956 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 14 22:17:54.602616 sshd[6025]: pam_unix(sshd:session): session closed for user core Jul 14 22:17:54.612065 systemd[1]: sshd@12-10.0.0.96:22-10.0.0.1:51868.service: Deactivated successfully. Jul 14 22:17:54.613992 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 22:17:54.615634 systemd-logind[1434]: Session 13 logged out. Waiting for processes to exit. Jul 14 22:17:54.617079 systemd[1]: Started sshd@13-10.0.0.96:22-10.0.0.1:51870.service - OpenSSH per-connection server daemon (10.0.0.1:51870). Jul 14 22:17:54.618306 systemd-logind[1434]: Removed session 13. Jul 14 22:17:54.663837 sshd[6041]: Accepted publickey for core from 10.0.0.1 port 51870 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:17:54.664710 sshd[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:17:54.675683 systemd-logind[1434]: New session 14 of user core. Jul 14 22:17:54.682957 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 14 22:17:54.910031 sshd[6041]: pam_unix(sshd:session): session closed for user core Jul 14 22:17:54.918609 systemd[1]: sshd@13-10.0.0.96:22-10.0.0.1:51870.service: Deactivated successfully. Jul 14 22:17:54.920251 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 22:17:54.921599 systemd-logind[1434]: Session 14 logged out. Waiting for processes to exit. Jul 14 22:17:54.928758 systemd[1]: Started sshd@14-10.0.0.96:22-10.0.0.1:51876.service - OpenSSH per-connection server daemon (10.0.0.1:51876). Jul 14 22:17:54.929728 systemd-logind[1434]: Removed session 14. Jul 14 22:17:54.957986 sshd[6053]: Accepted publickey for core from 10.0.0.1 port 51876 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:17:54.959632 sshd[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:17:54.963610 systemd-logind[1434]: New session 15 of user core. Jul 14 22:17:54.969995 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 14 22:17:55.197550 sshd[6053]: pam_unix(sshd:session): session closed for user core Jul 14 22:17:55.200500 systemd-logind[1434]: Session 15 logged out. Waiting for processes to exit. Jul 14 22:17:55.201745 systemd[1]: sshd@14-10.0.0.96:22-10.0.0.1:51876.service: Deactivated successfully. Jul 14 22:17:55.206548 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 22:17:55.209758 systemd-logind[1434]: Removed session 15. 
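Each SSH connection in this log follows the same systemd pattern: a per-connection sshd@N-<local>:22-<peer>:<port>.service unit, an "Accepted publickey" line from sshd, a session-N.scope opened via pam/logind, and the matching close and removal lines. If you need to pull the user, peer address and key fingerprint out of the "Accepted publickey" lines, a small hypothetical Go helper along these lines is enough (the regular expression simply matches the field layout visible above):

package main

import (
    "fmt"
    "regexp"
)

// acceptedRe matches sshd lines like:
// "Accepted publickey for core from 10.0.0.1 port 51858 ssh2: RSA SHA256:..."
var acceptedRe = regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) (\S+)`)

func main() {
    line := "Accepted publickey for core from 10.0.0.1 port 51858 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg"
    if m := acceptedRe.FindStringSubmatch(line); m != nil {
        fmt.Printf("user=%s src=%s:%s keytype=%s fingerprint=%s\n", m[1], m[2], m[3], m[4], m[5])
    }
}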
Jul 14 22:17:58.719624 kubelet[2516]: E0714 22:17:58.719570 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:18:00.211415 systemd[1]: Started sshd@15-10.0.0.96:22-10.0.0.1:35752.service - OpenSSH per-connection server daemon (10.0.0.1:35752). Jul 14 22:18:00.250611 sshd[6093]: Accepted publickey for core from 10.0.0.1 port 35752 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:18:00.252253 sshd[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:18:00.256322 systemd-logind[1434]: New session 16 of user core. Jul 14 22:18:00.268945 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 14 22:18:00.383126 sshd[6093]: pam_unix(sshd:session): session closed for user core Jul 14 22:18:00.387132 systemd[1]: sshd@15-10.0.0.96:22-10.0.0.1:35752.service: Deactivated successfully. Jul 14 22:18:00.389204 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 22:18:00.389838 systemd-logind[1434]: Session 16 logged out. Waiting for processes to exit. Jul 14 22:18:00.390732 systemd-logind[1434]: Removed session 16. Jul 14 22:18:05.395266 systemd[1]: Started sshd@16-10.0.0.96:22-10.0.0.1:35762.service - OpenSSH per-connection server daemon (10.0.0.1:35762). Jul 14 22:18:05.435606 sshd[6127]: Accepted publickey for core from 10.0.0.1 port 35762 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:18:05.437504 sshd[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:18:05.442437 systemd-logind[1434]: New session 17 of user core. Jul 14 22:18:05.448943 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 14 22:18:05.589464 sshd[6127]: pam_unix(sshd:session): session closed for user core Jul 14 22:18:05.597048 systemd[1]: sshd@16-10.0.0.96:22-10.0.0.1:35762.service: Deactivated successfully. Jul 14 22:18:05.599096 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 22:18:05.599858 systemd-logind[1434]: Session 17 logged out. Waiting for processes to exit. Jul 14 22:18:05.606213 systemd[1]: Started sshd@17-10.0.0.96:22-10.0.0.1:35770.service - OpenSSH per-connection server daemon (10.0.0.1:35770). Jul 14 22:18:05.606886 systemd-logind[1434]: Removed session 17. Jul 14 22:18:05.638384 sshd[6142]: Accepted publickey for core from 10.0.0.1 port 35770 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:18:05.640138 sshd[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:18:05.644472 systemd-logind[1434]: New session 18 of user core. Jul 14 22:18:05.654036 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 14 22:18:08.719675 kubelet[2516]: E0714 22:18:08.719626 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:18:09.943239 systemd[1]: run-containerd-runc-k8s.io-a5e5aec2ba6f681f87482740497db87776cf9e202b23343dbfee6b678c37180e-runc.ZlExW6.mount: Deactivated successfully. 
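The recurring kubelet dns.go:153 errors indicate that the node's resolver configuration lists more nameservers than the classic resolver limit of three, so the kubelet omits the extras and applies only 1.1.1.1, 1.0.0.1 and 8.8.8.8 to pod resolv.conf files. A minimal sketch of that trimming is shown below; it is illustrative rather than the kubelet's code, and the /etc/resolv.conf path and its contents are assumptions:

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

const maxNameservers = 3 // classic resolver limit; the kubelet enforces the same cap

func main() {
    f, err := os.Open("/etc/resolv.conf") // assumed path on the node
    if err != nil {
        panic(err)
    }
    defer f.Close()

    // Collect every "nameserver <addr>" entry in order.
    var servers []string
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) >= 2 && fields[0] == "nameserver" {
            servers = append(servers, fields[1])
        }
    }

    if len(servers) > maxNameservers {
        fmt.Printf("nameserver limits exceeded, applying first %d of %d: %s\n",
            maxNameservers, len(servers), strings.Join(servers[:maxNameservers], " "))
    } else {
        fmt.Println("applied nameservers:", strings.Join(servers, " "))
    }
}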
Jul 14 22:18:10.719695 kubelet[2516]: E0714 22:18:10.719655 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:18:15.878394 sshd[6142]: pam_unix(sshd:session): session closed for user core Jul 14 22:18:15.887111 systemd[1]: sshd@17-10.0.0.96:22-10.0.0.1:35770.service: Deactivated successfully. Jul 14 22:18:15.889036 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 22:18:15.889843 systemd-logind[1434]: Session 18 logged out. Waiting for processes to exit. Jul 14 22:18:15.898316 systemd[1]: Started sshd@18-10.0.0.96:22-10.0.0.1:39024.service - OpenSSH per-connection server daemon (10.0.0.1:39024). Jul 14 22:18:15.899706 systemd-logind[1434]: Removed session 18. Jul 14 22:18:15.953864 sshd[6198]: Accepted publickey for core from 10.0.0.1 port 39024 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:18:15.955419 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:18:15.959893 systemd-logind[1434]: New session 19 of user core. Jul 14 22:18:15.966945 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 14 22:18:22.719489 kubelet[2516]: E0714 22:18:22.719375 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:18:38.521884 sshd[6198]: pam_unix(sshd:session): session closed for user core Jul 14 22:18:38.538623 systemd[1]: sshd@18-10.0.0.96:22-10.0.0.1:39024.service: Deactivated successfully. Jul 14 22:18:38.544011 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 22:18:38.546367 systemd-logind[1434]: Session 19 logged out. Waiting for processes to exit. Jul 14 22:18:38.556260 systemd[1]: Started sshd@19-10.0.0.96:22-10.0.0.1:45126.service - OpenSSH per-connection server daemon (10.0.0.1:45126). Jul 14 22:18:38.567197 systemd-logind[1434]: Removed session 19. Jul 14 22:18:38.623449 sshd[6257]: Accepted publickey for core from 10.0.0.1 port 45126 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:18:38.626286 sshd[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:18:38.631055 systemd-logind[1434]: New session 20 of user core. Jul 14 22:18:38.635999 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 14 22:18:39.715395 sshd[6257]: pam_unix(sshd:session): session closed for user core Jul 14 22:18:39.725979 systemd[1]: sshd@19-10.0.0.96:22-10.0.0.1:45126.service: Deactivated successfully. Jul 14 22:18:39.727717 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 22:18:39.729394 systemd-logind[1434]: Session 20 logged out. Waiting for processes to exit. Jul 14 22:18:39.730787 systemd[1]: Started sshd@20-10.0.0.96:22-10.0.0.1:58946.service - OpenSSH per-connection server daemon (10.0.0.1:58946). Jul 14 22:18:39.731672 systemd-logind[1434]: Removed session 20. Jul 14 22:18:39.792402 sshd[6282]: Accepted publickey for core from 10.0.0.1 port 58946 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:18:39.794184 sshd[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:18:39.798604 systemd-logind[1434]: New session 21 of user core. Jul 14 22:18:39.809958 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 14 22:18:39.945950 systemd[1]: run-containerd-runc-k8s.io-a5e5aec2ba6f681f87482740497db87776cf9e202b23343dbfee6b678c37180e-runc.DGs5lf.mount: Deactivated successfully. Jul 14 22:18:40.196005 sshd[6282]: pam_unix(sshd:session): session closed for user core Jul 14 22:18:40.200776 systemd[1]: sshd@20-10.0.0.96:22-10.0.0.1:58946.service: Deactivated successfully. Jul 14 22:18:40.206897 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 22:18:40.211225 systemd-logind[1434]: Session 21 logged out. Waiting for processes to exit. Jul 14 22:18:40.213893 systemd-logind[1434]: Removed session 21. Jul 14 22:18:43.720538 kubelet[2516]: E0714 22:18:43.720485 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:18:45.207015 systemd[1]: Started sshd@21-10.0.0.96:22-10.0.0.1:58956.service - OpenSSH per-connection server daemon (10.0.0.1:58956). Jul 14 22:18:45.244028 sshd[6343]: Accepted publickey for core from 10.0.0.1 port 58956 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:18:45.246273 sshd[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:18:45.250565 systemd-logind[1434]: New session 22 of user core. Jul 14 22:18:45.258962 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 14 22:18:45.394304 sshd[6343]: pam_unix(sshd:session): session closed for user core Jul 14 22:18:45.397787 systemd[1]: sshd@21-10.0.0.96:22-10.0.0.1:58956.service: Deactivated successfully. Jul 14 22:18:45.399991 systemd[1]: session-22.scope: Deactivated successfully. Jul 14 22:18:45.401902 systemd-logind[1434]: Session 22 logged out. Waiting for processes to exit. Jul 14 22:18:45.403154 systemd-logind[1434]: Removed session 22. Jul 14 22:18:50.408987 systemd[1]: Started sshd@22-10.0.0.96:22-10.0.0.1:52714.service - OpenSSH per-connection server daemon (10.0.0.1:52714). Jul 14 22:18:50.444228 sshd[6357]: Accepted publickey for core from 10.0.0.1 port 52714 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:18:50.446099 sshd[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:18:50.450274 systemd-logind[1434]: New session 23 of user core. Jul 14 22:18:50.455949 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 14 22:18:50.592070 sshd[6357]: pam_unix(sshd:session): session closed for user core Jul 14 22:18:50.596871 systemd[1]: sshd@22-10.0.0.96:22-10.0.0.1:52714.service: Deactivated successfully. Jul 14 22:18:50.598938 systemd[1]: session-23.scope: Deactivated successfully. Jul 14 22:18:50.599698 systemd-logind[1434]: Session 23 logged out. Waiting for processes to exit. Jul 14 22:18:50.600609 systemd-logind[1434]: Removed session 23. Jul 14 22:18:54.718940 kubelet[2516]: E0714 22:18:54.718897 2516 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:18:55.603983 systemd[1]: Started sshd@23-10.0.0.96:22-10.0.0.1:52728.service - OpenSSH per-connection server daemon (10.0.0.1:52728). 
Jul 14 22:18:55.638304 sshd[6393]: Accepted publickey for core from 10.0.0.1 port 52728 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:18:55.639955 sshd[6393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:18:55.644002 systemd-logind[1434]: New session 24 of user core. Jul 14 22:18:55.650953 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 14 22:18:55.756310 sshd[6393]: pam_unix(sshd:session): session closed for user core Jul 14 22:18:55.760240 systemd[1]: sshd@23-10.0.0.96:22-10.0.0.1:52728.service: Deactivated successfully. Jul 14 22:18:55.762024 systemd[1]: session-24.scope: Deactivated successfully. Jul 14 22:18:55.762652 systemd-logind[1434]: Session 24 logged out. Waiting for processes to exit. Jul 14 22:18:55.763533 systemd-logind[1434]: Removed session 24.