Jul 11 00:25:34.988946 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 22:46:23 -00 2025 Jul 11 00:25:34.988987 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1 Jul 11 00:25:34.989004 kernel: BIOS-provided physical RAM map: Jul 11 00:25:34.989013 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 11 00:25:34.989022 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 11 00:25:34.989031 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 11 00:25:34.989041 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jul 11 00:25:34.989050 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jul 11 00:25:34.989059 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 11 00:25:34.989073 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jul 11 00:25:34.989082 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 11 00:25:34.989091 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 11 00:25:34.989105 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 11 00:25:34.989115 kernel: NX (Execute Disable) protection: active Jul 11 00:25:34.989126 kernel: APIC: Static calls initialized Jul 11 00:25:34.989143 kernel: SMBIOS 2.8 present. 
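The BIOS-e820 map above is the authoritative picture of guest RAM: only ranges marked "usable" become allocatable memory, and the end addresses are inclusive, so each range spans end - start + 1 bytes. A minimal sketch of that arithmetic in Python (the regex and the three-line sample are illustrative, not part of Flatcar's tooling):

    import re

    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    sample = """
    BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
    BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
    BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
    """

    usable = 0
    for m in E820.finditer(sample):
        start, end, kind = int(m.group(1), 16), int(m.group(2), 16), m.group(3)
        if kind == "usable":
            usable += end - start + 1  # e820 ranges are inclusive

    print(usable, f"bytes ~= {usable / 2**20:.1f} MiB")
    # 2633481216 bytes ~= 2511.5 MiB, within a few KiB of the 2571752K
    # total the kernel reports in the "Memory:" line further down.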
Jul 11 00:25:34.989154 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jul 11 00:25:34.989163 kernel: Hypervisor detected: KVM Jul 11 00:25:34.989173 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 11 00:25:34.989183 kernel: kvm-clock: using sched offset of 3248592911 cycles Jul 11 00:25:34.989207 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 11 00:25:34.989218 kernel: tsc: Detected 2794.748 MHz processor Jul 11 00:25:34.989229 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 11 00:25:34.989239 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 11 00:25:34.989255 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jul 11 00:25:34.989265 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 11 00:25:34.989275 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 11 00:25:34.989285 kernel: Using GB pages for direct mapping Jul 11 00:25:34.989295 kernel: ACPI: Early table checksum verification disabled Jul 11 00:25:34.989305 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jul 11 00:25:34.989327 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:34.989337 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:34.989350 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:34.989365 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jul 11 00:25:34.989375 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:34.989386 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:34.989396 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:34.989406 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:34.989416 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jul 11 00:25:34.989427 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jul 11 00:25:34.989442 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jul 11 00:25:34.989456 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jul 11 00:25:34.989466 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jul 11 00:25:34.989477 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jul 11 00:25:34.989487 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jul 11 00:25:34.989498 kernel: No NUMA configuration found Jul 11 00:25:34.989508 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jul 11 00:25:34.989522 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jul 11 00:25:34.989563 kernel: Zone ranges: Jul 11 00:25:34.989577 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 11 00:25:34.989587 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jul 11 00:25:34.989598 kernel: Normal empty Jul 11 00:25:34.989632 kernel: Movable zone start for each node Jul 11 00:25:34.989644 kernel: Early memory node ranges Jul 11 00:25:34.989654 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 11 00:25:34.989665 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jul 11 00:25:34.989675 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 11 00:25:34.989697 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 11 00:25:34.989713 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 11 00:25:34.989723 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jul 11 00:25:34.989734 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 11 00:25:34.989745 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 11 00:25:34.989755 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 11 00:25:34.989766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 11 00:25:34.989776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 11 00:25:34.989787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 11 00:25:34.989803 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 11 00:25:34.989814 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 11 00:25:34.989824 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 11 00:25:34.989835 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 11 00:25:34.989845 kernel: TSC deadline timer available Jul 11 00:25:34.989856 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 11 00:25:34.989867 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 11 00:25:34.989877 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 11 00:25:34.989891 kernel: kvm-guest: setup PV sched yield Jul 11 00:25:34.989906 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 11 00:25:34.989917 kernel: Booting paravirtualized kernel on KVM Jul 11 00:25:34.989928 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 11 00:25:34.989938 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 11 00:25:34.989949 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 Jul 11 00:25:34.989968 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 Jul 11 00:25:34.989979 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 11 00:25:34.989989 kernel: kvm-guest: PV spinlocks enabled Jul 11 00:25:34.990000 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 11 00:25:34.990016 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1 Jul 11 00:25:34.990027 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 11 00:25:34.990038 kernel: random: crng init done Jul 11 00:25:34.990048 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 11 00:25:34.990059 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 11 00:25:34.990070 kernel: Fallback order for Node 0: 0 Jul 11 00:25:34.990081 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 11 00:25:34.990091 kernel: Policy zone: DMA32 Jul 11 00:25:34.990105 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 11 00:25:34.990117 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22744K rodata, 42872K init, 2320K bss, 136900K reserved, 0K cma-reserved) Jul 11 00:25:34.990127 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 11 00:25:34.990138 kernel: ftrace: allocating 37966 entries in 149 pages Jul 11 00:25:34.990148 kernel: ftrace: allocated 149 pages with 4 groups Jul 11 00:25:34.990159 kernel: Dynamic Preempt: voluntary Jul 11 00:25:34.990169 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 11 00:25:34.990181 kernel: rcu: RCU event tracing is enabled. Jul 11 00:25:34.990191 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 11 00:25:34.990206 kernel: Trampoline variant of Tasks RCU enabled. Jul 11 00:25:34.990217 kernel: Rude variant of Tasks RCU enabled. Jul 11 00:25:34.990227 kernel: Tracing variant of Tasks RCU enabled. Jul 11 00:25:34.990238 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 11 00:25:34.990252 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 11 00:25:34.990263 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 11 00:25:34.990274 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 11 00:25:34.990284 kernel: Console: colour VGA+ 80x25 Jul 11 00:25:34.990294 kernel: printk: console [ttyS0] enabled Jul 11 00:25:34.990307 kernel: ACPI: Core revision 20230628 Jul 11 00:25:34.990317 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 11 00:25:34.990327 kernel: APIC: Switch to symmetric I/O mode setup Jul 11 00:25:34.990336 kernel: x2apic enabled Jul 11 00:25:34.990346 kernel: APIC: Switched APIC routing to: physical x2apic Jul 11 00:25:34.990355 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 11 00:25:34.990365 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 11 00:25:34.990375 kernel: kvm-guest: setup PV IPIs Jul 11 00:25:34.990397 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 11 00:25:34.990407 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 11 00:25:34.990417 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Jul 11 00:25:34.990427 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 11 00:25:34.990440 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 11 00:25:34.990450 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 11 00:25:34.990460 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 11 00:25:34.990470 kernel: Spectre V2 : Mitigation: Retpolines Jul 11 00:25:34.990480 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 11 00:25:34.990493 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 11 00:25:34.990503 kernel: RETBleed: Mitigation: untrained return thunk Jul 11 00:25:34.990517 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 11 00:25:34.990528 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 11 00:25:34.990538 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
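The mitigation lines above (Spectre V1/V2, RETBleed, and the SRSO warning that continues just below) are also exported at runtime under sysfs, so the same state can be read back after boot. A small sketch, assuming a standard Linux sysfs layout (the loop itself is illustrative):

    from pathlib import Path

    VULNS = Path("/sys/devices/system/cpu/vulnerabilities")

    # Each file holds one line, e.g. "Mitigation: Retpolines" for spectre_v2,
    # mirroring the boot-time messages in this log.
    for f in sorted(VULNS.iterdir()):
        print(f"{f.name:28} {f.read_text().strip()}")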
Jul 11 00:25:34.990549 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 11 00:25:34.990559 kernel: x86/bugs: return thunk changed Jul 11 00:25:34.990569 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 11 00:25:34.990582 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 11 00:25:34.990592 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 11 00:25:34.990602 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 11 00:25:34.990658 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 11 00:25:34.990668 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 11 00:25:34.990678 kernel: Freeing SMP alternatives memory: 32K Jul 11 00:25:34.990688 kernel: pid_max: default: 32768 minimum: 301 Jul 11 00:25:34.990698 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 11 00:25:34.990709 kernel: landlock: Up and running. Jul 11 00:25:34.990724 kernel: SELinux: Initializing. Jul 11 00:25:34.990735 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 11 00:25:34.990746 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 11 00:25:34.990756 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 11 00:25:34.990767 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 11 00:25:34.990777 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 11 00:25:34.990787 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 11 00:25:34.990798 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 11 00:25:34.990811 kernel: ... version: 0 Jul 11 00:25:34.990824 kernel: ... bit width: 48 Jul 11 00:25:34.990834 kernel: ... generic registers: 6 Jul 11 00:25:34.990844 kernel: ... value mask: 0000ffffffffffff Jul 11 00:25:34.990855 kernel: ... max period: 00007fffffffffff Jul 11 00:25:34.990864 kernel: ... fixed-purpose events: 0 Jul 11 00:25:34.990874 kernel: ... event mask: 000000000000003f Jul 11 00:25:34.990885 kernel: signal: max sigframe size: 1776 Jul 11 00:25:34.990895 kernel: rcu: Hierarchical SRCU implementation. Jul 11 00:25:34.990906 kernel: rcu: Max phase no-delay instances is 400. Jul 11 00:25:34.990919 kernel: smp: Bringing up secondary CPUs ... Jul 11 00:25:34.990929 kernel: smpboot: x86: Booting SMP configuration: Jul 11 00:25:34.990939 kernel: .... node #0, CPUs: #1 #2 #3
Jul 11 00:25:34.990951 kernel: smp: Brought up 1 node, 4 CPUs Jul 11 00:25:34.990972 kernel: smpboot: Max logical packages: 1 Jul 11 00:25:34.990984 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 11 00:25:34.990995 kernel: devtmpfs: initialized Jul 11 00:25:34.991006 kernel: x86/mm: Memory block size: 128MB Jul 11 00:25:34.991017 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 11 00:25:34.991032 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 11 00:25:34.991043 kernel: pinctrl core: initialized pinctrl subsystem Jul 11 00:25:34.991055 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 11 00:25:34.991066 kernel: audit: initializing netlink subsys (disabled) Jul 11 00:25:34.991078 kernel: audit: type=2000 audit(1752193533.694:1): state=initialized audit_enabled=0 res=1 Jul 11 00:25:34.991088 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 11 00:25:34.991099 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 11 00:25:34.991111 kernel: cpuidle: using governor menu Jul 11 00:25:34.991122 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 11 00:25:34.991137 kernel: dca service started, version 1.12.1 Jul 11 00:25:34.991148 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 11 00:25:34.991160 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jul 11 00:25:34.991171 kernel: PCI: Using configuration type 1 for base access Jul 11 00:25:34.991182 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 11 00:25:34.991194 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 11 00:25:34.991205 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 11 00:25:34.991216 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 11 00:25:34.991227 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 11 00:25:34.991242 kernel: ACPI: Added _OSI(Module Device) Jul 11 00:25:34.991253 kernel: ACPI: Added _OSI(Processor Device) Jul 11 00:25:34.991264 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 11 00:25:34.991275 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 11 00:25:34.991287 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 11 00:25:34.991297 kernel: ACPI: Interpreter enabled Jul 11 00:25:34.991308 kernel: ACPI: PM: (supports S0 S3 S5) Jul 11 00:25:34.991320 kernel: ACPI: Using IOAPIC for interrupt routing Jul 11 00:25:34.991331 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 11 00:25:34.991350 kernel: PCI: Using E820 reservations for host bridge windows Jul 11 00:25:34.991362 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 11 00:25:34.991373 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 11 00:25:34.991675 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 11 00:25:34.991837 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 11 00:25:34.992025 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 11 00:25:34.992042 kernel: PCI host bridge to bus 0000:00 Jul 11 00:25:34.992233 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 11 00:25:34.992390 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 11 00:25:34.992533 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 11 00:25:34.992689 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jul 11 00:25:34.992821 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 11 00:25:34.992952 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jul 11 00:25:34.993093 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 11 00:25:34.993297 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 11 00:25:34.993505 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jul 11 00:25:34.993707 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jul 11 00:25:34.993885 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jul 11 00:25:34.994071 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jul 11 00:25:34.994244 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 11 00:25:34.994423 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jul 11 00:25:34.994576 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jul 11 00:25:34.994750 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jul 11 00:25:34.994927 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jul 11 00:25:34.995179 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jul 11 00:25:34.995350 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jul 11 00:25:34.995500 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jul 11 00:25:34.995703 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jul 11 00:25:34.995894 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 11 00:25:34.996064 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jul 11 00:25:34.996237 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jul 11 00:25:34.996410 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jul 11 00:25:34.996565 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jul 11 00:25:34.996827 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 11 00:25:34.997007 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 11 00:25:34.997184 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 11 00:25:34.997332 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jul 11 00:25:34.997484 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jul 11 00:25:34.997673 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 11 00:25:34.997838 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jul 11 00:25:34.997854 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 11 00:25:34.997870 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 11 00:25:34.997881 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 11 00:25:34.997892 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 11 00:25:34.997902 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 11 00:25:34.997913 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 11 00:25:34.997924 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 11 00:25:34.997938 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 11 00:25:34.997948 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 11 00:25:34.997968 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 11 00:25:34.997982 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 11 00:25:34.997993 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 11 00:25:34.998003 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 11 00:25:34.998014 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 11 00:25:34.998025 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 11 00:25:34.998035 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 11 00:25:34.998046 kernel: iommu: Default domain type: Translated Jul 11 00:25:34.998056 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 11 00:25:34.998067 kernel: PCI: Using ACPI for IRQ routing Jul 11 00:25:34.998081 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 11 00:25:34.998091 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 11 00:25:34.998102 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jul 11 00:25:34.998261 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 11 00:25:34.998416 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 11 00:25:34.998569 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 11 00:25:34.998583 kernel: vgaarb: loaded Jul 11 00:25:34.998594 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 11 00:25:34.998664 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 11 00:25:34.998675 kernel: clocksource: Switched to clocksource kvm-clock Jul 11 00:25:34.998686 kernel: VFS: Disk quotas dquot_6.6.0 Jul 11 00:25:34.998697 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 11 00:25:34.998708 kernel: pnp: PnP ACPI init Jul 11 00:25:34.998888 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 11 00:25:34.998905 kernel: pnp: PnP ACPI: found 6 devices Jul 11 00:25:34.998917 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 11 00:25:34.998932 kernel: NET: Registered PF_INET protocol family Jul 11 00:25:34.998943 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 11 00:25:34.998955 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 11 00:25:34.998975 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 11 00:25:34.998986 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 11 00:25:34.998997 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 11 00:25:34.999007 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 11 00:25:34.999018 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 11 00:25:34.999028 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 11 00:25:34.999042 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 11 00:25:34.999053 kernel: NET: Registered PF_XDP protocol family Jul 11 00:25:34.999197 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 11 00:25:34.999339 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 11 00:25:34.999481 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 11 00:25:34.999637 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jul 11 00:25:34.999779 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
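The bracketed [vendor:device] pairs in the PCI enumeration above identify the emulated hardware: 0x1af4 is the Red Hat/virtio vendor ID and 0x8086 is Intel. A sketch decoding the IDs seen in this log (the lookup table is hand-assembled for illustration, not a complete database):

    import re

    IDS = {
        "8086:29c0": "Intel Q35 host bridge",
        "1234:1111": "QEMU standard VGA",
        "1af4:1005": "virtio-rng",
        "1af4:1001": "virtio-blk (the /dev/vda disk below)",
        "1af4:1000": "virtio-net",
        "8086:2918": "Intel ICH9 LPC bridge",
        "8086:2922": "Intel ICH9 AHCI (the six SATA ports below)",
        "8086:2930": "Intel ICH9 SMBus",
    }

    line = "pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000"
    m = re.search(r"pci (\S+): \[([0-9a-f]{4}:[0-9a-f]{4})\]", line)
    if m:
        bdf, ids = m.groups()
        print(bdf, "->", IDS.get(ids, "unknown"))  # 0000:00:03.0 -> virtio-blk ...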
Jul 11 00:25:34.999924 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jul 11 00:25:34.999944 kernel: PCI: CLS 0 bytes, default 64 Jul 11 00:25:34.999955 kernel: Initialise system trusted keyrings Jul 11 00:25:34.999977 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 11 00:25:34.999988 kernel: Key type asymmetric registered Jul 11 00:25:34.999999 kernel: Asymmetric key parser 'x509' registered Jul 11 00:25:35.000010 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 11 00:25:35.000021 kernel: io scheduler mq-deadline registered Jul 11 00:25:35.000031 kernel: io scheduler kyber registered Jul 11 00:25:35.000042 kernel: io scheduler bfq registered Jul 11 00:25:35.000053 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 11 00:25:35.000068 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 11 00:25:35.000079 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 11 00:25:35.000090 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 11 00:25:35.000101 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 11 00:25:35.000112 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 11 00:25:35.000122 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 11 00:25:35.000133 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 11 00:25:35.000144 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 11 00:25:35.000332 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 11 00:25:35.000353 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 11 00:25:35.000505 kernel: rtc_cmos 00:04: registered as rtc0 Jul 11 00:25:35.000684 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T00:25:34 UTC (1752193534) Jul 11 00:25:35.000835 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 11 00:25:35.000850 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 11 00:25:35.000861 kernel: NET: Registered PF_INET6 protocol family Jul 11 00:25:35.000872 kernel: Segment Routing with IPv6 Jul 11 00:25:35.000887 kernel: In-situ OAM (IOAM) with IPv6 Jul 11 00:25:35.000898 kernel: NET: Registered PF_PACKET protocol family Jul 11 00:25:35.000909 kernel: Key type dns_resolver registered Jul 11 00:25:35.000920 kernel: IPI shorthand broadcast: enabled Jul 11 00:25:35.000931 kernel: sched_clock: Marking stable (1064003767, 152787028)->(1248444349, -31653554) Jul 11 00:25:35.000941 kernel: registered taskstats version 1 Jul 11 00:25:35.000952 kernel: Loading compiled-in X.509 certificates Jul 11 00:25:35.000973 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 5956f0842928c96096c398e9db55919cd236a39f' Jul 11 00:25:35.000984 kernel: Key type .fscrypt registered Jul 11 00:25:35.000995 kernel: Key type fscrypt-provisioning registered Jul 11 00:25:35.001009 kernel: ima: No TPM chip found, activating TPM-bypass! 
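The rtc_cmos line above prints the same instant in two encodings, ISO-8601 and Unix seconds, and they round-trip exactly; a one-line check:

    from datetime import datetime, timezone

    epoch = 1752193534  # from "rtc_cmos 00:04: setting system clock" above
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2025-07-11T00:25:34+00:00, as logged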
Jul 11 00:25:35.001020 kernel: ima: Allocated hash algorithm: sha1 Jul 11 00:25:35.001031 kernel: ima: No architecture policies found Jul 11 00:25:35.001042 kernel: clk: Disabling unused clocks Jul 11 00:25:35.001053 kernel: Freeing unused kernel image (initmem) memory: 42872K Jul 11 00:25:35.001063 kernel: Write protecting the kernel read-only data: 36864k Jul 11 00:25:35.001075 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Jul 11 00:25:35.001085 kernel: Run /init as init process Jul 11 00:25:35.001099 kernel: with arguments: Jul 11 00:25:35.001110 kernel: /init Jul 11 00:25:35.001120 kernel: with environment: Jul 11 00:25:35.001131 kernel: HOME=/ Jul 11 00:25:35.001142 kernel: TERM=linux Jul 11 00:25:35.001153 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 11 00:25:35.001166 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 11 00:25:35.001180 systemd[1]: Detected virtualization kvm. Jul 11 00:25:35.001195 systemd[1]: Detected architecture x86-64. Jul 11 00:25:35.001206 systemd[1]: Running in initrd. Jul 11 00:25:35.001217 systemd[1]: No hostname configured, using default hostname. Jul 11 00:25:35.001228 systemd[1]: Hostname set to <localhost>. Jul 11 00:25:35.001240 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:25:35.001251 systemd[1]: Queued start job for default target initrd.target. Jul 11 00:25:35.001263 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:25:35.001274 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:25:35.001290 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 11 00:25:35.001303 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:25:35.001329 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 11 00:25:35.001344 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 11 00:25:35.001365 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 11 00:25:35.001386 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 11 00:25:35.001398 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:25:35.001409 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:25:35.001420 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:25:35.001432 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:25:35.001443 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:25:35.001455 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:25:35.001466 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:25:35.001481 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:25:35.001493 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 11 00:25:35.001505 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
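The long systemd banner above encodes compile-time features as +NAME (built in) and -NAME (omitted). A tiny parser over a truncated copy of the banner from this log:

    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
              "-GNUTLS +OPENSSL -ACL +BLKID +CURL")  # truncated for brevity
    enabled = {f[1:] for f in banner.split() if f.startswith("+")}
    disabled = {f[1:] for f in banner.split() if f.startswith("-")}
    print(sorted(disabled))  # -> ['ACL', 'APPARMOR', 'GNUTLS']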
Jul 11 00:25:35.001517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:25:35.001529 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:25:35.001541 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:25:35.001553 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:25:35.001565 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 11 00:25:35.001576 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:25:35.001591 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 11 00:25:35.001619 systemd[1]: Starting systemd-fsck-usr.service... Jul 11 00:25:35.001630 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:25:35.001643 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:25:35.001654 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:25:35.001666 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 11 00:25:35.001678 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:25:35.001689 systemd[1]: Finished systemd-fsck-usr.service. Jul 11 00:25:35.001706 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 00:25:35.001746 systemd-journald[192]: Collecting audit messages is disabled. Jul 11 00:25:35.001777 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:25:35.001789 systemd-journald[192]: Journal started Jul 11 00:25:35.001819 systemd-journald[192]: Runtime Journal (/run/log/journal/e6a1f9fbf18744139526c2b13c09176c) is 6.0M, max 48.4M, 42.3M free. Jul 11 00:25:34.987434 systemd-modules-load[193]: Inserted module 'overlay' Jul 11 00:25:35.026773 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 11 00:25:35.026805 kernel: Bridge firewalling registered Jul 11 00:25:35.017424 systemd-modules-load[193]: Inserted module 'br_netfilter' Jul 11 00:25:35.030185 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:25:35.030861 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:25:35.033768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:25:35.052812 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:25:35.056667 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:25:35.059982 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:25:35.061633 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:25:35.078009 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:25:35.084258 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:25:35.114266 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:25:35.125774 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 11 00:25:35.128188 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 11 00:25:35.129652 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:25:35.140890 dracut-cmdline[225]: dracut-dracut-053 Jul 11 00:25:35.147117 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1 Jul 11 00:25:35.183028 systemd-resolved[228]: Positive Trust Anchors: Jul 11 00:25:35.183048 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:25:35.183082 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:25:35.186039 systemd-resolved[228]: Defaulting to hostname 'linux'. Jul 11 00:25:35.187388 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:25:35.194649 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:25:35.250670 kernel: SCSI subsystem initialized Jul 11 00:25:35.260637 kernel: Loading iSCSI transport class v2.0-870. Jul 11 00:25:35.271648 kernel: iscsi: registered transport (tcp) Jul 11 00:25:35.299676 kernel: iscsi: registered transport (qla4xxx) Jul 11 00:25:35.299765 kernel: QLogic iSCSI HBA Driver Jul 11 00:25:35.366030 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 11 00:25:35.378836 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 11 00:25:35.407651 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 11 00:25:35.407747 kernel: device-mapper: uevent: version 1.0.3 Jul 11 00:25:35.409397 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 11 00:25:35.454655 kernel: raid6: avx2x4 gen() 29957 MB/s Jul 11 00:25:35.471652 kernel: raid6: avx2x2 gen() 29542 MB/s Jul 11 00:25:35.488749 kernel: raid6: avx2x1 gen() 23422 MB/s Jul 11 00:25:35.488799 kernel: raid6: using algorithm avx2x4 gen() 29957 MB/s Jul 11 00:25:35.506752 kernel: raid6: .... xor() 6958 MB/s, rmw enabled Jul 11 00:25:35.506803 kernel: raid6: using avx2x2 recovery algorithm Jul 11 00:25:35.527633 kernel: xor: automatically using best checksumming function avx Jul 11 00:25:35.695674 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 11 00:25:35.712326 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:25:35.722953 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:25:35.737020 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jul 11 00:25:35.742182 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
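The dracut-cmdline lines above echo the effective kernel command line; dracut prepends rd.driver.pre=btrfs plus rootflags=rw mount.usrflags=ro ahead of the bootloader-provided arguments, which is why those two appear twice. A minimal sketch of turning such a line into key/value pairs, as initrd tooling conceptually does (the parser is illustrative; real dracut parsing handles more cases):

    def parse_cmdline(cmdline: str) -> dict:
        """Map karg tokens to values; bare switches map to True."""
        args = {}
        for tok in cmdline.split():
            key, _, val = tok.partition("=")
            args[key] = val if val else True  # for duplicates, last one wins
        return args

    args = parse_cmdline(open("/proc/cmdline").read())
    print(args.get("root"), args.get("verity.usrhash"))
    # on this machine: LABEL=ROOT and the verity root hash logged above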
Jul 11 00:25:35.766953 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 11 00:25:35.788563 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Jul 11 00:25:35.832637 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:25:35.845935 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:25:35.937020 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:25:35.945835 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 11 00:25:35.959464 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 11 00:25:35.965506 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:25:35.976211 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:25:35.977660 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:25:35.986802 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 11 00:25:35.987074 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 11 00:25:35.992786 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 11 00:25:35.992814 kernel: GPT:9289727 != 19775487 Jul 11 00:25:35.992826 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 11 00:25:35.992836 kernel: GPT:9289727 != 19775487 Jul 11 00:25:35.992846 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 11 00:25:35.992864 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:25:35.989989 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 11 00:25:35.998227 kernel: cryptd: max_cpu_qlen set to 1000 Jul 11 00:25:36.005641 kernel: libata version 3.00 loaded. Jul 11 00:25:36.029920 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:25:36.033395 kernel: AVX2 version of gcm_enc/dec engaged. Jul 11 00:25:36.033449 kernel: AES CTR mode by8 optimization enabled Jul 11 00:25:36.035795 kernel: ahci 0000:00:1f.2: version 3.0 Jul 11 00:25:36.036156 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 11 00:25:36.040016 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 11 00:25:36.040273 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 11 00:25:36.047760 kernel: scsi host0: ahci Jul 11 00:25:36.051636 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (475) Jul 11 00:25:36.081686 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
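The GPT complaints above are arithmetic, not corruption: a valid backup GPT header must sit in the disk's last LBA, and the kernel prints both the expected and the found location. A worked check against the numbers in this log (the "original image size" reading is an inference, typical of images grown onto a larger disk at first boot):

    sectors = 19775488            # virtio_blk: [vda] 19775488 512-byte blocks
    expected_alt = sectors - 1    # 19775487, where the kernel wants the header
    found_alt = 9289727           # where this image actually has it
    print(expected_alt != found_alt)              # True -> the warning above
    print((found_alt + 1) * 512 / 2**30, "GiB")   # ~4.43 GiB original image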
Jul 11 00:25:36.098826 kernel: BTRFS: device fsid 54fb9359-b495-4b0c-b313-b0e2955e4a38 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (463) Jul 11 00:25:36.098856 kernel: scsi host1: ahci Jul 11 00:25:36.099152 kernel: scsi host2: ahci Jul 11 00:25:36.100537 kernel: scsi host3: ahci Jul 11 00:25:36.101646 kernel: scsi host4: ahci Jul 11 00:25:36.102793 kernel: scsi host5: ahci Jul 11 00:25:36.105412 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jul 11 00:25:36.105451 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jul 11 00:25:36.105463 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jul 11 00:25:36.105474 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jul 11 00:25:36.107010 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jul 11 00:25:36.107039 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jul 11 00:25:36.110352 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 00:25:36.126381 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 11 00:25:36.138637 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 11 00:25:36.139887 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 11 00:25:36.156851 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 11 00:25:36.158035 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:25:36.158108 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:25:36.160661 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:25:36.162908 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:25:36.163011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:25:36.165307 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:25:36.184865 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:25:36.269627 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:25:36.299950 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:25:36.319585 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:25:36.420637 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 11 00:25:36.420708 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 11 00:25:36.420721 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 11 00:25:36.421661 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 11 00:25:36.422637 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 11 00:25:36.423648 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 11 00:25:36.423683 kernel: ata3.00: applying bridge limits Jul 11 00:25:36.424637 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 11 00:25:36.425647 kernel: ata3.00: configured for UDMA/100 Jul 11 00:25:36.451638 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 11 00:25:36.507935 disk-uuid[551]: Primary Header is updated. 
Jul 11 00:25:36.507935 disk-uuid[551]: Secondary Entries is updated. Jul 11 00:25:36.507935 disk-uuid[551]: Secondary Header is updated. Jul 11 00:25:36.515627 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 11 00:25:36.515901 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 11 00:25:36.521631 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:25:36.528627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:25:36.533626 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 11 00:25:37.537634 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:25:37.537693 disk-uuid[577]: The operation has completed successfully. Jul 11 00:25:37.574123 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 11 00:25:37.574271 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 11 00:25:37.592888 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 11 00:25:37.596726 sh[592]: Success Jul 11 00:25:37.619669 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 11 00:25:37.656664 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 11 00:25:37.672536 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 11 00:25:37.678475 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 11 00:25:37.691164 kernel: BTRFS info (device dm-0): first mount of filesystem 54fb9359-b495-4b0c-b313-b0e2955e4a38 Jul 11 00:25:37.691233 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:25:37.691250 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 11 00:25:37.692145 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 11 00:25:37.692853 kernel: BTRFS info (device dm-0): using free space tree Jul 11 00:25:37.699039 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 11 00:25:37.701563 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 11 00:25:37.717852 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 11 00:25:37.720692 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 11 00:25:37.731637 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:25:37.731685 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:25:37.731697 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:25:37.746642 kernel: BTRFS info (device vda6): auto enabling async discard Jul 11 00:25:37.757298 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 11 00:25:37.759214 kernel: BTRFS info (device vda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:25:37.809703 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 11 00:25:37.816946 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 11 00:25:37.861095 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:25:37.928878 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
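verity-setup.service above assembles /dev/mapper/usr, whose integrity is anchored by the verity.usrhash root hash on the kernel command line; the "sha256 using implementation sha256-ni" line names the hardware-accelerated hash chosen for it. A deliberately simplified, leaf-level-only illustration of the idea (real dm-verity builds a salted Merkle tree over the whole device; veritysetup is the real tool):

    import hashlib

    def leaf_hashes(path: str, block_size: int = 4096, salt: bytes = b""):
        """Hash fixed-size data blocks the way a verity leaf level would."""
        with open(path, "rb") as f:
            while block := f.read(block_size):
                block = block.ljust(block_size, b"\0")  # pad the tail block
                yield hashlib.sha256(salt + block).hexdigest()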
Jul 11 00:25:37.957166 systemd-networkd[773]: lo: Link UP Jul 11 00:25:37.957182 systemd-networkd[773]: lo: Gained carrier Jul 11 00:25:37.959522 systemd-networkd[773]: Enumeration completed Jul 11 00:25:37.959928 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 00:25:37.960183 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:25:37.960188 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:25:37.964374 systemd-networkd[773]: eth0: Link UP Jul 11 00:25:37.964378 systemd-networkd[773]: eth0: Gained carrier Jul 11 00:25:37.965745 ignition[736]: Ignition 2.19.0 Jul 11 00:25:37.964386 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:25:37.965753 ignition[736]: Stage: fetch-offline Jul 11 00:25:37.972436 systemd[1]: Reached target network.target - Network. Jul 11 00:25:37.965797 ignition[736]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:25:37.965807 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:25:37.965960 ignition[736]: parsed url from cmdline: "" Jul 11 00:25:37.965964 ignition[736]: no config URL provided Jul 11 00:25:37.965970 ignition[736]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 00:25:37.965980 ignition[736]: no config at "/usr/lib/ignition/user.ign" Jul 11 00:25:37.966014 ignition[736]: op(1): [started] loading QEMU firmware config module Jul 11 00:25:37.966019 ignition[736]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 11 00:25:37.978173 ignition[736]: op(1): [finished] loading QEMU firmware config module Jul 11 00:25:37.992757 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.159/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:25:38.021479 ignition[736]: parsing config with SHA512: 29b2dc862602efb82b89647ca2a2615f05249b698049f6f0e4a9da4456b6e34adba03e16a892b62c4e2b71bb33512d8df5402aae1bc82984074db3048fc0b4d9 Jul 11 00:25:38.027466 unknown[736]: fetched base config from "system" Jul 11 00:25:38.027483 unknown[736]: fetched user config from "qemu" Jul 11 00:25:38.027940 ignition[736]: fetch-offline: fetch-offline passed Jul 11 00:25:38.028858 systemd-resolved[228]: Detected conflict on linux IN A 10.0.0.159 Jul 11 00:25:38.028016 ignition[736]: Ignition finished successfully Jul 11 00:25:38.028868 systemd-resolved[228]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Jul 11 00:25:38.031882 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:25:38.037022 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 11 00:25:38.045838 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 11 00:25:38.160039 ignition[784]: Ignition 2.19.0 Jul 11 00:25:38.160054 ignition[784]: Stage: kargs Jul 11 00:25:38.160340 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:25:38.160353 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:25:38.161675 ignition[784]: kargs: kargs passed Jul 11 00:25:38.161737 ignition[784]: Ignition finished successfully Jul 11 00:25:38.169923 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 11 00:25:38.181932 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
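Ignition logs the SHA512 of the exact config bytes it parsed ("parsing config with SHA512: 29b2..." above), which makes it easy to confirm offline that a given file is the config the machine actually saw. A sketch, where the user.ign path is hypothetical:

    import hashlib, json

    blob = open("user.ign", "rb").read()      # hypothetical local copy
    print(hashlib.sha512(blob).hexdigest())   # compare with the logged digest
    print(json.loads(blob).get("ignition", {}).get("version"))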
Jul 11 00:25:38.209418 ignition[792]: Ignition 2.19.0 Jul 11 00:25:38.209441 ignition[792]: Stage: disks Jul 11 00:25:38.209688 ignition[792]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:25:38.209702 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:25:38.214903 ignition[792]: disks: disks passed Jul 11 00:25:38.215729 ignition[792]: Ignition finished successfully Jul 11 00:25:38.219410 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 11 00:25:38.220727 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 11 00:25:38.222641 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 11 00:25:38.224009 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:25:38.226027 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:25:38.227020 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:25:38.237829 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 11 00:25:38.253870 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 11 00:25:38.733273 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 11 00:25:38.740725 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 11 00:25:38.883639 kernel: EXT4-fs (vda9): mounted filesystem 66ba5133-8c5a-461b-b2c1-a823c72af79b r/w with ordered data mode. Quota mode: none. Jul 11 00:25:38.884271 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 11 00:25:38.885381 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 11 00:25:38.901715 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 00:25:38.904086 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 11 00:25:38.904458 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 11 00:25:38.904522 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 11 00:25:38.904562 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:25:38.914134 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810) Jul 11 00:25:38.916159 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:25:38.916191 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:25:38.916203 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:25:38.919634 kernel: BTRFS info (device vda6): auto enabling async discard Jul 11 00:25:38.920246 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 11 00:25:38.925144 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 11 00:25:38.936747 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 11 00:25:39.003407 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Jul 11 00:25:39.009232 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Jul 11 00:25:39.014515 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Jul 11 00:25:39.020048 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Jul 11 00:25:39.126775 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
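The systemd-fsck summary above doubles as a capacity report for the ext4 ROOT filesystem (mounted from vda9 a few lines up): used/total counts for inodes and blocks. As percentages:

    inodes_used, inodes_total = 14, 553520
    blocks_used, blocks_total = 52654, 553472
    print(f"inodes: {100 * inodes_used / inodes_total:.3f}% used")  # 0.003%
    print(f"blocks: {100 * blocks_used / blocks_total:.1f}% used")  # 9.5%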
Jul 11 00:25:39.140832 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 11 00:25:39.143351 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 11 00:25:39.151290 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 11 00:25:39.152876 kernel: BTRFS info (device vda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:25:39.215106 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 11 00:25:39.251811 systemd-networkd[773]: eth0: Gained IPv6LL Jul 11 00:25:39.298767 ignition[928]: INFO : Ignition 2.19.0 Jul 11 00:25:39.298767 ignition[928]: INFO : Stage: mount Jul 11 00:25:39.300734 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:25:39.300734 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:25:39.303349 ignition[928]: INFO : mount: mount passed Jul 11 00:25:39.304207 ignition[928]: INFO : Ignition finished successfully Jul 11 00:25:39.307512 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 11 00:25:39.319764 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 11 00:25:39.892768 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 00:25:39.902560 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937) Jul 11 00:25:39.902868 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:25:39.902883 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:25:39.903410 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:25:39.907669 kernel: BTRFS info (device vda6): auto enabling async discard Jul 11 00:25:39.909996 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 11 00:25:39.947007 ignition[954]: INFO : Ignition 2.19.0 Jul 11 00:25:39.947007 ignition[954]: INFO : Stage: files Jul 11 00:25:39.948967 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:25:39.948967 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:25:39.948967 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Jul 11 00:25:39.952522 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 00:25:39.952522 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 00:25:39.955552 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 00:25:39.955552 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 00:25:39.955552 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 00:25:39.953850 unknown[954]: wrote ssh authorized keys file for user: core Jul 11 00:25:39.961694 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 11 00:25:39.961694 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 11 00:25:40.066914 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 11 00:25:40.273980 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 11 00:25:40.273980 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 11 00:25:40.279003 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 00:25:40.279003 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:25:40.279003 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:25:40.279003 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:25:40.279003 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:25:40.279003 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:25:40.279003 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:25:40.279003 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:25:40.279003 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:25:40.279003 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 11 00:25:40.279003 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 11 00:25:40.279003 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 11 00:25:40.279003 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 11 00:25:40.972052 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 11 00:25:41.776725 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 11 00:25:41.776725 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 11 00:25:41.781382 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:25:41.781382 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:25:41.781382 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 11 00:25:41.781382 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 11 00:25:41.781382 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:25:41.781382 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:25:41.781382 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 11 00:25:41.781382 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 00:25:41.807599 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:25:41.813767 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:25:41.815989 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 00:25:41.815989 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 11 00:25:41.815989 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 11 00:25:41.815989 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:25:41.815989 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:25:41.815989 ignition[954]: INFO : files: files passed Jul 11 00:25:41.815989 ignition[954]: INFO : Ignition finished successfully Jul 11 00:25:41.817087 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 11 00:25:41.827868 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 11 00:25:41.830128 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 11 00:25:41.832851 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jul 11 00:25:41.832975 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 11 00:25:41.841747 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Jul 11 00:25:41.844438 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:25:41.844438 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:25:41.849172 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:25:41.847557 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:25:41.849811 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 11 00:25:41.858772 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 11 00:25:41.888537 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 11 00:25:41.888742 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 11 00:25:41.891385 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 11 00:25:41.892364 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 11 00:25:41.896099 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 11 00:25:41.897919 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 11 00:25:41.922768 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:25:41.935779 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 11 00:25:41.945785 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:25:41.947140 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:25:41.947284 systemd[1]: Stopped target timers.target - Timer Units. Jul 11 00:25:41.947635 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 11 00:25:41.947786 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:25:41.948436 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 11 00:25:41.994128 ignition[1008]: INFO : Ignition 2.19.0 Jul 11 00:25:41.994128 ignition[1008]: INFO : Stage: umount Jul 11 00:25:41.949003 systemd[1]: Stopped target basic.target - Basic System. Jul 11 00:25:41.949316 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 11 00:25:41.949695 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:25:41.950156 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 11 00:25:41.950464 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 11 00:25:41.950994 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:25:41.951280 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 11 00:25:41.951674 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 11 00:25:41.952020 systemd[1]: Stopped target swap.target - Swaps. Jul 11 00:25:41.952306 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jul 11 00:25:42.011909 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:25:42.011909 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:25:41.952478 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:25:42.015583 ignition[1008]: INFO : umount: umount passed Jul 11 00:25:42.015583 ignition[1008]: INFO : Ignition finished successfully Jul 11 00:25:41.953601 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:25:41.953990 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:25:41.954347 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 11 00:25:41.954534 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:25:41.954925 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:25:41.955040 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 11 00:25:41.955529 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:25:41.955690 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:25:41.956293 systemd[1]: Stopped target paths.target - Path Units. Jul 11 00:25:41.957013 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:25:41.960729 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:25:41.961366 systemd[1]: Stopped target slices.target - Slice Units. Jul 11 00:25:41.962227 systemd[1]: Stopped target sockets.target - Socket Units. Jul 11 00:25:41.963035 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:25:41.963195 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:25:41.963717 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:25:41.963937 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:25:41.964501 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:25:41.964738 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:25:41.965294 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:25:41.965453 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 11 00:25:41.967329 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 11 00:25:41.967965 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:25:41.968158 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:25:41.969904 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 11 00:25:41.970203 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:25:41.970372 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:25:41.971145 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:25:41.971309 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:25:41.977452 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:25:41.977660 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 11 00:25:42.005086 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:25:42.018756 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jul 11 00:25:42.018958 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 11 00:25:42.036735 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:25:42.042665 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 11 00:25:42.050180 systemd[1]: Stopped target network.target - Network. Jul 11 00:25:42.062475 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:25:42.062592 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 11 00:25:42.062770 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:25:42.062840 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 11 00:25:42.065778 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:25:42.065845 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 11 00:25:42.068031 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 11 00:25:42.068105 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 11 00:25:42.070356 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:25:42.070417 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 11 00:25:42.072859 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 11 00:25:42.073908 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 11 00:25:42.080525 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:25:42.080849 systemd-networkd[773]: eth0: DHCPv6 lease lost Jul 11 00:25:42.080909 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 11 00:25:42.084969 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:25:42.085179 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 11 00:25:42.088693 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:25:42.088760 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:25:42.101885 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 11 00:25:42.103005 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:25:42.103103 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:25:42.105527 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:25:42.105593 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:25:42.108155 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:25:42.108211 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 11 00:25:42.111066 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 11 00:25:42.111123 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:25:42.112661 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:25:42.135892 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 00:25:42.136257 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:25:42.139028 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:25:42.139084 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 11 00:25:42.141551 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jul 11 00:25:42.141600 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:25:42.143855 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:25:42.143912 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:25:42.146032 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:25:42.146086 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 11 00:25:42.147952 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:25:42.148004 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:25:42.158945 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 11 00:25:42.160385 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 11 00:25:42.160485 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:25:42.161823 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:25:42.161880 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:25:42.162569 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:25:42.162714 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 11 00:25:42.169082 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:25:42.169245 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 11 00:25:42.172305 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 11 00:25:42.175233 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 11 00:25:42.189962 systemd[1]: Switching root. Jul 11 00:25:42.224452 systemd-journald[192]: Journal stopped Jul 11 00:25:43.591475 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jul 11 00:25:43.591560 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:25:43.591590 kernel: SELinux: policy capability open_perms=1 Jul 11 00:25:43.591631 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:25:43.591648 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:25:43.591663 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:25:43.591678 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:25:43.591695 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:25:43.591713 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:25:43.591732 kernel: audit: type=1403 audit(1752193542.724:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 00:25:43.591785 systemd[1]: Successfully loaded SELinux policy in 43.031ms. Jul 11 00:25:43.591812 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.628ms. Jul 11 00:25:43.591835 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 11 00:25:43.591852 systemd[1]: Detected virtualization kvm. Jul 11 00:25:43.591873 systemd[1]: Detected architecture x86-64. Jul 11 00:25:43.591889 systemd[1]: Detected first boot. Jul 11 00:25:43.591905 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:25:43.591921 zram_generator::config[1054]: No configuration found. 
Jul 11 00:25:43.591937 systemd[1]: Populated /etc with preset unit settings. Jul 11 00:25:43.591954 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 11 00:25:43.591979 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 11 00:25:43.591997 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 11 00:25:43.592014 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 11 00:25:43.592030 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 11 00:25:43.592047 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 11 00:25:43.592064 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 11 00:25:43.592080 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 11 00:25:43.592097 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 11 00:25:43.592113 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 11 00:25:43.592137 systemd[1]: Created slice user.slice - User and Session Slice. Jul 11 00:25:43.592153 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:25:43.592180 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:25:43.592198 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 11 00:25:43.592214 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 11 00:25:43.592231 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 11 00:25:43.592248 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:25:43.592266 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 11 00:25:43.592292 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:25:43.592309 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 11 00:25:43.592325 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 11 00:25:43.592342 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 11 00:25:43.592358 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 11 00:25:43.592374 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:25:43.592390 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:25:43.592406 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:25:43.592429 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:25:43.592444 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 11 00:25:43.592461 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 11 00:25:43.592478 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:25:43.592494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:25:43.592510 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:25:43.592531 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 11 00:25:43.592556 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jul 11 00:25:43.592574 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 11 00:25:43.592599 systemd[1]: Mounting media.mount - External Media Directory... Jul 11 00:25:43.592632 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:25:43.592648 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 11 00:25:43.592665 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 11 00:25:43.592681 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 11 00:25:43.592697 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 00:25:43.592714 systemd[1]: Reached target machines.target - Containers. Jul 11 00:25:43.592730 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 11 00:25:43.592746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:25:43.592780 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:25:43.592797 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 11 00:25:43.592814 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:25:43.592829 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:25:43.592845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:25:43.592862 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 11 00:25:43.592879 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:25:43.592896 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:25:43.592922 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 11 00:25:43.592940 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 11 00:25:43.592961 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 11 00:25:43.592983 systemd[1]: Stopped systemd-fsck-usr.service. Jul 11 00:25:43.593000 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:25:43.593015 kernel: loop: module loaded Jul 11 00:25:43.593030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:25:43.593046 kernel: fuse: init (API version 7.39) Jul 11 00:25:43.593062 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 00:25:43.593084 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 11 00:25:43.593100 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:25:43.593141 systemd-journald[1117]: Collecting audit messages is disabled. Jul 11 00:25:43.593168 systemd[1]: verity-setup.service: Deactivated successfully. Jul 11 00:25:43.593185 systemd[1]: Stopped verity-setup.service. Jul 11 00:25:43.593202 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 11 00:25:43.593218 systemd-journald[1117]: Journal started Jul 11 00:25:43.593256 systemd-journald[1117]: Runtime Journal (/run/log/journal/e6a1f9fbf18744139526c2b13c09176c) is 6.0M, max 48.4M, 42.3M free. Jul 11 00:25:43.303973 systemd[1]: Queued start job for default target multi-user.target. Jul 11 00:25:43.325779 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 11 00:25:43.326279 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 11 00:25:43.595776 kernel: ACPI: bus type drm_connector registered Jul 11 00:25:43.599415 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:25:43.600307 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 11 00:25:43.601565 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 11 00:25:43.602882 systemd[1]: Mounted media.mount - External Media Directory. Jul 11 00:25:43.604110 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 11 00:25:43.605365 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 11 00:25:43.606643 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 11 00:25:43.608070 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:25:43.609758 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:25:43.609961 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 11 00:25:43.611511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:25:43.611757 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:25:43.613245 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:25:43.613555 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:25:43.615030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:25:43.615238 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:25:43.616882 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:25:43.617071 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 11 00:25:43.618507 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:25:43.618801 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:25:43.620245 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:25:43.621908 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:25:43.624017 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 11 00:25:43.640241 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 00:25:43.703781 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 11 00:25:43.706952 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 11 00:25:43.708353 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 00:25:43.708400 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:25:43.711050 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 11 00:25:43.714228 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jul 11 00:25:43.717132 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 11 00:25:43.718373 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:25:43.720129 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 11 00:25:43.722532 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 11 00:25:43.723976 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:25:43.725343 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 11 00:25:43.726708 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:25:43.728580 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:25:43.735813 systemd-journald[1117]: Time spent on flushing to /var/log/journal/e6a1f9fbf18744139526c2b13c09176c is 20.491ms for 948 entries. Jul 11 00:25:43.735813 systemd-journald[1117]: System Journal (/var/log/journal/e6a1f9fbf18744139526c2b13c09176c) is 8.0M, max 195.6M, 187.6M free. Jul 11 00:25:44.331835 systemd-journald[1117]: Received client request to flush runtime journal. Jul 11 00:25:44.331901 kernel: loop0: detected capacity change from 0 to 142488 Jul 11 00:25:44.331933 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:25:44.331989 kernel: loop1: detected capacity change from 0 to 229808 Jul 11 00:25:44.332020 kernel: loop2: detected capacity change from 0 to 140768 Jul 11 00:25:43.736782 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 11 00:25:43.740099 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:25:43.742025 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 11 00:25:43.743483 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 11 00:25:43.748091 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 11 00:25:43.782894 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 11 00:25:43.821088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:25:43.839212 udevadm[1159]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 11 00:25:44.218854 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 11 00:25:44.220682 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 11 00:25:44.258029 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 11 00:25:44.334085 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 11 00:25:44.530685 kernel: loop3: detected capacity change from 0 to 142488 Jul 11 00:25:44.549673 kernel: loop4: detected capacity change from 0 to 229808 Jul 11 00:25:44.656650 kernel: loop5: detected capacity change from 0 to 140768 Jul 11 00:25:44.665391 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 11 00:25:44.666240 (sd-merge)[1184]: Merged extensions into '/usr'. 
Jul 11 00:25:44.672091 systemd[1]: Reloading requested from client PID 1146 ('systemd-sysext') (unit systemd-sysext.service)... Jul 11 00:25:44.672109 systemd[1]: Reloading... Jul 11 00:25:44.805978 zram_generator::config[1208]: No configuration found. Jul 11 00:25:44.982785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:25:45.041898 systemd[1]: Reloading finished in 369 ms. Jul 11 00:25:45.131345 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 11 00:25:45.133140 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 11 00:25:45.155019 systemd[1]: Starting ensure-sysext.service... Jul 11 00:25:45.215623 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 11 00:25:45.289817 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)... Jul 11 00:25:45.289842 systemd[1]: Reloading... Jul 11 00:25:45.370979 ldconfig[1141]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:25:45.409989 zram_generator::config[1278]: No configuration found. Jul 11 00:25:45.572948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:25:45.645494 systemd[1]: Reloading finished in 355 ms. Jul 11 00:25:45.670121 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:25:45.671788 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 11 00:25:45.696294 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 11 00:25:45.704413 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:25:45.706928 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:25:45.711545 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:25:45.711777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:25:45.713023 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:25:45.846736 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:25:45.893958 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:25:45.896299 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:25:45.896548 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:25:45.900467 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:25:45.900767 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:25:45.902584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:25:45.902823 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:25:45.933055 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 11 00:25:45.933352 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:25:45.942485 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:25:45.942909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:25:45.949542 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 00:25:45.950115 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 11 00:25:45.952187 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Jul 11 00:25:45.952208 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Jul 11 00:25:45.952356 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 00:25:45.952782 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Jul 11 00:25:45.952899 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Jul 11 00:25:45.953024 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:25:45.957886 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:25:45.957895 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:25:45.957906 systemd-tmpfiles[1319]: Skipping /boot Jul 11 00:25:45.960258 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:25:45.971342 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:25:45.971361 systemd-tmpfiles[1319]: Skipping /boot Jul 11 00:25:45.979480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:25:45.979796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:25:45.979943 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:25:45.981115 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 11 00:25:46.004240 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:25:46.005384 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:25:46.006515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:25:46.006790 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:25:46.007634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:25:46.007843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:25:46.008620 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:25:46.008832 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:25:46.013508 systemd[1]: Finished ensure-sysext.service. Jul 11 00:25:46.014962 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:25:46.015147 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:25:46.044253 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:25:46.048337 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jul 11 00:25:46.053422 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 11 00:25:46.054882 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:25:46.054965 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:25:46.059878 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:25:46.064788 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 11 00:25:46.069836 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 11 00:25:46.079550 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 11 00:25:46.090987 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 11 00:25:46.094463 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 11 00:25:46.104835 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:25:46.110647 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 11 00:25:46.124334 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 11 00:25:46.126622 augenrules[1368]: No rules Jul 11 00:25:46.129148 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:25:46.135353 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 11 00:25:46.138815 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 11 00:25:46.158584 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 11 00:25:46.160726 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:25:46.161323 systemd-udevd[1360]: Using default interface naming scheme 'v255'. Jul 11 00:25:46.183890 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:25:46.200214 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 00:25:46.243528 systemd-resolved[1350]: Positive Trust Anchors: Jul 11 00:25:46.243554 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:25:46.243586 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:25:46.254080 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 11 00:25:46.258841 systemd-resolved[1350]: Defaulting to hostname 'linux'. 
Jul 11 00:25:46.259628 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1396) Jul 11 00:25:46.261676 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 11 00:25:46.263966 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:25:46.265756 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:25:46.268660 systemd[1]: Reached target time-set.target - System Time Set. Jul 11 00:25:46.286824 systemd-networkd[1391]: lo: Link UP Jul 11 00:25:46.286838 systemd-networkd[1391]: lo: Gained carrier Jul 11 00:25:46.288541 systemd-networkd[1391]: Enumeration completed Jul 11 00:25:46.289098 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:25:46.289109 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:25:46.289799 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 00:25:46.293321 systemd-networkd[1391]: eth0: Link UP Jul 11 00:25:46.293385 systemd-networkd[1391]: eth0: Gained carrier Jul 11 00:25:46.293439 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:25:46.320282 systemd[1]: Reached target network.target - Network. Jul 11 00:25:46.324919 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 11 00:25:46.334039 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 11 00:25:46.337639 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.159/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:25:46.341155 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:25:46.341334 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection. Jul 11 00:25:46.342289 systemd-timesyncd[1351]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 00:25:46.342352 systemd-timesyncd[1351]: Initial clock synchronization to Fri 2025-07-11 00:25:46.609935 UTC. Jul 11 00:25:46.347897 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 11 00:25:46.348163 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 11 00:25:46.348353 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 11 00:25:46.350626 kernel: ACPI: button: Power Button [PWRF] Jul 11 00:25:46.358646 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 11 00:25:46.363044 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 00:25:46.371898 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 11 00:25:46.399251 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 11 00:25:46.469406 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 11 00:25:46.469627 kernel: mousedev: PS/2 mouse device common for all mice Jul 11 00:25:46.559139 kernel: kvm_amd: TSC scaling supported Jul 11 00:25:46.559256 kernel: kvm_amd: Nested Virtualization enabled Jul 11 00:25:46.559273 kernel: kvm_amd: Nested Paging enabled Jul 11 00:25:46.559290 kernel: kvm_amd: LBR virtualization supported Jul 11 00:25:46.559781 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 11 00:25:46.560994 kernel: kvm_amd: Virtual GIF supported Jul 11 00:25:46.584632 kernel: EDAC MC: Ver: 3.0.0 Jul 11 00:25:46.626759 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 11 00:25:46.642949 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 11 00:25:46.644903 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:25:46.652778 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:25:46.694146 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 11 00:25:46.695739 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:25:46.696844 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:25:46.698011 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 11 00:25:46.699392 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 11 00:25:46.701052 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 11 00:25:46.702395 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 11 00:25:46.703833 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 11 00:25:46.705246 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 00:25:46.705280 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:25:46.706314 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:25:46.708637 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 11 00:25:46.712142 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 11 00:25:46.719573 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 11 00:25:46.722586 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 11 00:25:46.724553 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 11 00:25:46.726054 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:25:46.727115 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:25:46.728221 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:25:46.728267 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:25:46.738750 systemd[1]: Starting containerd.service - containerd container runtime... Jul 11 00:25:46.741577 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 11 00:25:46.743892 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 11 00:25:46.746461 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jul 11 00:25:46.749826 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 11 00:25:46.751284 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 11 00:25:46.754856 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 11 00:25:46.756182 jq[1433]: false Jul 11 00:25:46.762816 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 11 00:25:46.768937 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 11 00:25:46.772847 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 11 00:25:46.780893 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 00:25:46.782889 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:25:46.784239 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:25:46.785456 systemd[1]: Starting update-engine.service - Update Engine... Jul 11 00:25:46.786751 extend-filesystems[1434]: Found loop3 Jul 11 00:25:46.790692 extend-filesystems[1434]: Found loop4 Jul 11 00:25:46.790692 extend-filesystems[1434]: Found loop5 Jul 11 00:25:46.790692 extend-filesystems[1434]: Found sr0 Jul 11 00:25:46.790692 extend-filesystems[1434]: Found vda Jul 11 00:25:46.790692 extend-filesystems[1434]: Found vda1 Jul 11 00:25:46.790692 extend-filesystems[1434]: Found vda2 Jul 11 00:25:46.790692 extend-filesystems[1434]: Found vda3 Jul 11 00:25:46.790692 extend-filesystems[1434]: Found usr Jul 11 00:25:46.790692 extend-filesystems[1434]: Found vda4 Jul 11 00:25:46.790692 extend-filesystems[1434]: Found vda6 Jul 11 00:25:46.790692 extend-filesystems[1434]: Found vda7 Jul 11 00:25:46.790692 extend-filesystems[1434]: Found vda9 Jul 11 00:25:46.790692 extend-filesystems[1434]: Checking size of /dev/vda9 Jul 11 00:25:46.817746 extend-filesystems[1434]: Resized partition /dev/vda9 Jul 11 00:25:46.821598 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:25:46.790925 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 11 00:25:46.821785 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024) Jul 11 00:25:46.795241 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 11 00:25:46.824968 jq[1450]: true Jul 11 00:25:46.799983 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:25:46.800298 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 00:25:46.800823 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:25:46.801447 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 11 00:25:46.803888 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:25:46.804155 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 11 00:25:46.830794 dbus-daemon[1432]: [system] SELinux support is enabled Jul 11 00:25:46.836211 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1399) Jul 11 00:25:46.834771 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 11 00:25:46.843006 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 00:25:46.844144 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:25:46.844288 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 00:25:46.847785 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:25:46.857854 jq[1463]: true Jul 11 00:25:46.862372 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:25:46.847813 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 00:25:46.885798 tar[1455]: linux-amd64/LICENSE Jul 11 00:25:46.930787 update_engine[1446]: I20250711 00:25:46.862731 1446 main.cc:92] Flatcar Update Engine starting Jul 11 00:25:46.930787 update_engine[1446]: I20250711 00:25:46.877415 1446 update_check_scheduler.cc:74] Next update check in 11m55s Jul 11 00:25:46.877648 systemd[1]: Started update-engine.service - Update Engine. Jul 11 00:25:46.931178 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:25:46.931178 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:25:46.931178 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 00:25:46.944163 tar[1455]: linux-amd64/helm Jul 11 00:25:46.930923 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:25:46.944292 extend-filesystems[1434]: Resized filesystem in /dev/vda9 Jul 11 00:25:46.931166 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 00:25:46.946282 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Jul 11 00:25:46.946307 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 11 00:25:46.951930 systemd-logind[1445]: New seat seat0. Jul 11 00:25:46.961926 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 00:25:46.964507 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:25:47.022907 systemd[1]: Started systemd-logind.service - User Login Management. Jul 11 00:25:47.035722 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:25:47.041252 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 00:25:47.046564 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 00:25:47.063398 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 00:25:47.076285 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 00:25:47.094168 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:25:47.154363 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:25:47.154897 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 00:25:47.165933 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 00:25:47.193425 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jul 11 00:25:47.269461 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 00:25:47.273053 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 11 00:25:47.274980 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 00:25:47.446965 systemd-networkd[1391]: eth0: Gained IPv6LL Jul 11 00:25:47.453988 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:25:47.454557 containerd[1465]: time="2025-07-11T00:25:47.451594771Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 11 00:25:47.460028 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:25:47.470227 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:25:47.476912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:25:47.480050 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:25:47.488755 containerd[1465]: time="2025-07-11T00:25:47.488703444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:25:47.491306 containerd[1465]: time="2025-07-11T00:25:47.491239611Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:25:47.491681 containerd[1465]: time="2025-07-11T00:25:47.491483482Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 11 00:25:47.491681 containerd[1465]: time="2025-07-11T00:25:47.491523687Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:25:47.493552 containerd[1465]: time="2025-07-11T00:25:47.491913673Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 11 00:25:47.493552 containerd[1465]: time="2025-07-11T00:25:47.491947025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 11 00:25:47.493552 containerd[1465]: time="2025-07-11T00:25:47.492065942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:25:47.493552 containerd[1465]: time="2025-07-11T00:25:47.492086536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:25:47.493552 containerd[1465]: time="2025-07-11T00:25:47.492353516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:25:47.493552 containerd[1465]: time="2025-07-11T00:25:47.492377113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 11 00:25:47.493552 containerd[1465]: time="2025-07-11T00:25:47.492401983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:25:47.493552 containerd[1465]: time="2025-07-11T00:25:47.492416718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 11 00:25:47.493552 containerd[1465]: time="2025-07-11T00:25:47.492572465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:25:47.493552 containerd[1465]: time="2025-07-11T00:25:47.492962451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:25:47.493552 containerd[1465]: time="2025-07-11T00:25:47.493128283Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:25:47.493937 containerd[1465]: time="2025-07-11T00:25:47.493148308Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:25:47.493937 containerd[1465]: time="2025-07-11T00:25:47.493284765Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 11 00:25:47.493937 containerd[1465]: time="2025-07-11T00:25:47.493362441Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:25:47.503911 containerd[1465]: time="2025-07-11T00:25:47.503847748Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:25:47.504105 containerd[1465]: time="2025-07-11T00:25:47.504080623Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:25:47.504512 containerd[1465]: time="2025-07-11T00:25:47.504492043Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 11 00:25:47.504600 containerd[1465]: time="2025-07-11T00:25:47.504586224Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 11 00:25:47.504721 containerd[1465]: time="2025-07-11T00:25:47.504697489Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 11 00:25:47.505419 containerd[1465]: time="2025-07-11T00:25:47.505394103Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:25:47.506024 containerd[1465]: time="2025-07-11T00:25:47.506001009Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 11 00:25:47.506275 containerd[1465]: time="2025-07-11T00:25:47.506250295Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 11 00:25:47.506380 containerd[1465]: time="2025-07-11T00:25:47.506358796Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 11 00:25:47.506461 containerd[1465]: time="2025-07-11T00:25:47.506439796Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 11 00:25:47.506568 containerd[1465]: time="2025-07-11T00:25:47.506547478Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jul 11 00:25:47.506683 containerd[1465]: time="2025-07-11T00:25:47.506660225Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 11 00:25:47.506773 containerd[1465]: time="2025-07-11T00:25:47.506751413Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 11 00:25:47.506867 containerd[1465]: time="2025-07-11T00:25:47.506846506Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 11 00:25:47.506964 containerd[1465]: time="2025-07-11T00:25:47.506933604Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:25:47.507064 containerd[1465]: time="2025-07-11T00:25:47.507043285Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 11 00:25:47.507140 containerd[1465]: time="2025-07-11T00:25:47.507122526Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 11 00:25:47.507214 containerd[1465]: time="2025-07-11T00:25:47.507196940Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 11 00:25:47.507339 containerd[1465]: time="2025-07-11T00:25:47.507319222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507399850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507423540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507440542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507457346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507476936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507494155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507513900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507532402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507552407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507578685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507597903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507636006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507689392Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507732196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.507854 containerd[1465]: time="2025-07-11T00:25:47.507755141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.508277 containerd[1465]: time="2025-07-11T00:25:47.507774409Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 11 00:25:47.508361 containerd[1465]: time="2025-07-11T00:25:47.508338014Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:25:47.508633 containerd[1465]: time="2025-07-11T00:25:47.508523447Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 11 00:25:47.508633 containerd[1465]: time="2025-07-11T00:25:47.508547137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:25:47.508633 containerd[1465]: time="2025-07-11T00:25:47.508564677Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 11 00:25:47.508633 containerd[1465]: time="2025-07-11T00:25:47.508578282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:25:47.508633 containerd[1465]: time="2025-07-11T00:25:47.508595708Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 11 00:25:47.510048 containerd[1465]: time="2025-07-11T00:25:47.508610545Z" level=info msg="NRI interface is disabled by configuration." Jul 11 00:25:47.510048 containerd[1465]: time="2025-07-11T00:25:47.508836462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 11 00:25:47.510137 containerd[1465]: time="2025-07-11T00:25:47.509308060Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:25:47.510137 containerd[1465]: time="2025-07-11T00:25:47.509389744Z" level=info msg="Connect containerd service" Jul 11 00:25:47.510137 containerd[1465]: time="2025-07-11T00:25:47.509437704Z" level=info msg="using legacy CRI server" Jul 11 00:25:47.510137 containerd[1465]: time="2025-07-11T00:25:47.509446380Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:25:47.510137 containerd[1465]: time="2025-07-11T00:25:47.509623249Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:25:47.511166 containerd[1465]: time="2025-07-11T00:25:47.511119750Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:25:47.511382 
containerd[1465]: time="2025-07-11T00:25:47.511271965Z" level=info msg="Start subscribing containerd event" Jul 11 00:25:47.511382 containerd[1465]: time="2025-07-11T00:25:47.511360555Z" level=info msg="Start recovering state" Jul 11 00:25:47.512853 containerd[1465]: time="2025-07-11T00:25:47.511480995Z" level=info msg="Start event monitor" Jul 11 00:25:47.512853 containerd[1465]: time="2025-07-11T00:25:47.511523581Z" level=info msg="Start snapshots syncer" Jul 11 00:25:47.512853 containerd[1465]: time="2025-07-11T00:25:47.511544796Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:25:47.512853 containerd[1465]: time="2025-07-11T00:25:47.511556890Z" level=info msg="Start streaming server" Jul 11 00:25:47.513613 containerd[1465]: time="2025-07-11T00:25:47.513581823Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:25:47.513876 containerd[1465]: time="2025-07-11T00:25:47.513854953Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:25:47.514732 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:25:47.515031 containerd[1465]: time="2025-07-11T00:25:47.514887061Z" level=info msg="containerd successfully booted in 0.092939s" Jul 11 00:25:47.529234 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:25:47.531986 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:25:47.532278 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:25:47.537806 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:25:47.680866 tar[1455]: linux-amd64/README.md Jul 11 00:25:47.703090 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:25:48.219514 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 00:25:48.229972 systemd[1]: Started sshd@0-10.0.0.159:22-10.0.0.1:53346.service - OpenSSH per-connection server daemon (10.0.0.1:53346). Jul 11 00:25:48.313177 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 53346 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:25:48.316335 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:48.326152 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:25:48.341993 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:25:48.376733 systemd-logind[1445]: New session 1 of user core. Jul 11 00:25:48.406609 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 00:25:48.423958 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 00:25:48.429723 (systemd)[1545]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:25:48.589688 systemd[1545]: Queued start job for default target default.target. Jul 11 00:25:48.602265 systemd[1545]: Created slice app.slice - User Application Slice. Jul 11 00:25:48.602292 systemd[1545]: Reached target paths.target - Paths. Jul 11 00:25:48.602306 systemd[1545]: Reached target timers.target - Timers. Jul 11 00:25:48.604335 systemd[1545]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:25:48.624579 systemd[1545]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 00:25:48.624764 systemd[1545]: Reached target sockets.target - Sockets. 
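
containerd's CRI plugin reported "no network config found in /etc/cni/net.d" during init, and the kubelet will later report the node NotReady for the same reason until a CNI add-on installs a config there. A minimal bridge conflist that would satisfy the loader (illustrative name and subnet only; real clusters get this file from their network add-on):

    cat >/etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local",
                    "ranges": [ [ { "subnet": "10.244.0.0/24" } ] ] } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
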
Jul 11 00:25:48.624779 systemd[1545]: Reached target basic.target - Basic System. Jul 11 00:25:48.624820 systemd[1545]: Reached target default.target - Main User Target. Jul 11 00:25:48.624856 systemd[1545]: Startup finished in 182ms. Jul 11 00:25:48.625512 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:25:48.628829 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:25:48.695888 systemd[1]: Started sshd@1-10.0.0.159:22-10.0.0.1:53352.service - OpenSSH per-connection server daemon (10.0.0.1:53352). Jul 11 00:25:48.843690 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 53352 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:25:48.846093 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:48.851157 systemd-logind[1445]: New session 2 of user core. Jul 11 00:25:48.859773 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 00:25:48.981496 sshd[1556]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:48.991583 systemd[1]: sshd@1-10.0.0.159:22-10.0.0.1:53352.service: Deactivated successfully. Jul 11 00:25:48.993278 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:25:48.994842 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:25:48.996185 systemd[1]: Started sshd@2-10.0.0.159:22-10.0.0.1:53360.service - OpenSSH per-connection server daemon (10.0.0.1:53360). Jul 11 00:25:49.032685 systemd-logind[1445]: Removed session 2. Jul 11 00:25:49.062836 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 53360 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:25:49.064763 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:49.069413 systemd-logind[1445]: New session 3 of user core. Jul 11 00:25:49.077767 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:25:49.174675 sshd[1563]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:49.177694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:25:49.179821 systemd[1]: sshd@2-10.0.0.159:22-10.0.0.1:53360.service: Deactivated successfully. Jul 11 00:25:49.182282 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:25:49.182943 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:25:49.184329 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:25:49.184368 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 00:25:49.187206 systemd[1]: Startup finished in 1.235s (kernel) + 7.981s (initrd) + 6.504s (userspace) = 15.721s. Jul 11 00:25:49.188550 systemd-logind[1445]: Removed session 3. Jul 11 00:25:49.935918 kubelet[1572]: E0711 00:25:49.935818 1572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:25:49.942192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:25:49.942479 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:25:49.942931 systemd[1]: kubelet.service: Consumed 2.124s CPU time. 
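
The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on kubeadm-managed nodes that file is written during kubeadm init or join, and systemd keeps restarting the unit until it appears. A skeletal stand-in showing the expected format (a sketch of the upstream KubeletConfiguration schema, not this node's eventual config):

    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Matches the runc runtime registered with SystemdCgroup:true in
    # the containerd CRI config dumped earlier in this log.
    cgroupDriver: systemd
    EOF
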
Jul 11 00:25:59.337344 systemd[1]: Started sshd@3-10.0.0.159:22-10.0.0.1:32768.service - OpenSSH per-connection server daemon (10.0.0.1:32768). Jul 11 00:25:59.371010 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 32768 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:25:59.372685 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:59.377302 systemd-logind[1445]: New session 4 of user core. Jul 11 00:25:59.396920 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:25:59.453635 sshd[1587]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:59.466763 systemd[1]: sshd@3-10.0.0.159:22-10.0.0.1:32768.service: Deactivated successfully. Jul 11 00:25:59.468818 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:25:59.470660 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:25:59.482056 systemd[1]: Started sshd@4-10.0.0.159:22-10.0.0.1:59464.service - OpenSSH per-connection server daemon (10.0.0.1:59464). Jul 11 00:25:59.483229 systemd-logind[1445]: Removed session 4. Jul 11 00:25:59.510136 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 59464 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:25:59.511736 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:59.516413 systemd-logind[1445]: New session 5 of user core. Jul 11 00:25:59.525792 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:25:59.577045 sshd[1594]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:59.585915 systemd[1]: sshd@4-10.0.0.159:22-10.0.0.1:59464.service: Deactivated successfully. Jul 11 00:25:59.588098 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:25:59.589710 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:25:59.596874 systemd[1]: Started sshd@5-10.0.0.159:22-10.0.0.1:59466.service - OpenSSH per-connection server daemon (10.0.0.1:59466). Jul 11 00:25:59.597950 systemd-logind[1445]: Removed session 5. Jul 11 00:25:59.626213 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 59466 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:25:59.627906 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:59.632684 systemd-logind[1445]: New session 6 of user core. Jul 11 00:25:59.642801 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 00:25:59.699789 sshd[1601]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:59.710662 systemd[1]: sshd@5-10.0.0.159:22-10.0.0.1:59466.service: Deactivated successfully. Jul 11 00:25:59.712489 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:25:59.714444 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:25:59.734918 systemd[1]: Started sshd@6-10.0.0.159:22-10.0.0.1:59478.service - OpenSSH per-connection server daemon (10.0.0.1:59478). Jul 11 00:25:59.735901 systemd-logind[1445]: Removed session 6. Jul 11 00:25:59.764167 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 59478 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:25:59.766377 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:59.771663 systemd-logind[1445]: New session 7 of user core. Jul 11 00:25:59.782820 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 11 00:25:59.846525 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:25:59.846970 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:25:59.868145 sudo[1611]: pam_unix(sudo:session): session closed for user root Jul 11 00:25:59.870719 sshd[1608]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:59.885979 systemd[1]: sshd@6-10.0.0.159:22-10.0.0.1:59478.service: Deactivated successfully. Jul 11 00:25:59.888331 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:25:59.891021 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:25:59.892779 systemd[1]: Started sshd@7-10.0.0.159:22-10.0.0.1:59482.service - OpenSSH per-connection server daemon (10.0.0.1:59482). Jul 11 00:25:59.893794 systemd-logind[1445]: Removed session 7. Jul 11 00:25:59.930838 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 59482 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:25:59.932605 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:59.937192 systemd-logind[1445]: New session 8 of user core. Jul 11 00:25:59.946772 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 00:25:59.947512 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:25:59.949137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:26:00.004792 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:26:00.005177 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:26:00.011365 sudo[1623]: pam_unix(sudo:session): session closed for user root Jul 11 00:26:00.019839 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 11 00:26:00.020306 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:26:00.042095 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 11 00:26:00.045222 auditctl[1626]: No rules Jul 11 00:26:00.045756 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:26:00.046069 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 11 00:26:00.049546 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:26:00.091724 augenrules[1644]: No rules Jul 11 00:26:00.094062 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:26:00.095855 sudo[1622]: pam_unix(sudo:session): session closed for user root Jul 11 00:26:00.098038 sshd[1616]: pam_unix(sshd:session): session closed for user core Jul 11 00:26:00.111509 systemd[1]: sshd@7-10.0.0.159:22-10.0.0.1:59482.service: Deactivated successfully. Jul 11 00:26:00.113881 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:26:00.116013 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:26:00.124089 systemd[1]: Started sshd@8-10.0.0.159:22-10.0.0.1:59498.service - OpenSSH per-connection server daemon (10.0.0.1:59498). Jul 11 00:26:00.125604 systemd-logind[1445]: Removed session 8. 
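
The sudo session above removes two audit rule files and restarts audit-rules.service, after which both auditctl and augenrules report "No rules". The same sequence by hand (a sketch; paths exactly as logged):

    rm -rf /etc/audit/rules.d/80-selinux.rules \
           /etc/audit/rules.d/99-default.rules
    systemctl restart audit-rules
    # List the rules now loaded in the kernel; this should print
    # "No rules", matching the auditctl/augenrules output above.
    auditctl -l
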
Jul 11 00:26:00.152526 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 59498 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:26:00.155185 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:26:00.161708 systemd-logind[1445]: New session 9 of user core. Jul 11 00:26:00.175842 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:26:00.195869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:26:00.201977 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:26:00.234317 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:26:00.234709 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:26:00.263348 kubelet[1660]: E0711 00:26:00.263248 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:26:00.272189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:26:00.272478 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:26:00.760852 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 00:26:00.761580 (dockerd)[1687]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:26:01.290272 dockerd[1687]: time="2025-07-11T00:26:01.290174150Z" level=info msg="Starting up" Jul 11 00:26:01.760679 systemd[1]: var-lib-docker-metacopy\x2dcheck1457642339-merged.mount: Deactivated successfully. Jul 11 00:26:01.786343 dockerd[1687]: time="2025-07-11T00:26:01.786265812Z" level=info msg="Loading containers: start." Jul 11 00:26:02.274654 kernel: Initializing XFRM netlink socket Jul 11 00:26:02.366338 systemd-networkd[1391]: docker0: Link UP Jul 11 00:26:02.387624 dockerd[1687]: time="2025-07-11T00:26:02.387544733Z" level=info msg="Loading containers: done." Jul 11 00:26:02.410694 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1050805150-merged.mount: Deactivated successfully. Jul 11 00:26:02.413032 dockerd[1687]: time="2025-07-11T00:26:02.412984390Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:26:02.413167 dockerd[1687]: time="2025-07-11T00:26:02.413145615Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 11 00:26:02.413316 dockerd[1687]: time="2025-07-11T00:26:02.413299394Z" level=info msg="Daemon has completed initialization" Jul 11 00:26:02.459095 dockerd[1687]: time="2025-07-11T00:26:02.458949554Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:26:02.459287 systemd[1]: Started docker.service - Docker Application Container Engine. 
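
Once dockerd logs "API listen on /run/docker.sock" the daemon is usable over that socket; a quick sanity probe (sketch):

    # Confirm the engine answers and report the storage driver it
    # chose (overlay2, per the "Not using native diff" warning above).
    docker version --format '{{.Server.Version}}'
    docker info --format '{{.Driver}}'
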
Jul 11 00:26:03.202312 containerd[1465]: time="2025-07-11T00:26:03.202258986Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 11 00:26:04.121919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228782877.mount: Deactivated successfully. Jul 11 00:26:05.819207 containerd[1465]: time="2025-07-11T00:26:05.819015887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:05.824548 containerd[1465]: time="2025-07-11T00:26:05.824439185Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 11 00:26:05.840118 containerd[1465]: time="2025-07-11T00:26:05.840016004Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:05.845154 containerd[1465]: time="2025-07-11T00:26:05.845067687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:05.846791 containerd[1465]: time="2025-07-11T00:26:05.846733637Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.644420397s" Jul 11 00:26:05.846846 containerd[1465]: time="2025-07-11T00:26:05.846798343Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 11 00:26:05.847848 containerd[1465]: time="2025-07-11T00:26:05.847797846Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 11 00:26:07.070170 containerd[1465]: time="2025-07-11T00:26:07.070090882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:07.071157 containerd[1465]: time="2025-07-11T00:26:07.071102974Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 11 00:26:07.072570 containerd[1465]: time="2025-07-11T00:26:07.072473817Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:07.075525 containerd[1465]: time="2025-07-11T00:26:07.075458068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:07.076873 containerd[1465]: time="2025-07-11T00:26:07.076833139Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.22897574s" Jul 11 
00:26:07.076916 containerd[1465]: time="2025-07-11T00:26:07.076876088Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 11 00:26:07.077497 containerd[1465]: time="2025-07-11T00:26:07.077474430Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 11 00:26:09.105754 containerd[1465]: time="2025-07-11T00:26:09.105645768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:09.153004 containerd[1465]: time="2025-07-11T00:26:09.152860170Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 11 00:26:09.256970 containerd[1465]: time="2025-07-11T00:26:09.256907708Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:09.344140 containerd[1465]: time="2025-07-11T00:26:09.344039023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:09.345429 containerd[1465]: time="2025-07-11T00:26:09.345365484Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 2.267859258s" Jul 11 00:26:09.345429 containerd[1465]: time="2025-07-11T00:26:09.345417293Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 11 00:26:09.346112 containerd[1465]: time="2025-07-11T00:26:09.346057294Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 11 00:26:10.353164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 00:26:10.364090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:26:10.970171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:26:10.976135 (kubelet)[1908]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:26:11.014468 kubelet[1908]: E0711 00:26:11.014301 1908 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:26:11.018893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:26:11.019127 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:26:11.809809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029114603.mount: Deactivated successfully. 
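
The PullImage/Pulled lines are emitted by containerd's CRI service, the same interface the kubelet uses. Equivalent pulls can be driven by hand with crictl (a sketch, reusing the socket path from the config dump above):

    # Pull through the CRI endpoint, then list stored images with
    # their digests and sizes.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
           pull registry.k8s.io/kube-proxy:v1.33.2
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
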
Jul 11 00:26:12.574175 containerd[1465]: time="2025-07-11T00:26:12.574079676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:12.575035 containerd[1465]: time="2025-07-11T00:26:12.574971044Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 11 00:26:12.576476 containerd[1465]: time="2025-07-11T00:26:12.576428282Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:12.579441 containerd[1465]: time="2025-07-11T00:26:12.579395441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:12.580334 containerd[1465]: time="2025-07-11T00:26:12.580279256Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 3.234182284s" Jul 11 00:26:12.580334 containerd[1465]: time="2025-07-11T00:26:12.580323582Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 11 00:26:12.581283 containerd[1465]: time="2025-07-11T00:26:12.581242885Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 11 00:26:13.154972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3971260083.mount: Deactivated successfully. 
Jul 11 00:26:15.501746 containerd[1465]: time="2025-07-11T00:26:15.501633179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:15.527150 containerd[1465]: time="2025-07-11T00:26:15.527027229Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 11 00:26:15.545093 containerd[1465]: time="2025-07-11T00:26:15.545012946Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:15.578649 containerd[1465]: time="2025-07-11T00:26:15.578514778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:15.580090 containerd[1465]: time="2025-07-11T00:26:15.580026473Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.998731885s" Jul 11 00:26:15.580090 containerd[1465]: time="2025-07-11T00:26:15.580082292Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 11 00:26:15.580753 containerd[1465]: time="2025-07-11T00:26:15.580708924Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:26:17.384118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3611126910.mount: Deactivated successfully. 
Jul 11 00:26:17.441259 containerd[1465]: time="2025-07-11T00:26:17.441171962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:17.522330 containerd[1465]: time="2025-07-11T00:26:17.522213847Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 11 00:26:17.621269 containerd[1465]: time="2025-07-11T00:26:17.621174727Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:17.795663 containerd[1465]: time="2025-07-11T00:26:17.795423476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:17.796770 containerd[1465]: time="2025-07-11T00:26:17.796728868Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.21598376s" Jul 11 00:26:17.796838 containerd[1465]: time="2025-07-11T00:26:17.796772165Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 11 00:26:17.797339 containerd[1465]: time="2025-07-11T00:26:17.797298883Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 11 00:26:19.917864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1921004623.mount: Deactivated successfully. Jul 11 00:26:21.103238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 11 00:26:21.112918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:26:21.372009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:26:21.377786 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:26:21.502623 kubelet[2033]: E0711 00:26:21.502520 2033 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:26:21.507602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:26:21.507847 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 11 00:26:22.981341 containerd[1465]: time="2025-07-11T00:26:22.981224502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:22.982289 containerd[1465]: time="2025-07-11T00:26:22.982173821Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 11 00:26:22.983803 containerd[1465]: time="2025-07-11T00:26:22.983736815Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:22.987712 containerd[1465]: time="2025-07-11T00:26:22.987650725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:22.989450 containerd[1465]: time="2025-07-11T00:26:22.989388638Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.191866101s" Jul 11 00:26:22.989501 containerd[1465]: time="2025-07-11T00:26:22.989466315Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 11 00:26:25.749092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:26:25.760950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:26:25.788151 systemd[1]: Reloading requested from client PID 2084 ('systemctl') (unit session-9.scope)... Jul 11 00:26:25.788170 systemd[1]: Reloading... Jul 11 00:26:25.878725 zram_generator::config[2121]: No configuration found. Jul 11 00:26:26.498206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:26:26.580456 systemd[1]: Reloading finished in 791 ms. Jul 11 00:26:26.645225 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:26:26.649753 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:26:26.650088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:26:26.652229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:26:26.844702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:26:26.851585 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:26:26.897007 kubelet[2173]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:26:26.897007 kubelet[2173]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
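
The recurring "Referenced but unset environment variable" notices for KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS come from a kubeadm-style drop-in that expands those variables on the kubelet command line; the deprecated-flag warnings that follow are those variables taking effect once kubeadm writes them. A trimmed sketch of that drop-in shape (not this host's exact file):

    cat >/etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
    [Service]
    # "-" tolerates missing files; referencing the still-unset
    # variables in ExecStart is what produces the systemd notices.
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
    EOF
    systemctl daemon-reload
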
Jul 11 00:26:26.897007 kubelet[2173]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:26:26.897486 kubelet[2173]: I0711 00:26:26.897051 2173 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:26:29.178455 kubelet[2173]: I0711 00:26:29.178378 2173 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 11 00:26:29.178455 kubelet[2173]: I0711 00:26:29.178421 2173 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:26:29.179085 kubelet[2173]: I0711 00:26:29.178666 2173 server.go:956] "Client rotation is on, will bootstrap in background" Jul 11 00:26:29.203789 kubelet[2173]: E0711 00:26:29.203730 2173 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.159:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 11 00:26:29.205225 kubelet[2173]: I0711 00:26:29.205187 2173 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:26:29.216185 kubelet[2173]: E0711 00:26:29.216120 2173 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:26:29.216185 kubelet[2173]: I0711 00:26:29.216178 2173 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:26:29.223956 kubelet[2173]: I0711 00:26:29.223911 2173 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:26:29.224350 kubelet[2173]: I0711 00:26:29.224296 2173 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:26:29.224594 kubelet[2173]: I0711 00:26:29.224335 2173 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:26:29.224753 kubelet[2173]: I0711 00:26:29.224620 2173 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:26:29.224753 kubelet[2173]: I0711 00:26:29.224647 2173 container_manager_linux.go:303] "Creating device plugin manager" Jul 11 00:26:29.225709 kubelet[2173]: I0711 00:26:29.225673 2173 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:26:29.227750 kubelet[2173]: I0711 00:26:29.227711 2173 kubelet.go:480] "Attempting to sync node with API server" Jul 11 00:26:29.227750 kubelet[2173]: I0711 00:26:29.227742 2173 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:26:29.227841 kubelet[2173]: I0711 00:26:29.227796 2173 kubelet.go:386] "Adding apiserver pod source" Jul 11 00:26:29.229515 kubelet[2173]: I0711 00:26:29.229289 2173 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:26:29.236075 kubelet[2173]: I0711 00:26:29.236016 2173 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:26:29.236075 kubelet[2173]: E0711 00:26:29.236059 2173 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 11 00:26:29.236710 kubelet[2173]: I0711 00:26:29.236685 2173 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 
11 00:26:29.237338 kubelet[2173]: E0711 00:26:29.237280 2173 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 11 00:26:29.237531 kubelet[2173]: W0711 00:26:29.237504 2173 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 00:26:29.241084 kubelet[2173]: I0711 00:26:29.241057 2173 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:26:29.241138 kubelet[2173]: I0711 00:26:29.241131 2173 server.go:1289] "Started kubelet" Jul 11 00:26:29.241724 kubelet[2173]: I0711 00:26:29.241219 2173 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:26:29.242626 kubelet[2173]: I0711 00:26:29.242547 2173 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:26:29.242788 kubelet[2173]: I0711 00:26:29.242653 2173 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:26:29.242788 kubelet[2173]: I0711 00:26:29.242726 2173 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:26:29.243697 kubelet[2173]: I0711 00:26:29.243579 2173 server.go:317] "Adding debug handlers to kubelet server" Jul 11 00:26:29.245071 kubelet[2173]: I0711 00:26:29.244498 2173 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:26:29.246740 kubelet[2173]: E0711 00:26:29.245562 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:26:29.246740 kubelet[2173]: I0711 00:26:29.245655 2173 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:26:29.246740 kubelet[2173]: I0711 00:26:29.245913 2173 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:26:29.246740 kubelet[2173]: I0711 00:26:29.245984 2173 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:26:29.246740 kubelet[2173]: E0711 00:26:29.244863 2173 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.159:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.159:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510ac9306e8348 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:26:29.241086792 +0000 UTC m=+2.384055772,LastTimestamp:2025-07-11 00:26:29.241086792 +0000 UTC m=+2.384055772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:26:29.246740 kubelet[2173]: E0711 00:26:29.246378 2173 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Jul 11 00:26:29.246740 kubelet[2173]: E0711 00:26:29.246584 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.159:6443: connect: connection refused" interval="200ms" Jul 11 00:26:29.247709 kubelet[2173]: I0711 00:26:29.247487 2173 factory.go:223] Registration of the systemd container factory successfully Jul 11 00:26:29.247709 kubelet[2173]: I0711 00:26:29.247568 2173 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:26:29.247823 kubelet[2173]: E0711 00:26:29.247798 2173 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:26:29.249215 kubelet[2173]: I0711 00:26:29.249189 2173 factory.go:223] Registration of the containerd container factory successfully Jul 11 00:26:29.266216 kubelet[2173]: I0711 00:26:29.266171 2173 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:26:29.266216 kubelet[2173]: I0711 00:26:29.266196 2173 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:26:29.266216 kubelet[2173]: I0711 00:26:29.266221 2173 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:26:29.269368 kubelet[2173]: I0711 00:26:29.269304 2173 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 11 00:26:29.271002 kubelet[2173]: I0711 00:26:29.270963 2173 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 11 00:26:29.271063 kubelet[2173]: I0711 00:26:29.271011 2173 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 11 00:26:29.271063 kubelet[2173]: I0711 00:26:29.271040 2173 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 11 00:26:29.271063 kubelet[2173]: I0711 00:26:29.271053 2173 kubelet.go:2436] "Starting kubelet main sync loop" Jul 11 00:26:29.271136 kubelet[2173]: E0711 00:26:29.271106 2173 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:26:29.292517 kubelet[2173]: E0711 00:26:29.292444 2173 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 11 00:26:29.301515 kubelet[2173]: I0711 00:26:29.301481 2173 policy_none.go:49] "None policy: Start" Jul 11 00:26:29.301568 kubelet[2173]: I0711 00:26:29.301524 2173 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:26:29.301568 kubelet[2173]: I0711 00:26:29.301552 2173 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:26:29.337559 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 11 00:26:29.346548 kubelet[2173]: E0711 00:26:29.346492 2173 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:26:29.362834 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 00:26:29.366794 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 11 00:26:29.371704 kubelet[2173]: E0711 00:26:29.371648 2173 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:26:29.374882 kubelet[2173]: E0711 00:26:29.374832 2173 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 11 00:26:29.375196 kubelet[2173]: I0711 00:26:29.375176 2173 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:26:29.375254 kubelet[2173]: I0711 00:26:29.375195 2173 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:26:29.375835 kubelet[2173]: I0711 00:26:29.375563 2173 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:26:29.376671 kubelet[2173]: E0711 00:26:29.376640 2173 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 11 00:26:29.376869 kubelet[2173]: E0711 00:26:29.376830 2173 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:26:29.448491 kubelet[2173]: E0711 00:26:29.448307 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.159:6443: connect: connection refused" interval="400ms" Jul 11 00:26:29.477027 kubelet[2173]: I0711 00:26:29.476961 2173 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:26:29.477504 kubelet[2173]: E0711 00:26:29.477440 2173 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.159:6443/api/v1/nodes\": dial tcp 10.0.0.159:6443: connect: connection refused" node="localhost" Jul 11 00:26:29.647205 kubelet[2173]: I0711 00:26:29.647137 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:26:29.647205 kubelet[2173]: I0711 00:26:29.647185 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:26:29.647569 kubelet[2173]: I0711 00:26:29.647224 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 11 00:26:29.647569 kubelet[2173]: I0711 00:26:29.647251 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:26:29.647569 kubelet[2173]: I0711 00:26:29.647270 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:26:29.647569 kubelet[2173]: I0711 00:26:29.647302 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f20b2ddeefcc43f5ad4bc15ffba56a75-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f20b2ddeefcc43f5ad4bc15ffba56a75\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:26:29.647569 kubelet[2173]: I0711 00:26:29.647349 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f20b2ddeefcc43f5ad4bc15ffba56a75-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f20b2ddeefcc43f5ad4bc15ffba56a75\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:26:29.647415 systemd[1]: Created slice kubepods-burstable-podf20b2ddeefcc43f5ad4bc15ffba56a75.slice - libcontainer container kubepods-burstable-podf20b2ddeefcc43f5ad4bc15ffba56a75.slice. Jul 11 00:26:29.647933 kubelet[2173]: I0711 00:26:29.647368 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f20b2ddeefcc43f5ad4bc15ffba56a75-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f20b2ddeefcc43f5ad4bc15ffba56a75\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:26:29.666029 kubelet[2173]: E0711 00:26:29.665990 2173 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:26:29.672522 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 11 00:26:29.674420 kubelet[2173]: E0711 00:26:29.674387 2173 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:26:29.679765 kubelet[2173]: I0711 00:26:29.679729 2173 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:26:29.680186 kubelet[2173]: E0711 00:26:29.680148 2173 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.159:6443/api/v1/nodes\": dial tcp 10.0.0.159:6443: connect: connection refused" node="localhost" Jul 11 00:26:29.732565 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
Jul 11 00:26:29.735009 kubelet[2173]: E0711 00:26:29.734973 2173 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:26:29.748658 kubelet[2173]: I0711 00:26:29.748565 2173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:26:29.849569 kubelet[2173]: E0711 00:26:29.849504 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.159:6443: connect: connection refused" interval="800ms" Jul 11 00:26:29.967454 kubelet[2173]: E0711 00:26:29.967383 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:29.968468 containerd[1465]: time="2025-07-11T00:26:29.968401447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f20b2ddeefcc43f5ad4bc15ffba56a75,Namespace:kube-system,Attempt:0,}" Jul 11 00:26:29.975568 kubelet[2173]: E0711 00:26:29.975545 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:29.975997 containerd[1465]: time="2025-07-11T00:26:29.975958631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 11 00:26:30.036541 kubelet[2173]: E0711 00:26:30.036405 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:30.037074 containerd[1465]: time="2025-07-11T00:26:30.037015788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 11 00:26:30.081978 kubelet[2173]: I0711 00:26:30.081930 2173 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:26:30.082381 kubelet[2173]: E0711 00:26:30.082339 2173 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.159:6443/api/v1/nodes\": dial tcp 10.0.0.159:6443: connect: connection refused" node="localhost" Jul 11 00:26:30.227986 kubelet[2173]: E0711 00:26:30.227896 2173 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 11 00:26:30.273748 kubelet[2173]: E0711 00:26:30.273670 2173 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 11 00:26:30.333257 
kubelet[2173]: E0711 00:26:30.333103 2173 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 11 00:26:30.650636 kubelet[2173]: E0711 00:26:30.650566 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.159:6443: connect: connection refused" interval="1.6s" Jul 11 00:26:30.776781 kubelet[2173]: E0711 00:26:30.776709 2173 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 11 00:26:30.884214 kubelet[2173]: I0711 00:26:30.884110 2173 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:26:30.884806 kubelet[2173]: E0711 00:26:30.884716 2173 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.159:6443/api/v1/nodes\": dial tcp 10.0.0.159:6443: connect: connection refused" node="localhost" Jul 11 00:26:31.246531 kubelet[2173]: E0711 00:26:31.246374 2173 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.159:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.159:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510ac9306e8348 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:26:29.241086792 +0000 UTC m=+2.384055772,LastTimestamp:2025-07-11 00:26:29.241086792 +0000 UTC m=+2.384055772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:26:31.272986 kubelet[2173]: E0711 00:26:31.272912 2173 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.159:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.159:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 11 00:26:31.452439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1146552484.mount: Deactivated successfully. 
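
The "Failed to ensure lease exists, will retry" intervals above step 200ms -> 400ms -> 800ms -> 1.6s, consistent with a doubling backoff. A standalone sketch of that pattern; the 7s cap is an assumption about the kubelet's lease controller and is not observable in this excerpt.

    def lease_backoff(base_ms: int = 200, cap_ms: int = 7000):
        """Yield retry intervals that double on each failure, up to an assumed cap."""
        interval = base_ms
        while True:
            yield interval
            interval = min(interval * 2, cap_ms)

    gen = lease_backoff()
    print([next(gen) for _ in range(4)])   # [200, 400, 800, 1600] -- as logged
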
Jul 11 00:26:31.458958 containerd[1465]: time="2025-07-11T00:26:31.458907453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:26:31.460077 containerd[1465]: time="2025-07-11T00:26:31.460044515Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:26:31.460946 containerd[1465]: time="2025-07-11T00:26:31.460894195Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 11 00:26:31.461903 containerd[1465]: time="2025-07-11T00:26:31.461867847Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:26:31.463058 containerd[1465]: time="2025-07-11T00:26:31.463019220Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:26:31.464105 containerd[1465]: time="2025-07-11T00:26:31.464061253Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:26:31.464993 containerd[1465]: time="2025-07-11T00:26:31.464916525Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:26:31.468049 containerd[1465]: time="2025-07-11T00:26:31.468004059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:26:31.469972 containerd[1465]: time="2025-07-11T00:26:31.469917960Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.432812824s" Jul 11 00:26:31.470892 containerd[1465]: time="2025-07-11T00:26:31.470859232Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.494843203s" Jul 11 00:26:31.471837 containerd[1465]: time="2025-07-11T00:26:31.471747346Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.503250135s" Jul 11 00:26:31.620533 containerd[1465]: time="2025-07-11T00:26:31.620368077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:31.620533 containerd[1465]: time="2025-07-11T00:26:31.620528870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:31.620960 containerd[1465]: time="2025-07-11T00:26:31.620564449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:31.620960 containerd[1465]: time="2025-07-11T00:26:31.620764709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:31.622488 containerd[1465]: time="2025-07-11T00:26:31.622255289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:31.622488 containerd[1465]: time="2025-07-11T00:26:31.622318408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:31.622488 containerd[1465]: time="2025-07-11T00:26:31.622333091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:31.622488 containerd[1465]: time="2025-07-11T00:26:31.622419561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:31.623833 containerd[1465]: time="2025-07-11T00:26:31.623761113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:31.623993 containerd[1465]: time="2025-07-11T00:26:31.623938734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:31.624363 containerd[1465]: time="2025-07-11T00:26:31.624087400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:31.624674 containerd[1465]: time="2025-07-11T00:26:31.624493774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:31.647770 systemd[1]: Started cri-containerd-0795492e84d50f30538cb9dd02f809ac12da1dc7b94bc06ac8dc43f38a7de88e.scope - libcontainer container 0795492e84d50f30538cb9dd02f809ac12da1dc7b94bc06ac8dc43f38a7de88e. Jul 11 00:26:31.649441 systemd[1]: Started cri-containerd-66fe9fbdb036456930ef05258f25e5262abbab508c3dddd790d8f5ff69fae362.scope - libcontainer container 66fe9fbdb036456930ef05258f25e5262abbab508c3dddd790d8f5ff69fae362. Jul 11 00:26:31.651433 systemd[1]: Started cri-containerd-74474666f27aff2a2c3e5315652cf1e9bdf6acc7a76012a6d68ab28adce6e9f6.scope - libcontainer container 74474666f27aff2a2c3e5315652cf1e9bdf6acc7a76012a6d68ab28adce6e9f6. 
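
A standalone sketch (not from the log) for pulling the image name, byte size, and wall time out of the containerd "Pulled image ... in <duration>" lines above; the same pause:3.8 pull is reported three times here, once per sandbox, with slightly different durations.

    import re

    # One of the three "Pulled image" lines from the log, copied verbatim.
    line = ('Pulled image "registry.k8s.io/pause:3.8" with image id '
            '"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517", '
            'repo tag "registry.k8s.io/pause:3.8", repo digest '
            '"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d", '
            'size "311286" in 1.432812824s')

    m = re.search(r'Pulled image "([^"]+)".*size "(\d+)" in ([\d.]+)s', line)
    image, size, secs = m.group(1), int(m.group(2)), float(m.group(3))
    print(f"{image}: {size} bytes in {secs:.2f}s")   # registry.k8s.io/pause:3.8: 311286 bytes in 1.43s
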
Jul 11 00:26:31.698376 containerd[1465]: time="2025-07-11T00:26:31.698051198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"66fe9fbdb036456930ef05258f25e5262abbab508c3dddd790d8f5ff69fae362\"" Jul 11 00:26:31.700762 kubelet[2173]: E0711 00:26:31.700724 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:31.704180 containerd[1465]: time="2025-07-11T00:26:31.704130044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"74474666f27aff2a2c3e5315652cf1e9bdf6acc7a76012a6d68ab28adce6e9f6\"" Jul 11 00:26:31.705246 containerd[1465]: time="2025-07-11T00:26:31.705216024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f20b2ddeefcc43f5ad4bc15ffba56a75,Namespace:kube-system,Attempt:0,} returns sandbox id \"0795492e84d50f30538cb9dd02f809ac12da1dc7b94bc06ac8dc43f38a7de88e\"" Jul 11 00:26:31.707232 kubelet[2173]: E0711 00:26:31.707177 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:31.707762 kubelet[2173]: E0711 00:26:31.707727 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:31.710952 containerd[1465]: time="2025-07-11T00:26:31.710909361Z" level=info msg="CreateContainer within sandbox \"66fe9fbdb036456930ef05258f25e5262abbab508c3dddd790d8f5ff69fae362\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:26:31.712986 containerd[1465]: time="2025-07-11T00:26:31.712954911Z" level=info msg="CreateContainer within sandbox \"0795492e84d50f30538cb9dd02f809ac12da1dc7b94bc06ac8dc43f38a7de88e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:26:31.715190 containerd[1465]: time="2025-07-11T00:26:31.715087454Z" level=info msg="CreateContainer within sandbox \"74474666f27aff2a2c3e5315652cf1e9bdf6acc7a76012a6d68ab28adce6e9f6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:26:31.746949 containerd[1465]: time="2025-07-11T00:26:31.746874338Z" level=info msg="CreateContainer within sandbox \"66fe9fbdb036456930ef05258f25e5262abbab508c3dddd790d8f5ff69fae362\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3639ab96dcf0af11f8c8621c9af5fd6ec57d9ab00799d288e7fb709e34f7fa48\"" Jul 11 00:26:31.747698 containerd[1465]: time="2025-07-11T00:26:31.747655086Z" level=info msg="StartContainer for \"3639ab96dcf0af11f8c8621c9af5fd6ec57d9ab00799d288e7fb709e34f7fa48\"" Jul 11 00:26:31.748941 containerd[1465]: time="2025-07-11T00:26:31.748875480Z" level=info msg="CreateContainer within sandbox \"0795492e84d50f30538cb9dd02f809ac12da1dc7b94bc06ac8dc43f38a7de88e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9530ae664c34c8f96e564438c874ecb1cd2f88b93f389c2d160e4054b77b17ea\"" Jul 11 00:26:31.749501 containerd[1465]: time="2025-07-11T00:26:31.749372995Z" level=info msg="StartContainer for \"9530ae664c34c8f96e564438c874ecb1cd2f88b93f389c2d160e4054b77b17ea\"" Jul 11 00:26:31.750305 
containerd[1465]: time="2025-07-11T00:26:31.750219989Z" level=info msg="CreateContainer within sandbox \"74474666f27aff2a2c3e5315652cf1e9bdf6acc7a76012a6d68ab28adce6e9f6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a5c2376adf283196b62034336b81567663578d78592fcf48196d40efb8618270\"" Jul 11 00:26:31.750826 containerd[1465]: time="2025-07-11T00:26:31.750787738Z" level=info msg="StartContainer for \"a5c2376adf283196b62034336b81567663578d78592fcf48196d40efb8618270\"" Jul 11 00:26:31.783790 systemd[1]: Started cri-containerd-3639ab96dcf0af11f8c8621c9af5fd6ec57d9ab00799d288e7fb709e34f7fa48.scope - libcontainer container 3639ab96dcf0af11f8c8621c9af5fd6ec57d9ab00799d288e7fb709e34f7fa48. Jul 11 00:26:31.785365 systemd[1]: Started cri-containerd-9530ae664c34c8f96e564438c874ecb1cd2f88b93f389c2d160e4054b77b17ea.scope - libcontainer container 9530ae664c34c8f96e564438c874ecb1cd2f88b93f389c2d160e4054b77b17ea. Jul 11 00:26:31.787975 systemd[1]: Started cri-containerd-a5c2376adf283196b62034336b81567663578d78592fcf48196d40efb8618270.scope - libcontainer container a5c2376adf283196b62034336b81567663578d78592fcf48196d40efb8618270. Jul 11 00:26:31.838470 containerd[1465]: time="2025-07-11T00:26:31.837390988Z" level=info msg="StartContainer for \"3639ab96dcf0af11f8c8621c9af5fd6ec57d9ab00799d288e7fb709e34f7fa48\" returns successfully" Jul 11 00:26:31.843891 containerd[1465]: time="2025-07-11T00:26:31.842373320Z" level=info msg="StartContainer for \"9530ae664c34c8f96e564438c874ecb1cd2f88b93f389c2d160e4054b77b17ea\" returns successfully" Jul 11 00:26:31.843891 containerd[1465]: time="2025-07-11T00:26:31.842438895Z" level=info msg="StartContainer for \"a5c2376adf283196b62034336b81567663578d78592fcf48196d40efb8618270\" returns successfully" Jul 11 00:26:32.287799 kubelet[2173]: E0711 00:26:32.287747 2173 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:26:32.288289 kubelet[2173]: E0711 00:26:32.287895 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:32.288289 kubelet[2173]: E0711 00:26:32.288041 2173 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:26:32.288289 kubelet[2173]: E0711 00:26:32.288138 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:32.295102 kubelet[2173]: E0711 00:26:32.294880 2173 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:26:32.295102 kubelet[2173]: E0711 00:26:32.295037 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:32.468649 update_engine[1446]: I20250711 00:26:32.466454 1446 update_attempter.cc:509] Updating boot flags... 
Jul 11 00:26:32.486209 kubelet[2173]: I0711 00:26:32.486173 2173 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:26:32.575645 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2458) Jul 11 00:26:32.670663 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2456) Jul 11 00:26:32.753714 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2456) Jul 11 00:26:33.290825 kubelet[2173]: E0711 00:26:33.290634 2173 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:26:33.290825 kubelet[2173]: E0711 00:26:33.290758 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:33.292351 kubelet[2173]: E0711 00:26:33.292217 2173 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:26:33.292351 kubelet[2173]: E0711 00:26:33.292314 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:33.511868 kubelet[2173]: E0711 00:26:33.511825 2173 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:26:33.603817 kubelet[2173]: I0711 00:26:33.603768 2173 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:26:33.646842 kubelet[2173]: I0711 00:26:33.646775 2173 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:26:33.652210 kubelet[2173]: E0711 00:26:33.652161 2173 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 11 00:26:33.652210 kubelet[2173]: I0711 00:26:33.652193 2173 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:26:33.653562 kubelet[2173]: E0711 00:26:33.653530 2173 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:26:33.653562 kubelet[2173]: I0711 00:26:33.653564 2173 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:26:33.654731 kubelet[2173]: E0711 00:26:33.654706 2173 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 11 00:26:34.238363 kubelet[2173]: I0711 00:26:34.238313 2173 apiserver.go:52] "Watching apiserver" Jul 11 00:26:34.246691 kubelet[2173]: I0711 00:26:34.246659 2173 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:26:34.291419 kubelet[2173]: I0711 00:26:34.291388 2173 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:26:34.293134 kubelet[2173]: 
E0711 00:26:34.293111 2173 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 11 00:26:34.293275 kubelet[2173]: E0711 00:26:34.293261 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:35.811999 systemd[1]: Reloading requested from client PID 2476 ('systemctl') (unit session-9.scope)... Jul 11 00:26:35.812014 systemd[1]: Reloading... Jul 11 00:26:35.885776 zram_generator::config[2518]: No configuration found. Jul 11 00:26:36.007150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:26:36.110338 systemd[1]: Reloading finished in 297 ms. Jul 11 00:26:36.161383 kubelet[2173]: I0711 00:26:36.161203 2173 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:26:36.161291 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:26:36.185668 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:26:36.186028 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:26:36.186099 systemd[1]: kubelet.service: Consumed 1.944s CPU time, 135.3M memory peak, 0B memory swap peak. Jul 11 00:26:36.196924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:26:36.393759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:26:36.403042 (kubelet)[2560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:26:36.441623 kubelet[2560]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:26:36.441623 kubelet[2560]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:26:36.441623 kubelet[2560]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
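
The deprecation warnings above point at the kubelet config file rather than CLI flags. A minimal sketch of the mapping; the KubeletConfiguration field names below are assumptions about the upstream equivalents (verify against your kubelet version), and per the log --pod-infra-container-image has no config replacement at all.

    # Assumed flag -> KubeletConfiguration field equivalents (hypothetical mapping).
    FLAG_TO_CONFIG_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
        # --pod-infra-container-image: slated for removal in 1.35; the sandbox
        # image will come from the CRI instead (per the warning in the log).
    }

    for flag, field in FLAG_TO_CONFIG_FIELD.items():
        print(f"{flag:<34} -> KubeletConfiguration.{field}")
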
Jul 11 00:26:36.442065 kubelet[2560]: I0711 00:26:36.441692 2560 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:26:36.449523 kubelet[2560]: I0711 00:26:36.449477 2560 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 11 00:26:36.449523 kubelet[2560]: I0711 00:26:36.449507 2560 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:26:36.449762 kubelet[2560]: I0711 00:26:36.449745 2560 server.go:956] "Client rotation is on, will bootstrap in background" Jul 11 00:26:36.450831 kubelet[2560]: I0711 00:26:36.450806 2560 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 11 00:26:36.453153 kubelet[2560]: I0711 00:26:36.453115 2560 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:26:36.455924 kubelet[2560]: E0711 00:26:36.455881 2560 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:26:36.455924 kubelet[2560]: I0711 00:26:36.455908 2560 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:26:36.463009 kubelet[2560]: I0711 00:26:36.462950 2560 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 11 00:26:36.463209 kubelet[2560]: I0711 00:26:36.463181 2560 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:26:36.463365 kubelet[2560]: I0711 00:26:36.463206 2560 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:26:36.463449 kubelet[2560]: I0711 00:26:36.463370 2560 topology_manager.go:138] "Creating topology 
manager with none policy" Jul 11 00:26:36.463449 kubelet[2560]: I0711 00:26:36.463379 2560 container_manager_linux.go:303] "Creating device plugin manager" Jul 11 00:26:36.463449 kubelet[2560]: I0711 00:26:36.463421 2560 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:26:36.463590 kubelet[2560]: I0711 00:26:36.463575 2560 kubelet.go:480] "Attempting to sync node with API server" Jul 11 00:26:36.463590 kubelet[2560]: I0711 00:26:36.463590 2560 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:26:36.463708 kubelet[2560]: I0711 00:26:36.463647 2560 kubelet.go:386] "Adding apiserver pod source" Jul 11 00:26:36.463708 kubelet[2560]: I0711 00:26:36.463680 2560 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:26:36.465915 kubelet[2560]: I0711 00:26:36.465737 2560 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:26:36.466650 kubelet[2560]: I0711 00:26:36.466625 2560 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 11 00:26:36.470287 kubelet[2560]: I0711 00:26:36.470270 2560 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:26:36.470401 kubelet[2560]: I0711 00:26:36.470386 2560 server.go:1289] "Started kubelet" Jul 11 00:26:36.470727 kubelet[2560]: I0711 00:26:36.470643 2560 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:26:36.470727 kubelet[2560]: I0711 00:26:36.470708 2560 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:26:36.471016 kubelet[2560]: I0711 00:26:36.470991 2560 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:26:36.473516 kubelet[2560]: I0711 00:26:36.473484 2560 server.go:317] "Adding debug handlers to kubelet server" Jul 11 00:26:36.475346 kubelet[2560]: I0711 00:26:36.475244 2560 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:26:36.475565 kubelet[2560]: I0711 00:26:36.475480 2560 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:26:36.476496 kubelet[2560]: I0711 00:26:36.476477 2560 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:26:36.476587 kubelet[2560]: I0711 00:26:36.475476 2560 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:26:36.477364 kubelet[2560]: I0711 00:26:36.477101 2560 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:26:36.479276 kubelet[2560]: E0711 00:26:36.478935 2560 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:26:36.481212 kubelet[2560]: I0711 00:26:36.481168 2560 factory.go:223] Registration of the containerd container factory successfully Jul 11 00:26:36.481212 kubelet[2560]: I0711 00:26:36.481192 2560 factory.go:223] Registration of the systemd container factory successfully Jul 11 00:26:36.481395 kubelet[2560]: I0711 00:26:36.481365 2560 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:26:36.495339 kubelet[2560]: I0711 00:26:36.495269 2560 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 11 00:26:36.497554 kubelet[2560]: I0711 00:26:36.497436 2560 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 11 00:26:36.497554 kubelet[2560]: I0711 00:26:36.497462 2560 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 11 00:26:36.497554 kubelet[2560]: I0711 00:26:36.497509 2560 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 11 00:26:36.497554 kubelet[2560]: I0711 00:26:36.497521 2560 kubelet.go:2436] "Starting kubelet main sync loop" Jul 11 00:26:36.497780 kubelet[2560]: E0711 00:26:36.497753 2560 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:26:36.517488 kubelet[2560]: I0711 00:26:36.517454 2560 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:26:36.517488 kubelet[2560]: I0711 00:26:36.517477 2560 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:26:36.517488 kubelet[2560]: I0711 00:26:36.517501 2560 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:26:36.517746 kubelet[2560]: I0711 00:26:36.517692 2560 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:26:36.517746 kubelet[2560]: I0711 00:26:36.517706 2560 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:26:36.517746 kubelet[2560]: I0711 00:26:36.517725 2560 policy_none.go:49] "None policy: Start" Jul 11 00:26:36.517746 kubelet[2560]: I0711 00:26:36.517736 2560 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:26:36.517746 kubelet[2560]: I0711 00:26:36.517748 2560 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:26:36.517854 kubelet[2560]: I0711 00:26:36.517832 2560 state_mem.go:75] "Updated machine memory state" Jul 11 00:26:36.522252 kubelet[2560]: E0711 00:26:36.522143 2560 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 11 00:26:36.522470 kubelet[2560]: I0711 00:26:36.522418 2560 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:26:36.522504 kubelet[2560]: I0711 00:26:36.522459 2560 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:26:36.522866 kubelet[2560]: I0711 00:26:36.522825 2560 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:26:36.524774 kubelet[2560]: E0711 00:26:36.523926 2560 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 11 00:26:36.599450 kubelet[2560]: I0711 00:26:36.599385 2560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:26:36.599450 kubelet[2560]: I0711 00:26:36.599427 2560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:26:36.599727 kubelet[2560]: I0711 00:26:36.599400 2560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:26:36.629626 kubelet[2560]: I0711 00:26:36.629575 2560 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:26:36.635173 kubelet[2560]: I0711 00:26:36.635147 2560 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 11 00:26:36.635265 kubelet[2560]: I0711 00:26:36.635231 2560 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:26:36.678380 kubelet[2560]: I0711 00:26:36.678229 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:26:36.678380 kubelet[2560]: I0711 00:26:36.678269 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f20b2ddeefcc43f5ad4bc15ffba56a75-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f20b2ddeefcc43f5ad4bc15ffba56a75\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:26:36.678380 kubelet[2560]: I0711 00:26:36.678295 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:26:36.678380 kubelet[2560]: I0711 00:26:36.678316 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:26:36.678380 kubelet[2560]: I0711 00:26:36.678336 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:26:36.678657 kubelet[2560]: I0711 00:26:36.678353 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f20b2ddeefcc43f5ad4bc15ffba56a75-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f20b2ddeefcc43f5ad4bc15ffba56a75\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:26:36.678657 kubelet[2560]: I0711 00:26:36.678370 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/f20b2ddeefcc43f5ad4bc15ffba56a75-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f20b2ddeefcc43f5ad4bc15ffba56a75\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:26:36.678657 kubelet[2560]: I0711 00:26:36.678391 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:26:36.678657 kubelet[2560]: I0711 00:26:36.678410 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:26:36.903938 kubelet[2560]: E0711 00:26:36.903893 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:36.905100 kubelet[2560]: E0711 00:26:36.904886 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:36.905100 kubelet[2560]: E0711 00:26:36.904898 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:37.464259 kubelet[2560]: I0711 00:26:37.464226 2560 apiserver.go:52] "Watching apiserver" Jul 11 00:26:37.477199 kubelet[2560]: I0711 00:26:37.477151 2560 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:26:37.510093 kubelet[2560]: I0711 00:26:37.509855 2560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:26:37.510093 kubelet[2560]: I0711 00:26:37.509998 2560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:26:37.510965 kubelet[2560]: E0711 00:26:37.510840 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:37.534308 kubelet[2560]: E0711 00:26:37.534181 2560 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:26:37.535136 kubelet[2560]: E0711 00:26:37.534656 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:37.535136 kubelet[2560]: E0711 00:26:37.534970 2560 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:26:37.535136 kubelet[2560]: E0711 00:26:37.535082 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:37.542399 kubelet[2560]: I0711 00:26:37.542300 2560 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.542244497 podStartE2EDuration="1.542244497s" podCreationTimestamp="2025-07-11 00:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:26:37.541979425 +0000 UTC m=+1.134472944" watchObservedRunningTime="2025-07-11 00:26:37.542244497 +0000 UTC m=+1.134738006" Jul 11 00:26:37.542633 kubelet[2560]: I0711 00:26:37.542499 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.542490889 podStartE2EDuration="1.542490889s" podCreationTimestamp="2025-07-11 00:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:26:37.534837817 +0000 UTC m=+1.127331336" watchObservedRunningTime="2025-07-11 00:26:37.542490889 +0000 UTC m=+1.134984398" Jul 11 00:26:37.558000 kubelet[2560]: I0711 00:26:37.557902 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.557882984 podStartE2EDuration="1.557882984s" podCreationTimestamp="2025-07-11 00:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:26:37.54952998 +0000 UTC m=+1.142023489" watchObservedRunningTime="2025-07-11 00:26:37.557882984 +0000 UTC m=+1.150376493" Jul 11 00:26:38.512751 kubelet[2560]: E0711 00:26:38.512286 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:38.512751 kubelet[2560]: E0711 00:26:38.512654 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:39.513673 kubelet[2560]: E0711 00:26:39.513585 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:40.564331 kubelet[2560]: E0711 00:26:40.564282 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:41.875278 kubelet[2560]: I0711 00:26:41.875225 2560 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:26:41.875908 containerd[1465]: time="2025-07-11T00:26:41.875759849Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:26:41.876178 kubelet[2560]: I0711 00:26:41.876066 2560 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:26:42.960184 systemd[1]: Created slice kubepods-besteffort-podc58dd439_74d3_4d7b_81e0_71fbb7bb99bb.slice - libcontainer container kubepods-besteffort-podc58dd439_74d3_4d7b_81e0_71fbb7bb99bb.slice. 
Jul 11 00:26:43.017224 kubelet[2560]: I0711 00:26:43.017187 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c58dd439-74d3-4d7b-81e0-71fbb7bb99bb-kube-proxy\") pod \"kube-proxy-vtz9b\" (UID: \"c58dd439-74d3-4d7b-81e0-71fbb7bb99bb\") " pod="kube-system/kube-proxy-vtz9b" Jul 11 00:26:43.017224 kubelet[2560]: I0711 00:26:43.017220 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqwnv\" (UniqueName: \"kubernetes.io/projected/c58dd439-74d3-4d7b-81e0-71fbb7bb99bb-kube-api-access-pqwnv\") pod \"kube-proxy-vtz9b\" (UID: \"c58dd439-74d3-4d7b-81e0-71fbb7bb99bb\") " pod="kube-system/kube-proxy-vtz9b" Jul 11 00:26:43.017700 kubelet[2560]: I0711 00:26:43.017241 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c58dd439-74d3-4d7b-81e0-71fbb7bb99bb-xtables-lock\") pod \"kube-proxy-vtz9b\" (UID: \"c58dd439-74d3-4d7b-81e0-71fbb7bb99bb\") " pod="kube-system/kube-proxy-vtz9b" Jul 11 00:26:43.017700 kubelet[2560]: I0711 00:26:43.017255 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c58dd439-74d3-4d7b-81e0-71fbb7bb99bb-lib-modules\") pod \"kube-proxy-vtz9b\" (UID: \"c58dd439-74d3-4d7b-81e0-71fbb7bb99bb\") " pod="kube-system/kube-proxy-vtz9b" Jul 11 00:26:43.117763 kubelet[2560]: I0711 00:26:43.117709 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7nnn\" (UniqueName: \"kubernetes.io/projected/e42a05fa-f1ad-4147-92e4-43dc26cff7d7-kube-api-access-z7nnn\") pod \"tigera-operator-747864d56d-gm6mn\" (UID: \"e42a05fa-f1ad-4147-92e4-43dc26cff7d7\") " pod="tigera-operator/tigera-operator-747864d56d-gm6mn" Jul 11 00:26:43.117763 kubelet[2560]: I0711 00:26:43.117745 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e42a05fa-f1ad-4147-92e4-43dc26cff7d7-var-lib-calico\") pod \"tigera-operator-747864d56d-gm6mn\" (UID: \"e42a05fa-f1ad-4147-92e4-43dc26cff7d7\") " pod="tigera-operator/tigera-operator-747864d56d-gm6mn" Jul 11 00:26:43.118503 systemd[1]: Created slice kubepods-besteffort-pode42a05fa_f1ad_4147_92e4_43dc26cff7d7.slice - libcontainer container kubepods-besteffort-pode42a05fa_f1ad_4147_92e4_43dc26cff7d7.slice. Jul 11 00:26:43.271297 kubelet[2560]: E0711 00:26:43.271157 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:43.272156 containerd[1465]: time="2025-07-11T00:26:43.272105652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vtz9b,Uid:c58dd439-74d3-4d7b-81e0-71fbb7bb99bb,Namespace:kube-system,Attempt:0,}" Jul 11 00:26:43.297912 containerd[1465]: time="2025-07-11T00:26:43.297793065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:43.297912 containerd[1465]: time="2025-07-11T00:26:43.297870705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:43.297912 containerd[1465]: time="2025-07-11T00:26:43.297887800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:43.298900 containerd[1465]: time="2025-07-11T00:26:43.298023791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:43.329781 systemd[1]: Started cri-containerd-1d0977f0f096555da181f47792a0af934c77cf40c78b794e1bc349a2e3a665ba.scope - libcontainer container 1d0977f0f096555da181f47792a0af934c77cf40c78b794e1bc349a2e3a665ba. Jul 11 00:26:43.354312 containerd[1465]: time="2025-07-11T00:26:43.354249440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vtz9b,Uid:c58dd439-74d3-4d7b-81e0-71fbb7bb99bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d0977f0f096555da181f47792a0af934c77cf40c78b794e1bc349a2e3a665ba\"" Jul 11 00:26:43.355333 kubelet[2560]: E0711 00:26:43.355290 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:43.362096 containerd[1465]: time="2025-07-11T00:26:43.361412465Z" level=info msg="CreateContainer within sandbox \"1d0977f0f096555da181f47792a0af934c77cf40c78b794e1bc349a2e3a665ba\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:26:43.379278 containerd[1465]: time="2025-07-11T00:26:43.379198109Z" level=info msg="CreateContainer within sandbox \"1d0977f0f096555da181f47792a0af934c77cf40c78b794e1bc349a2e3a665ba\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0102809fbfebbcbdbdd3c654eeec3e333b68c21ddea9dd7febcb757a90c57a37\"" Jul 11 00:26:43.380198 containerd[1465]: time="2025-07-11T00:26:43.380159482Z" level=info msg="StartContainer for \"0102809fbfebbcbdbdd3c654eeec3e333b68c21ddea9dd7febcb757a90c57a37\"" Jul 11 00:26:43.408754 systemd[1]: Started cri-containerd-0102809fbfebbcbdbdd3c654eeec3e333b68c21ddea9dd7febcb757a90c57a37.scope - libcontainer container 0102809fbfebbcbdbdd3c654eeec3e333b68c21ddea9dd7febcb757a90c57a37. Jul 11 00:26:43.423297 containerd[1465]: time="2025-07-11T00:26:43.423091703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-gm6mn,Uid:e42a05fa-f1ad-4147-92e4-43dc26cff7d7,Namespace:tigera-operator,Attempt:0,}" Jul 11 00:26:43.482760 containerd[1465]: time="2025-07-11T00:26:43.482692945Z" level=info msg="StartContainer for \"0102809fbfebbcbdbdd3c654eeec3e333b68c21ddea9dd7febcb757a90c57a37\" returns successfully" Jul 11 00:26:43.510419 containerd[1465]: time="2025-07-11T00:26:43.509504766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:43.510419 containerd[1465]: time="2025-07-11T00:26:43.510369099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:43.510419 containerd[1465]: time="2025-07-11T00:26:43.510383509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:43.510784 containerd[1465]: time="2025-07-11T00:26:43.510479757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:43.523814 kubelet[2560]: E0711 00:26:43.522906 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:43.532212 kubelet[2560]: I0711 00:26:43.532074 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vtz9b" podStartSLOduration=1.532051991 podStartE2EDuration="1.532051991s" podCreationTimestamp="2025-07-11 00:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:26:43.531250939 +0000 UTC m=+7.123744468" watchObservedRunningTime="2025-07-11 00:26:43.532051991 +0000 UTC m=+7.124545500" Jul 11 00:26:43.532952 systemd[1]: Started cri-containerd-a38fae9df91b61b6f988ee05ccc3e32f52459bd0f79640206fc83eff5c02eee2.scope - libcontainer container a38fae9df91b61b6f988ee05ccc3e32f52459bd0f79640206fc83eff5c02eee2. Jul 11 00:26:43.574599 containerd[1465]: time="2025-07-11T00:26:43.574539545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-gm6mn,Uid:e42a05fa-f1ad-4147-92e4-43dc26cff7d7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a38fae9df91b61b6f988ee05ccc3e32f52459bd0f79640206fc83eff5c02eee2\"" Jul 11 00:26:43.576719 containerd[1465]: time="2025-07-11T00:26:43.576064972Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 11 00:26:44.085299 kubelet[2560]: E0711 00:26:44.085254 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:44.541960 kubelet[2560]: E0711 00:26:44.541910 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:44.916409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036857107.mount: Deactivated successfully. 
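[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" error is kubelet enforcing the classic resolver cap of three nameserver entries (glibc's MAXNS): the node's resolv.conf lists more than three servers, so kubelet applies the first three (1.1.1.1 1.0.0.1 8.8.8.8) and logs that the rest were omitted. A minimal sketch of that truncation; the fourth server is hypothetical, since the log never shows which entry was dropped:

    # hypothetical nameserver list parsed from the node's /etc/resolv.conf;
    # the log only shows the three entries that survived truncation
    servers = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"]

    MAXNS = 3  # classic glibc resolver limit that kubelet mirrors here
    if len(servers) > MAXNS:
        print("Nameserver limits were exceeded, some nameservers have been "
              "omitted, the applied nameserver line is: "
              + " ".join(servers[:MAXNS]))

The message repeats on every pod sync because the condition is re-evaluated each time kubelet builds a pod's resolv.conf; it is a warning about degraded DNS redundancy, not a failure.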
Jul 11 00:26:45.279228 containerd[1465]: time="2025-07-11T00:26:45.279094012Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:45.280109 containerd[1465]: time="2025-07-11T00:26:45.280053638Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 11 00:26:45.281451 containerd[1465]: time="2025-07-11T00:26:45.281425889Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:45.283807 containerd[1465]: time="2025-07-11T00:26:45.283763467Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:45.284391 containerd[1465]: time="2025-07-11T00:26:45.284346962Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.70825242s" Jul 11 00:26:45.284391 containerd[1465]: time="2025-07-11T00:26:45.284377624Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 11 00:26:45.289261 containerd[1465]: time="2025-07-11T00:26:45.289223100Z" level=info msg="CreateContainer within sandbox \"a38fae9df91b61b6f988ee05ccc3e32f52459bd0f79640206fc83eff5c02eee2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 11 00:26:45.302764 containerd[1465]: time="2025-07-11T00:26:45.302708602Z" level=info msg="CreateContainer within sandbox \"a38fae9df91b61b6f988ee05ccc3e32f52459bd0f79640206fc83eff5c02eee2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8b3b69d0ec88eedd480465d3bb3d807a2a8a3dc6656e6a24da9d73cff997a116\"" Jul 11 00:26:45.303371 containerd[1465]: time="2025-07-11T00:26:45.303309483Z" level=info msg="StartContainer for \"8b3b69d0ec88eedd480465d3bb3d807a2a8a3dc6656e6a24da9d73cff997a116\"" Jul 11 00:26:45.334757 systemd[1]: Started cri-containerd-8b3b69d0ec88eedd480465d3bb3d807a2a8a3dc6656e6a24da9d73cff997a116.scope - libcontainer container 8b3b69d0ec88eedd480465d3bb3d807a2a8a3dc6656e6a24da9d73cff997a116. 
Jul 11 00:26:45.459578 containerd[1465]: time="2025-07-11T00:26:45.459524358Z" level=info msg="StartContainer for \"8b3b69d0ec88eedd480465d3bb3d807a2a8a3dc6656e6a24da9d73cff997a116\" returns successfully" Jul 11 00:26:45.554219 kubelet[2560]: I0711 00:26:45.554001 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-gm6mn" podStartSLOduration=0.844466494 podStartE2EDuration="2.553984101s" podCreationTimestamp="2025-07-11 00:26:43 +0000 UTC" firstStartedPulling="2025-07-11 00:26:43.575695218 +0000 UTC m=+7.168188727" lastFinishedPulling="2025-07-11 00:26:45.285212825 +0000 UTC m=+8.877706334" observedRunningTime="2025-07-11 00:26:45.553807819 +0000 UTC m=+9.146301328" watchObservedRunningTime="2025-07-11 00:26:45.553984101 +0000 UTC m=+9.146477610" Jul 11 00:26:46.248144 kubelet[2560]: E0711 00:26:46.248107 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:46.546585 kubelet[2560]: E0711 00:26:46.546209 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:47.548661 kubelet[2560]: E0711 00:26:47.548562 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:50.568598 kubelet[2560]: E0711 00:26:50.568546 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:50.602962 sudo[1667]: pam_unix(sudo:session): session closed for user root Jul 11 00:26:50.608010 sshd[1652]: pam_unix(sshd:session): session closed for user core Jul 11 00:26:50.613356 systemd[1]: sshd@8-10.0.0.159:22-10.0.0.1:59498.service: Deactivated successfully. Jul 11 00:26:50.616574 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:26:50.617144 systemd[1]: session-9.scope: Consumed 5.504s CPU time, 164.0M memory peak, 0B memory swap peak. Jul 11 00:26:50.617717 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:26:50.618777 systemd-logind[1445]: Removed session 9. Jul 11 00:26:51.561245 kubelet[2560]: E0711 00:26:51.561191 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:53.832598 systemd[1]: Created slice kubepods-besteffort-pod7f7decd9_d7f1_46eb_96ee_dca225076dd5.slice - libcontainer container kubepods-besteffort-pod7f7decd9_d7f1_46eb_96ee_dca225076dd5.slice. 
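[Editor's note] The tigera-operator latency entry above shows the two durations diverging: podStartE2EDuration runs from creation to observed-running, while podStartSLOduration subtracts the image pull bracketed by firstStartedPulling and lastFinishedPulling. Replaying the arithmetic with the entry's timestamps (truncated to microseconds):

    from datetime import datetime, timezone

    created       = datetime(2025, 7, 11, 0, 26, 43, 0,      tzinfo=timezone.utc)
    pull_started  = datetime(2025, 7, 11, 0, 26, 43, 575695, tzinfo=timezone.utc)
    pull_finished = datetime(2025, 7, 11, 0, 26, 45, 285212, tzinfo=timezone.utc)
    observed      = datetime(2025, 7, 11, 0, 26, 45, 553984, tzinfo=timezone.utc)

    e2e = (observed - created).total_seconds()
    slo = e2e - (pull_finished - pull_started).total_seconds()
    print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")
    # -> E2E=2.553984s SLO=0.844467s, matching the logged
    #    podStartE2EDuration=2.553984101s and podStartSLOduration=0.844466494
    #    (last digits differ only because of the microsecond truncation)

At nanosecond precision the identity is exact: 2.553984101 - 1.709517607 = 0.844466494, confirming that the SLO figure is the E2E figure minus pull time.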
Jul 11 00:26:53.983331 kubelet[2560]: I0711 00:26:53.983262 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f7decd9-d7f1-46eb-96ee-dca225076dd5-tigera-ca-bundle\") pod \"calico-typha-6d8b659b74-jgrz6\" (UID: \"7f7decd9-d7f1-46eb-96ee-dca225076dd5\") " pod="calico-system/calico-typha-6d8b659b74-jgrz6" Jul 11 00:26:53.983331 kubelet[2560]: I0711 00:26:53.983340 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7f7decd9-d7f1-46eb-96ee-dca225076dd5-typha-certs\") pod \"calico-typha-6d8b659b74-jgrz6\" (UID: \"7f7decd9-d7f1-46eb-96ee-dca225076dd5\") " pod="calico-system/calico-typha-6d8b659b74-jgrz6" Jul 11 00:26:53.983933 kubelet[2560]: I0711 00:26:53.983370 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4jsd\" (UniqueName: \"kubernetes.io/projected/7f7decd9-d7f1-46eb-96ee-dca225076dd5-kube-api-access-s4jsd\") pod \"calico-typha-6d8b659b74-jgrz6\" (UID: \"7f7decd9-d7f1-46eb-96ee-dca225076dd5\") " pod="calico-system/calico-typha-6d8b659b74-jgrz6" Jul 11 00:26:54.136639 kubelet[2560]: E0711 00:26:54.136553 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:54.138120 containerd[1465]: time="2025-07-11T00:26:54.138067176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d8b659b74-jgrz6,Uid:7f7decd9-d7f1-46eb-96ee-dca225076dd5,Namespace:calico-system,Attempt:0,}" Jul 11 00:26:54.166086 containerd[1465]: time="2025-07-11T00:26:54.165830937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:54.166086 containerd[1465]: time="2025-07-11T00:26:54.165915837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:54.166086 containerd[1465]: time="2025-07-11T00:26:54.165929154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:54.166086 containerd[1465]: time="2025-07-11T00:26:54.166025597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:54.197807 systemd[1]: Started cri-containerd-eda559403209e459d9d7516965854b72b4cbc948100a4a7fa58def0c0e0a98f3.scope - libcontainer container eda559403209e459d9d7516965854b72b4cbc948100a4a7fa58def0c0e0a98f3. Jul 11 00:26:54.217545 systemd[1]: Created slice kubepods-besteffort-podb634d58b_dfcc_460f_9791_16f2de153c56.slice - libcontainer container kubepods-besteffort-podb634d58b_dfcc_460f_9791_16f2de153c56.slice. 
Jul 11 00:26:54.252694 containerd[1465]: time="2025-07-11T00:26:54.252572251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d8b659b74-jgrz6,Uid:7f7decd9-d7f1-46eb-96ee-dca225076dd5,Namespace:calico-system,Attempt:0,} returns sandbox id \"eda559403209e459d9d7516965854b72b4cbc948100a4a7fa58def0c0e0a98f3\"" Jul 11 00:26:54.256747 kubelet[2560]: E0711 00:26:54.256700 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:54.262504 containerd[1465]: time="2025-07-11T00:26:54.262468971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 00:26:54.386128 kubelet[2560]: I0711 00:26:54.386063 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b634d58b-dfcc-460f-9791-16f2de153c56-tigera-ca-bundle\") pod \"calico-node-v2cmq\" (UID: \"b634d58b-dfcc-460f-9791-16f2de153c56\") " pod="calico-system/calico-node-v2cmq" Jul 11 00:26:54.386128 kubelet[2560]: I0711 00:26:54.386122 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b634d58b-dfcc-460f-9791-16f2de153c56-lib-modules\") pod \"calico-node-v2cmq\" (UID: \"b634d58b-dfcc-460f-9791-16f2de153c56\") " pod="calico-system/calico-node-v2cmq" Jul 11 00:26:54.386128 kubelet[2560]: I0711 00:26:54.386142 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b634d58b-dfcc-460f-9791-16f2de153c56-var-lib-calico\") pod \"calico-node-v2cmq\" (UID: \"b634d58b-dfcc-460f-9791-16f2de153c56\") " pod="calico-system/calico-node-v2cmq" Jul 11 00:26:54.386441 kubelet[2560]: I0711 00:26:54.386159 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b634d58b-dfcc-460f-9791-16f2de153c56-cni-net-dir\") pod \"calico-node-v2cmq\" (UID: \"b634d58b-dfcc-460f-9791-16f2de153c56\") " pod="calico-system/calico-node-v2cmq" Jul 11 00:26:54.386441 kubelet[2560]: I0711 00:26:54.386177 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b634d58b-dfcc-460f-9791-16f2de153c56-node-certs\") pod \"calico-node-v2cmq\" (UID: \"b634d58b-dfcc-460f-9791-16f2de153c56\") " pod="calico-system/calico-node-v2cmq" Jul 11 00:26:54.386441 kubelet[2560]: I0711 00:26:54.386192 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b634d58b-dfcc-460f-9791-16f2de153c56-policysync\") pod \"calico-node-v2cmq\" (UID: \"b634d58b-dfcc-460f-9791-16f2de153c56\") " pod="calico-system/calico-node-v2cmq" Jul 11 00:26:54.386441 kubelet[2560]: I0711 00:26:54.386209 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b634d58b-dfcc-460f-9791-16f2de153c56-flexvol-driver-host\") pod \"calico-node-v2cmq\" (UID: \"b634d58b-dfcc-460f-9791-16f2de153c56\") " pod="calico-system/calico-node-v2cmq" Jul 11 00:26:54.386441 kubelet[2560]: I0711 00:26:54.386231 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-24v8j\" (UniqueName: \"kubernetes.io/projected/b634d58b-dfcc-460f-9791-16f2de153c56-kube-api-access-24v8j\") pod \"calico-node-v2cmq\" (UID: \"b634d58b-dfcc-460f-9791-16f2de153c56\") " pod="calico-system/calico-node-v2cmq" Jul 11 00:26:54.386566 kubelet[2560]: I0711 00:26:54.386305 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b634d58b-dfcc-460f-9791-16f2de153c56-cni-bin-dir\") pod \"calico-node-v2cmq\" (UID: \"b634d58b-dfcc-460f-9791-16f2de153c56\") " pod="calico-system/calico-node-v2cmq" Jul 11 00:26:54.386566 kubelet[2560]: I0711 00:26:54.386364 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b634d58b-dfcc-460f-9791-16f2de153c56-xtables-lock\") pod \"calico-node-v2cmq\" (UID: \"b634d58b-dfcc-460f-9791-16f2de153c56\") " pod="calico-system/calico-node-v2cmq" Jul 11 00:26:54.386566 kubelet[2560]: I0711 00:26:54.386379 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b634d58b-dfcc-460f-9791-16f2de153c56-cni-log-dir\") pod \"calico-node-v2cmq\" (UID: \"b634d58b-dfcc-460f-9791-16f2de153c56\") " pod="calico-system/calico-node-v2cmq" Jul 11 00:26:54.386566 kubelet[2560]: I0711 00:26:54.386396 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b634d58b-dfcc-460f-9791-16f2de153c56-var-run-calico\") pod \"calico-node-v2cmq\" (UID: \"b634d58b-dfcc-460f-9791-16f2de153c56\") " pod="calico-system/calico-node-v2cmq" Jul 11 00:26:54.497917 kubelet[2560]: E0711 00:26:54.495100 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.497917 kubelet[2560]: W0711 00:26:54.495132 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.497917 kubelet[2560]: E0711 00:26:54.495167 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.497917 kubelet[2560]: E0711 00:26:54.495785 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.497917 kubelet[2560]: W0711 00:26:54.495798 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.497917 kubelet[2560]: E0711 00:26:54.495811 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.500331 kubelet[2560]: E0711 00:26:54.500201 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.500331 kubelet[2560]: W0711 00:26:54.500235 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.500734 kubelet[2560]: E0711 00:26:54.500645 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.507967 kubelet[2560]: E0711 00:26:54.507919 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xd24s" podUID="7d661932-2475-4fb4-890b-1d7cc7f7d3fc" Jul 11 00:26:54.521675 containerd[1465]: time="2025-07-11T00:26:54.521602399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v2cmq,Uid:b634d58b-dfcc-460f-9791-16f2de153c56,Namespace:calico-system,Attempt:0,}" Jul 11 00:26:54.551397 containerd[1465]: time="2025-07-11T00:26:54.551255764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:54.552086 containerd[1465]: time="2025-07-11T00:26:54.552008742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:54.552158 containerd[1465]: time="2025-07-11T00:26:54.552118311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:54.552381 containerd[1465]: time="2025-07-11T00:26:54.552334745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:54.573778 systemd[1]: Started cri-containerd-b8b29f675371e73eefdf02ed2059860958f750e66a6b355614c1844c483bd13f.scope - libcontainer container b8b29f675371e73eefdf02ed2059860958f750e66a6b355614c1844c483bd13f. Jul 11 00:26:54.591078 kubelet[2560]: E0711 00:26:54.591041 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.591078 kubelet[2560]: W0711 00:26:54.591065 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.591078 kubelet[2560]: E0711 00:26:54.591088 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.591341 kubelet[2560]: E0711 00:26:54.591306 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.591341 kubelet[2560]: W0711 00:26:54.591321 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.591341 kubelet[2560]: E0711 00:26:54.591330 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.591747 kubelet[2560]: E0711 00:26:54.591725 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.591747 kubelet[2560]: W0711 00:26:54.591739 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.591747 kubelet[2560]: E0711 00:26:54.591749 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.592206 kubelet[2560]: E0711 00:26:54.592164 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.592206 kubelet[2560]: W0711 00:26:54.592182 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.592206 kubelet[2560]: E0711 00:26:54.592192 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.592508 kubelet[2560]: E0711 00:26:54.592488 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.592508 kubelet[2560]: W0711 00:26:54.592503 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.592590 kubelet[2560]: E0711 00:26:54.592515 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.592853 kubelet[2560]: E0711 00:26:54.592807 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.592853 kubelet[2560]: W0711 00:26:54.592823 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.592853 kubelet[2560]: E0711 00:26:54.592834 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.593255 kubelet[2560]: E0711 00:26:54.593226 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.593255 kubelet[2560]: W0711 00:26:54.593241 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.593255 kubelet[2560]: E0711 00:26:54.593253 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.593623 kubelet[2560]: E0711 00:26:54.593588 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.593623 kubelet[2560]: W0711 00:26:54.593617 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.593707 kubelet[2560]: E0711 00:26:54.593628 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.593881 kubelet[2560]: E0711 00:26:54.593858 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.593881 kubelet[2560]: W0711 00:26:54.593872 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.593881 kubelet[2560]: E0711 00:26:54.593882 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.594792 kubelet[2560]: E0711 00:26:54.594082 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.594792 kubelet[2560]: W0711 00:26:54.594095 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.594792 kubelet[2560]: E0711 00:26:54.594105 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.594792 kubelet[2560]: E0711 00:26:54.594340 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.594792 kubelet[2560]: W0711 00:26:54.594348 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.594792 kubelet[2560]: E0711 00:26:54.594357 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.594792 kubelet[2560]: E0711 00:26:54.594599 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.594792 kubelet[2560]: W0711 00:26:54.594633 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.594792 kubelet[2560]: E0711 00:26:54.594643 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.595216 kubelet[2560]: E0711 00:26:54.594876 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.595216 kubelet[2560]: W0711 00:26:54.594885 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.595216 kubelet[2560]: E0711 00:26:54.594894 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.595216 kubelet[2560]: E0711 00:26:54.595121 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.595216 kubelet[2560]: W0711 00:26:54.595130 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.595216 kubelet[2560]: E0711 00:26:54.595139 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.595497 kubelet[2560]: E0711 00:26:54.595458 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.595497 kubelet[2560]: W0711 00:26:54.595482 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.595497 kubelet[2560]: E0711 00:26:54.595493 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.596136 kubelet[2560]: E0711 00:26:54.596073 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.596136 kubelet[2560]: W0711 00:26:54.596089 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.596136 kubelet[2560]: E0711 00:26:54.596118 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.597666 kubelet[2560]: E0711 00:26:54.596458 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.597666 kubelet[2560]: W0711 00:26:54.596484 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.597666 kubelet[2560]: E0711 00:26:54.596495 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.597666 kubelet[2560]: E0711 00:26:54.596913 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.597666 kubelet[2560]: W0711 00:26:54.596923 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.597666 kubelet[2560]: E0711 00:26:54.596933 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.597666 kubelet[2560]: E0711 00:26:54.597468 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.597666 kubelet[2560]: W0711 00:26:54.597488 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.597666 kubelet[2560]: E0711 00:26:54.597497 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.598036 kubelet[2560]: E0711 00:26:54.597805 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.598036 kubelet[2560]: W0711 00:26:54.597817 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.598036 kubelet[2560]: E0711 00:26:54.597826 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.599141 containerd[1465]: time="2025-07-11T00:26:54.599078187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v2cmq,Uid:b634d58b-dfcc-460f-9791-16f2de153c56,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8b29f675371e73eefdf02ed2059860958f750e66a6b355614c1844c483bd13f\"" Jul 11 00:26:54.689552 kubelet[2560]: E0711 00:26:54.689482 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.689552 kubelet[2560]: W0711 00:26:54.689528 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.689552 kubelet[2560]: E0711 00:26:54.689560 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.689839 kubelet[2560]: I0711 00:26:54.689624 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7d661932-2475-4fb4-890b-1d7cc7f7d3fc-registration-dir\") pod \"csi-node-driver-xd24s\" (UID: \"7d661932-2475-4fb4-890b-1d7cc7f7d3fc\") " pod="calico-system/csi-node-driver-xd24s" Jul 11 00:26:54.689960 kubelet[2560]: E0711 00:26:54.689929 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.689960 kubelet[2560]: W0711 00:26:54.689948 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.690016 kubelet[2560]: E0711 00:26:54.689962 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.690016 kubelet[2560]: I0711 00:26:54.689992 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d661932-2475-4fb4-890b-1d7cc7f7d3fc-kubelet-dir\") pod \"csi-node-driver-xd24s\" (UID: \"7d661932-2475-4fb4-890b-1d7cc7f7d3fc\") " pod="calico-system/csi-node-driver-xd24s" Jul 11 00:26:54.690483 kubelet[2560]: E0711 00:26:54.690451 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.690483 kubelet[2560]: W0711 00:26:54.690478 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.690555 kubelet[2560]: E0711 00:26:54.690510 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.690803 kubelet[2560]: E0711 00:26:54.690776 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.690803 kubelet[2560]: W0711 00:26:54.690789 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.690803 kubelet[2560]: E0711 00:26:54.690798 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.691091 kubelet[2560]: E0711 00:26:54.691063 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.691091 kubelet[2560]: W0711 00:26:54.691074 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.691091 kubelet[2560]: E0711 00:26:54.691083 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.691188 kubelet[2560]: I0711 00:26:54.691114 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7d661932-2475-4fb4-890b-1d7cc7f7d3fc-varrun\") pod \"csi-node-driver-xd24s\" (UID: \"7d661932-2475-4fb4-890b-1d7cc7f7d3fc\") " pod="calico-system/csi-node-driver-xd24s" Jul 11 00:26:54.691430 kubelet[2560]: E0711 00:26:54.691406 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.691430 kubelet[2560]: W0711 00:26:54.691426 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.691494 kubelet[2560]: E0711 00:26:54.691441 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.691745 kubelet[2560]: E0711 00:26:54.691724 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.691745 kubelet[2560]: W0711 00:26:54.691740 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.691818 kubelet[2560]: E0711 00:26:54.691755 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.692046 kubelet[2560]: E0711 00:26:54.692025 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.692091 kubelet[2560]: W0711 00:26:54.692044 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.692091 kubelet[2560]: E0711 00:26:54.692058 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.692144 kubelet[2560]: I0711 00:26:54.692090 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7d661932-2475-4fb4-890b-1d7cc7f7d3fc-socket-dir\") pod \"csi-node-driver-xd24s\" (UID: \"7d661932-2475-4fb4-890b-1d7cc7f7d3fc\") " pod="calico-system/csi-node-driver-xd24s" Jul 11 00:26:54.692348 kubelet[2560]: E0711 00:26:54.692326 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.692348 kubelet[2560]: W0711 00:26:54.692341 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.692396 kubelet[2560]: E0711 00:26:54.692350 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.692594 kubelet[2560]: E0711 00:26:54.692576 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.692594 kubelet[2560]: W0711 00:26:54.692586 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.692675 kubelet[2560]: E0711 00:26:54.692596 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.692847 kubelet[2560]: E0711 00:26:54.692832 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.692847 kubelet[2560]: W0711 00:26:54.692843 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.692902 kubelet[2560]: E0711 00:26:54.692853 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.692902 kubelet[2560]: I0711 00:26:54.692873 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-976vl\" (UniqueName: \"kubernetes.io/projected/7d661932-2475-4fb4-890b-1d7cc7f7d3fc-kube-api-access-976vl\") pod \"csi-node-driver-xd24s\" (UID: \"7d661932-2475-4fb4-890b-1d7cc7f7d3fc\") " pod="calico-system/csi-node-driver-xd24s" Jul 11 00:26:54.693158 kubelet[2560]: E0711 00:26:54.693135 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.693158 kubelet[2560]: W0711 00:26:54.693154 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.693224 kubelet[2560]: E0711 00:26:54.693169 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.693426 kubelet[2560]: E0711 00:26:54.693407 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.693426 kubelet[2560]: W0711 00:26:54.693424 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.693475 kubelet[2560]: E0711 00:26:54.693436 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.693759 kubelet[2560]: E0711 00:26:54.693740 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.693759 kubelet[2560]: W0711 00:26:54.693756 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.693847 kubelet[2560]: E0711 00:26:54.693769 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.694152 kubelet[2560]: E0711 00:26:54.694107 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.694152 kubelet[2560]: W0711 00:26:54.694138 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.694215 kubelet[2560]: E0711 00:26:54.694166 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.794337 kubelet[2560]: E0711 00:26:54.794102 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.794337 kubelet[2560]: W0711 00:26:54.794131 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.794337 kubelet[2560]: E0711 00:26:54.794155 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.794591 kubelet[2560]: E0711 00:26:54.794476 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.794591 kubelet[2560]: W0711 00:26:54.794515 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.794591 kubelet[2560]: E0711 00:26:54.794543 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.795226 kubelet[2560]: E0711 00:26:54.795143 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.795226 kubelet[2560]: W0711 00:26:54.795211 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.795226 kubelet[2560]: E0711 00:26:54.795239 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.795822 kubelet[2560]: E0711 00:26:54.795776 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.795822 kubelet[2560]: W0711 00:26:54.795810 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.795822 kubelet[2560]: E0711 00:26:54.795838 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.796209 kubelet[2560]: E0711 00:26:54.796191 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.796209 kubelet[2560]: W0711 00:26:54.796206 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.796353 kubelet[2560]: E0711 00:26:54.796219 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.796580 kubelet[2560]: E0711 00:26:54.796561 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.796580 kubelet[2560]: W0711 00:26:54.796576 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.796666 kubelet[2560]: E0711 00:26:54.796588 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.796969 kubelet[2560]: E0711 00:26:54.796947 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.796969 kubelet[2560]: W0711 00:26:54.796960 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.796969 kubelet[2560]: E0711 00:26:54.796969 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.797303 kubelet[2560]: E0711 00:26:54.797279 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.797303 kubelet[2560]: W0711 00:26:54.797296 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.797303 kubelet[2560]: E0711 00:26:54.797305 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.797663 kubelet[2560]: E0711 00:26:54.797641 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.797663 kubelet[2560]: W0711 00:26:54.797659 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.797744 kubelet[2560]: E0711 00:26:54.797675 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.798182 kubelet[2560]: E0711 00:26:54.798166 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.798182 kubelet[2560]: W0711 00:26:54.798179 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.798256 kubelet[2560]: E0711 00:26:54.798190 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.798434 kubelet[2560]: E0711 00:26:54.798418 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.798434 kubelet[2560]: W0711 00:26:54.798430 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.798485 kubelet[2560]: E0711 00:26:54.798439 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.798716 kubelet[2560]: E0711 00:26:54.798698 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.798716 kubelet[2560]: W0711 00:26:54.798711 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.798793 kubelet[2560]: E0711 00:26:54.798720 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.799047 kubelet[2560]: E0711 00:26:54.799022 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.799047 kubelet[2560]: W0711 00:26:54.799043 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.799127 kubelet[2560]: E0711 00:26:54.799062 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.799361 kubelet[2560]: E0711 00:26:54.799334 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.799361 kubelet[2560]: W0711 00:26:54.799346 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.799361 kubelet[2560]: E0711 00:26:54.799356 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.799563 kubelet[2560]: E0711 00:26:54.799549 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.799563 kubelet[2560]: W0711 00:26:54.799560 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.799634 kubelet[2560]: E0711 00:26:54.799568 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.799812 kubelet[2560]: E0711 00:26:54.799794 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.799812 kubelet[2560]: W0711 00:26:54.799805 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.799812 kubelet[2560]: E0711 00:26:54.799813 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.800022 kubelet[2560]: E0711 00:26:54.800008 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.800022 kubelet[2560]: W0711 00:26:54.800019 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.800091 kubelet[2560]: E0711 00:26:54.800027 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.800230 kubelet[2560]: E0711 00:26:54.800211 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.800230 kubelet[2560]: W0711 00:26:54.800222 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.800230 kubelet[2560]: E0711 00:26:54.800231 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.800556 kubelet[2560]: E0711 00:26:54.800520 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.800556 kubelet[2560]: W0711 00:26:54.800545 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.800650 kubelet[2560]: E0711 00:26:54.800565 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.800888 kubelet[2560]: E0711 00:26:54.800856 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.800888 kubelet[2560]: W0711 00:26:54.800872 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.800888 kubelet[2560]: E0711 00:26:54.800883 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.801151 kubelet[2560]: E0711 00:26:54.801134 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.801151 kubelet[2560]: W0711 00:26:54.801145 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.801217 kubelet[2560]: E0711 00:26:54.801156 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.801420 kubelet[2560]: E0711 00:26:54.801399 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.801420 kubelet[2560]: W0711 00:26:54.801414 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.801536 kubelet[2560]: E0711 00:26:54.801426 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.801713 kubelet[2560]: E0711 00:26:54.801683 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.801713 kubelet[2560]: W0711 00:26:54.801706 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.801713 kubelet[2560]: E0711 00:26:54.801717 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.802006 kubelet[2560]: E0711 00:26:54.801984 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.802006 kubelet[2560]: W0711 00:26:54.801997 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.802006 kubelet[2560]: E0711 00:26:54.802008 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:54.802332 kubelet[2560]: E0711 00:26:54.802301 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.802332 kubelet[2560]: W0711 00:26:54.802329 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.802406 kubelet[2560]: E0711 00:26:54.802356 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:54.809970 kubelet[2560]: E0711 00:26:54.809929 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:54.809970 kubelet[2560]: W0711 00:26:54.809952 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:54.809970 kubelet[2560]: E0711 00:26:54.809972 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:55.745264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount33628299.mount: Deactivated successfully. Jul 11 00:26:56.173492 containerd[1465]: time="2025-07-11T00:26:56.173395548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:56.175558 containerd[1465]: time="2025-07-11T00:26:56.175522262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 11 00:26:56.177882 containerd[1465]: time="2025-07-11T00:26:56.177831961Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:56.180344 containerd[1465]: time="2025-07-11T00:26:56.180305257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:56.180931 containerd[1465]: time="2025-07-11T00:26:56.180898000Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.918220081s" Jul 11 00:26:56.180931 containerd[1465]: time="2025-07-11T00:26:56.180927849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 11 00:26:56.182036 containerd[1465]: time="2025-07-11T00:26:56.181992905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 11 00:26:56.199678 containerd[1465]: time="2025-07-11T00:26:56.199581706Z" level=info msg="CreateContainer within sandbox \"eda559403209e459d9d7516965854b72b4cbc948100a4a7fa58def0c0e0a98f3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 11 00:26:56.211155 containerd[1465]: time="2025-07-11T00:26:56.211099722Z" level=info msg="CreateContainer within sandbox \"eda559403209e459d9d7516965854b72b4cbc948100a4a7fa58def0c0e0a98f3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c6be556213980ad509dd6575299744356d845b6baa16a5c7e4689a0a3d082014\"" Jul 11 00:26:56.211805 containerd[1465]: time="2025-07-11T00:26:56.211762805Z" level=info msg="StartContainer for \"c6be556213980ad509dd6575299744356d845b6baa16a5c7e4689a0a3d082014\"" Jul 11 00:26:56.242776 systemd[1]: Started cri-containerd-c6be556213980ad509dd6575299744356d845b6baa16a5c7e4689a0a3d082014.scope - 
libcontainer container c6be556213980ad509dd6575299744356d845b6baa16a5c7e4689a0a3d082014. Jul 11 00:26:56.283010 containerd[1465]: time="2025-07-11T00:26:56.282960761Z" level=info msg="StartContainer for \"c6be556213980ad509dd6575299744356d845b6baa16a5c7e4689a0a3d082014\" returns successfully" Jul 11 00:26:56.502153 kubelet[2560]: E0711 00:26:56.501645 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xd24s" podUID="7d661932-2475-4fb4-890b-1d7cc7f7d3fc" Jul 11 00:26:56.579205 kubelet[2560]: E0711 00:26:56.579075 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:56.609845 kubelet[2560]: E0711 00:26:56.609785 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.609845 kubelet[2560]: W0711 00:26:56.609822 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.609845 kubelet[2560]: E0711 00:26:56.609852 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
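The typha pull recorded above ("Pulled image ... in 1.918220081s") went through containerd's CRI image service. The same pull can be reproduced against the same daemon with containerd's native Go client; a sketch, assuming the default socket path and the k8s.io namespace the CRI plugin uses:

    // pull_image.go: sketch of pulling the same image via containerd's Go API.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The CRI plugin keeps its images in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.2", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name(), "at digest", img.Target().Digest)
    }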
Error: unexpected end of JSON input" Jul 11 00:26:56.610966 kubelet[2560]: E0711 00:26:56.610948 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.610966 kubelet[2560]: W0711 00:26:56.610962 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.611036 kubelet[2560]: E0711 00:26:56.610975 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.611249 kubelet[2560]: E0711 00:26:56.611231 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.611249 kubelet[2560]: W0711 00:26:56.611245 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.611327 kubelet[2560]: E0711 00:26:56.611256 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.611544 kubelet[2560]: E0711 00:26:56.611516 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.611544 kubelet[2560]: W0711 00:26:56.611529 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.611544 kubelet[2560]: E0711 00:26:56.611540 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.612081 kubelet[2560]: E0711 00:26:56.611849 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.612081 kubelet[2560]: W0711 00:26:56.611858 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.612081 kubelet[2560]: E0711 00:26:56.611882 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.612158 kubelet[2560]: E0711 00:26:56.612132 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.612158 kubelet[2560]: W0711 00:26:56.612141 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.612158 kubelet[2560]: E0711 00:26:56.612150 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:56.612444 kubelet[2560]: E0711 00:26:56.612427 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.612444 kubelet[2560]: W0711 00:26:56.612440 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.612514 kubelet[2560]: E0711 00:26:56.612450 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.612685 kubelet[2560]: E0711 00:26:56.612658 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.612685 kubelet[2560]: W0711 00:26:56.612668 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.612685 kubelet[2560]: E0711 00:26:56.612686 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.612878 kubelet[2560]: E0711 00:26:56.612859 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.612878 kubelet[2560]: W0711 00:26:56.612872 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.612878 kubelet[2560]: E0711 00:26:56.612881 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.613056 kubelet[2560]: E0711 00:26:56.613040 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.613056 kubelet[2560]: W0711 00:26:56.613051 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.613111 kubelet[2560]: E0711 00:26:56.613060 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.613247 kubelet[2560]: E0711 00:26:56.613223 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.613247 kubelet[2560]: W0711 00:26:56.613235 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.613247 kubelet[2560]: E0711 00:26:56.613243 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:56.613422 kubelet[2560]: E0711 00:26:56.613404 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.613422 kubelet[2560]: W0711 00:26:56.613414 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.613422 kubelet[2560]: E0711 00:26:56.613422 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.613596 kubelet[2560]: E0711 00:26:56.613580 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.613596 kubelet[2560]: W0711 00:26:56.613591 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.613830 kubelet[2560]: E0711 00:26:56.613598 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.710621 kubelet[2560]: E0711 00:26:56.710545 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.710621 kubelet[2560]: W0711 00:26:56.710577 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.710621 kubelet[2560]: E0711 00:26:56.710599 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.711184 kubelet[2560]: E0711 00:26:56.711146 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.711184 kubelet[2560]: W0711 00:26:56.711174 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.711262 kubelet[2560]: E0711 00:26:56.711203 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.711630 kubelet[2560]: E0711 00:26:56.711587 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.711630 kubelet[2560]: W0711 00:26:56.711618 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.711630 kubelet[2560]: E0711 00:26:56.711629 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:56.712065 kubelet[2560]: E0711 00:26:56.712038 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.712065 kubelet[2560]: W0711 00:26:56.712058 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.712165 kubelet[2560]: E0711 00:26:56.712073 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.712369 kubelet[2560]: E0711 00:26:56.712350 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.712369 kubelet[2560]: W0711 00:26:56.712362 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.712369 kubelet[2560]: E0711 00:26:56.712372 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.712592 kubelet[2560]: E0711 00:26:56.712575 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.712592 kubelet[2560]: W0711 00:26:56.712585 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.712592 kubelet[2560]: E0711 00:26:56.712594 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.712870 kubelet[2560]: E0711 00:26:56.712836 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.712870 kubelet[2560]: W0711 00:26:56.712854 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.712870 kubelet[2560]: E0711 00:26:56.712862 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.713078 kubelet[2560]: E0711 00:26:56.713062 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.713078 kubelet[2560]: W0711 00:26:56.713073 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.713232 kubelet[2560]: E0711 00:26:56.713081 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:56.713310 kubelet[2560]: E0711 00:26:56.713293 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.713310 kubelet[2560]: W0711 00:26:56.713304 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.713373 kubelet[2560]: E0711 00:26:56.713316 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.713761 kubelet[2560]: E0711 00:26:56.713726 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.713761 kubelet[2560]: W0711 00:26:56.713756 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.713863 kubelet[2560]: E0711 00:26:56.713781 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.714010 kubelet[2560]: E0711 00:26:56.713994 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.714010 kubelet[2560]: W0711 00:26:56.714005 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.714058 kubelet[2560]: E0711 00:26:56.714015 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.714271 kubelet[2560]: E0711 00:26:56.714255 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.714271 kubelet[2560]: W0711 00:26:56.714266 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.714271 kubelet[2560]: E0711 00:26:56.714277 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.714663 kubelet[2560]: E0711 00:26:56.714645 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.714663 kubelet[2560]: W0711 00:26:56.714659 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.714743 kubelet[2560]: E0711 00:26:56.714669 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:56.714925 kubelet[2560]: E0711 00:26:56.714908 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.714925 kubelet[2560]: W0711 00:26:56.714921 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.714980 kubelet[2560]: E0711 00:26:56.714930 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.715162 kubelet[2560]: E0711 00:26:56.715144 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.715162 kubelet[2560]: W0711 00:26:56.715155 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.715162 kubelet[2560]: E0711 00:26:56.715163 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.715384 kubelet[2560]: E0711 00:26:56.715366 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.715384 kubelet[2560]: W0711 00:26:56.715378 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.715384 kubelet[2560]: E0711 00:26:56.715387 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.715663 kubelet[2560]: E0711 00:26:56.715632 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.715663 kubelet[2560]: W0711 00:26:56.715645 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.715663 kubelet[2560]: E0711 00:26:56.715654 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:56.716074 kubelet[2560]: E0711 00:26:56.716049 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:56.716074 kubelet[2560]: W0711 00:26:56.716062 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:56.716074 kubelet[2560]: E0711 00:26:56.716071 2560 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:57.454883 containerd[1465]: time="2025-07-11T00:26:57.454823058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:57.455564 containerd[1465]: time="2025-07-11T00:26:57.455476130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 11 00:26:57.456691 containerd[1465]: time="2025-07-11T00:26:57.456654247Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:57.459286 containerd[1465]: time="2025-07-11T00:26:57.459236642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:57.459908 containerd[1465]: time="2025-07-11T00:26:57.459877329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.277851778s" Jul 11 00:26:57.459944 containerd[1465]: time="2025-07-11T00:26:57.459912829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 11 00:26:57.464771 containerd[1465]: time="2025-07-11T00:26:57.464722242Z" level=info msg="CreateContainer within sandbox \"b8b29f675371e73eefdf02ed2059860958f750e66a6b355614c1844c483bd13f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 00:26:57.480956 containerd[1465]: time="2025-07-11T00:26:57.480900998Z" level=info msg="CreateContainer within sandbox \"b8b29f675371e73eefdf02ed2059860958f750e66a6b355614c1844c483bd13f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"29916a68630ac1edbd1d142e0930e458ee35eb9eaff69de66ccc9f5c9fdc275d\"" Jul 11 00:26:57.481442 containerd[1465]: time="2025-07-11T00:26:57.481414120Z" level=info msg="StartContainer for \"29916a68630ac1edbd1d142e0930e458ee35eb9eaff69de66ccc9f5c9fdc275d\"" Jul 11 00:26:57.512877 systemd[1]: Started cri-containerd-29916a68630ac1edbd1d142e0930e458ee35eb9eaff69de66ccc9f5c9fdc275d.scope - libcontainer container 29916a68630ac1edbd1d142e0930e458ee35eb9eaff69de66ccc9f5c9fdc275d. Jul 11 00:26:57.563500 systemd[1]: cri-containerd-29916a68630ac1edbd1d142e0930e458ee35eb9eaff69de66ccc9f5c9fdc275d.scope: Deactivated successfully. 
Jul 11 00:26:57.578828 containerd[1465]: time="2025-07-11T00:26:57.578783665Z" level=info msg="StartContainer for \"29916a68630ac1edbd1d142e0930e458ee35eb9eaff69de66ccc9f5c9fdc275d\" returns successfully" Jul 11 00:26:57.582094 kubelet[2560]: I0711 00:26:57.582053 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:26:57.582659 kubelet[2560]: E0711 00:26:57.582446 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:57.829319 containerd[1465]: time="2025-07-11T00:26:57.829131674Z" level=info msg="shim disconnected" id=29916a68630ac1edbd1d142e0930e458ee35eb9eaff69de66ccc9f5c9fdc275d namespace=k8s.io Jul 11 00:26:57.829319 containerd[1465]: time="2025-07-11T00:26:57.829213387Z" level=warning msg="cleaning up after shim disconnected" id=29916a68630ac1edbd1d142e0930e458ee35eb9eaff69de66ccc9f5c9fdc275d namespace=k8s.io Jul 11 00:26:57.829319 containerd[1465]: time="2025-07-11T00:26:57.829226835Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:26:58.193254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29916a68630ac1edbd1d142e0930e458ee35eb9eaff69de66ccc9f5c9fdc275d-rootfs.mount: Deactivated successfully. Jul 11 00:26:58.499225 kubelet[2560]: E0711 00:26:58.499038 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xd24s" podUID="7d661932-2475-4fb4-890b-1d7cc7f7d3fc" Jul 11 00:26:58.588361 containerd[1465]: time="2025-07-11T00:26:58.588289538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 00:26:58.610549 kubelet[2560]: I0711 00:26:58.610452 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d8b659b74-jgrz6" podStartSLOduration=3.6908036969999998 podStartE2EDuration="5.610429907s" podCreationTimestamp="2025-07-11 00:26:53 +0000 UTC" firstStartedPulling="2025-07-11 00:26:54.262167137 +0000 UTC m=+17.854660646" lastFinishedPulling="2025-07-11 00:26:56.181793247 +0000 UTC m=+19.774286856" observedRunningTime="2025-07-11 00:26:56.596591497 +0000 UTC m=+20.189085006" watchObservedRunningTime="2025-07-11 00:26:58.610429907 +0000 UTC m=+22.202923416" Jul 11 00:27:00.498401 kubelet[2560]: E0711 00:27:00.498288 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xd24s" podUID="7d661932-2475-4fb4-890b-1d7cc7f7d3fc" Jul 11 00:27:01.701963 containerd[1465]: time="2025-07-11T00:27:01.701868818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:01.702850 containerd[1465]: time="2025-07-11T00:27:01.702766577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 11 00:27:01.703994 containerd[1465]: time="2025-07-11T00:27:01.703963067Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:01.706272 
containerd[1465]: time="2025-07-11T00:27:01.706226983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:01.707025 containerd[1465]: time="2025-07-11T00:27:01.706999225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.1186468s" Jul 11 00:27:01.707096 containerd[1465]: time="2025-07-11T00:27:01.707028312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 11 00:27:01.713621 containerd[1465]: time="2025-07-11T00:27:01.713565967Z" level=info msg="CreateContainer within sandbox \"b8b29f675371e73eefdf02ed2059860958f750e66a6b355614c1844c483bd13f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 00:27:01.732396 containerd[1465]: time="2025-07-11T00:27:01.732340901Z" level=info msg="CreateContainer within sandbox \"b8b29f675371e73eefdf02ed2059860958f750e66a6b355614c1844c483bd13f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3d07cc9958bb4a8f3fde628fea240086b41ef8486c18c59826dc25ccf13f0a89\"" Jul 11 00:27:01.732932 containerd[1465]: time="2025-07-11T00:27:01.732901382Z" level=info msg="StartContainer for \"3d07cc9958bb4a8f3fde628fea240086b41ef8486c18c59826dc25ccf13f0a89\"" Jul 11 00:27:01.773808 systemd[1]: Started cri-containerd-3d07cc9958bb4a8f3fde628fea240086b41ef8486c18c59826dc25ccf13f0a89.scope - libcontainer container 3d07cc9958bb4a8f3fde628fea240086b41ef8486c18c59826dc25ccf13f0a89. Jul 11 00:27:01.809717 containerd[1465]: time="2025-07-11T00:27:01.809592403Z" level=info msg="StartContainer for \"3d07cc9958bb4a8f3fde628fea240086b41ef8486c18c59826dc25ccf13f0a89\" returns successfully" Jul 11 00:27:02.498703 kubelet[2560]: E0711 00:27:02.498515 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xd24s" podUID="7d661932-2475-4fb4-890b-1d7cc7f7d3fc" Jul 11 00:27:03.366266 systemd[1]: cri-containerd-3d07cc9958bb4a8f3fde628fea240086b41ef8486c18c59826dc25ccf13f0a89.scope: Deactivated successfully. Jul 11 00:27:03.395674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d07cc9958bb4a8f3fde628fea240086b41ef8486c18c59826dc25ccf13f0a89-rootfs.mount: Deactivated successfully. 
Jul 11 00:27:03.415678 kubelet[2560]: I0711 00:27:03.415637 2560 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 11 00:27:03.563661 containerd[1465]: time="2025-07-11T00:27:03.562875981Z" level=info msg="shim disconnected" id=3d07cc9958bb4a8f3fde628fea240086b41ef8486c18c59826dc25ccf13f0a89 namespace=k8s.io Jul 11 00:27:03.563661 containerd[1465]: time="2025-07-11T00:27:03.563648730Z" level=warning msg="cleaning up after shim disconnected" id=3d07cc9958bb4a8f3fde628fea240086b41ef8486c18c59826dc25ccf13f0a89 namespace=k8s.io Jul 11 00:27:03.563661 containerd[1465]: time="2025-07-11T00:27:03.563666193Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:27:03.578874 systemd[1]: Created slice kubepods-burstable-pod73fc6509_dcba_4609_91c7_d051cb3bbfc4.slice - libcontainer container kubepods-burstable-pod73fc6509_dcba_4609_91c7_d051cb3bbfc4.slice. Jul 11 00:27:03.599920 systemd[1]: Created slice kubepods-burstable-pod532c872b_897c_4658_b37f_c0b4508abd55.slice - libcontainer container kubepods-burstable-pod532c872b_897c_4658_b37f_c0b4508abd55.slice. Jul 11 00:27:03.622160 systemd[1]: Created slice kubepods-besteffort-pod1d7b0523_a28a_4b28_9a16_dbf8c602e2f1.slice - libcontainer container kubepods-besteffort-pod1d7b0523_a28a_4b28_9a16_dbf8c602e2f1.slice. Jul 11 00:27:03.631785 systemd[1]: Created slice kubepods-besteffort-podd4787e60_b0e6_42f0_b414_39732f919000.slice - libcontainer container kubepods-besteffort-podd4787e60_b0e6_42f0_b414_39732f919000.slice. Jul 11 00:27:03.642405 systemd[1]: Created slice kubepods-besteffort-pod26a3e5b9_9cc0_4afc_9ba0_86cf4b152857.slice - libcontainer container kubepods-besteffort-pod26a3e5b9_9cc0_4afc_9ba0_86cf4b152857.slice. Jul 11 00:27:03.653094 systemd[1]: Created slice kubepods-besteffort-podca320139_04b8_474f_b513_d5dae70779c9.slice - libcontainer container kubepods-besteffort-podca320139_04b8_474f_b513_d5dae70779c9.slice. Jul 11 00:27:03.661859 systemd[1]: Created slice kubepods-besteffort-pod9bfd05fa_8a91_44eb_8f96_a9e542aaa056.slice - libcontainer container kubepods-besteffort-pod9bfd05fa_8a91_44eb_8f96_a9e542aaa056.slice. Jul 11 00:27:03.670152 systemd[1]: Created slice kubepods-besteffort-pod5b108b32_37cd_4ffd_8a58_a6fa67ebe9e5.slice - libcontainer container kubepods-besteffort-pod5b108b32_37cd_4ffd_8a58_a6fa67ebe9e5.slice. 
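The slice names above are derived mechanically from each pod's QoS class and UID: dashes in a systemd slice name encode nesting, so the kubelet escapes the UID's hyphens to underscores before placing the unit under kubepods-burstable.slice or kubepods-besteffort.slice. A small sketch reproducing one of the names from this log:

    // slice_name.go: sketch of the cgroup slice naming visible above.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qos, uid string) string {
        // "-" separates nesting levels in slice unit names, so the pod UID's
        // hyphens must be escaped to underscores.
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        // UID of the coredns-674b8bbfcf-msvlj pod, taken from the surrounding log.
        fmt.Println(podSlice("burstable", "73fc6509-dcba-4609-91c7-d051cb3bbfc4"))
        // -> kubepods-burstable-pod73fc6509_dcba_4609_91c7_d051cb3bbfc4.slice
    }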
Jul 11 00:27:03.698088 kubelet[2560]: I0711 00:27:03.697960 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvsdn\" (UniqueName: \"kubernetes.io/projected/26a3e5b9-9cc0-4afc-9ba0-86cf4b152857-kube-api-access-gvsdn\") pod \"calico-apiserver-5f7b5d6c54-pbqrn\" (UID: \"26a3e5b9-9cc0-4afc-9ba0-86cf4b152857\") " pod="calico-apiserver/calico-apiserver-5f7b5d6c54-pbqrn" Jul 11 00:27:03.698088 kubelet[2560]: I0711 00:27:03.698028 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjsnh\" (UniqueName: \"kubernetes.io/projected/d4787e60-b0e6-42f0-b414-39732f919000-kube-api-access-bjsnh\") pod \"calico-apiserver-6f647b777b-qhnsp\" (UID: \"d4787e60-b0e6-42f0-b414-39732f919000\") " pod="calico-apiserver/calico-apiserver-6f647b777b-qhnsp" Jul 11 00:27:03.698088 kubelet[2560]: I0711 00:27:03.698076 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/26a3e5b9-9cc0-4afc-9ba0-86cf4b152857-calico-apiserver-certs\") pod \"calico-apiserver-5f7b5d6c54-pbqrn\" (UID: \"26a3e5b9-9cc0-4afc-9ba0-86cf4b152857\") " pod="calico-apiserver/calico-apiserver-5f7b5d6c54-pbqrn" Jul 11 00:27:03.698088 kubelet[2560]: I0711 00:27:03.698117 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7b0523-a28a-4b28-9a16-dbf8c602e2f1-config\") pod \"goldmane-768f4c5c69-dnd8p\" (UID: \"1d7b0523-a28a-4b28-9a16-dbf8c602e2f1\") " pod="calico-system/goldmane-768f4c5c69-dnd8p" Jul 11 00:27:03.699132 kubelet[2560]: I0711 00:27:03.698156 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d7b0523-a28a-4b28-9a16-dbf8c602e2f1-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-dnd8p\" (UID: \"1d7b0523-a28a-4b28-9a16-dbf8c602e2f1\") " pod="calico-system/goldmane-768f4c5c69-dnd8p" Jul 11 00:27:03.699132 kubelet[2560]: I0711 00:27:03.698178 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1d7b0523-a28a-4b28-9a16-dbf8c602e2f1-goldmane-key-pair\") pod \"goldmane-768f4c5c69-dnd8p\" (UID: \"1d7b0523-a28a-4b28-9a16-dbf8c602e2f1\") " pod="calico-system/goldmane-768f4c5c69-dnd8p" Jul 11 00:27:03.699132 kubelet[2560]: I0711 00:27:03.698193 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khlxg\" (UniqueName: \"kubernetes.io/projected/73fc6509-dcba-4609-91c7-d051cb3bbfc4-kube-api-access-khlxg\") pod \"coredns-674b8bbfcf-msvlj\" (UID: \"73fc6509-dcba-4609-91c7-d051cb3bbfc4\") " pod="kube-system/coredns-674b8bbfcf-msvlj" Jul 11 00:27:03.699132 kubelet[2560]: I0711 00:27:03.698208 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bfd05fa-8a91-44eb-8f96-a9e542aaa056-tigera-ca-bundle\") pod \"calico-kube-controllers-f7c9fffd4-rk9nh\" (UID: \"9bfd05fa-8a91-44eb-8f96-a9e542aaa056\") " pod="calico-system/calico-kube-controllers-f7c9fffd4-rk9nh" Jul 11 00:27:03.699132 kubelet[2560]: I0711 00:27:03.698228 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/73fc6509-dcba-4609-91c7-d051cb3bbfc4-config-volume\") pod \"coredns-674b8bbfcf-msvlj\" (UID: \"73fc6509-dcba-4609-91c7-d051cb3bbfc4\") " pod="kube-system/coredns-674b8bbfcf-msvlj" Jul 11 00:27:03.699353 kubelet[2560]: I0711 00:27:03.698242 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/532c872b-897c-4658-b37f-c0b4508abd55-config-volume\") pod \"coredns-674b8bbfcf-gvf85\" (UID: \"532c872b-897c-4658-b37f-c0b4508abd55\") " pod="kube-system/coredns-674b8bbfcf-gvf85" Jul 11 00:27:03.699353 kubelet[2560]: I0711 00:27:03.698307 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5-whisker-ca-bundle\") pod \"whisker-85d6c9788d-fh75b\" (UID: \"5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5\") " pod="calico-system/whisker-85d6c9788d-fh75b" Jul 11 00:27:03.699353 kubelet[2560]: I0711 00:27:03.698376 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6sxw\" (UniqueName: \"kubernetes.io/projected/9bfd05fa-8a91-44eb-8f96-a9e542aaa056-kube-api-access-x6sxw\") pod \"calico-kube-controllers-f7c9fffd4-rk9nh\" (UID: \"9bfd05fa-8a91-44eb-8f96-a9e542aaa056\") " pod="calico-system/calico-kube-controllers-f7c9fffd4-rk9nh" Jul 11 00:27:03.699353 kubelet[2560]: I0711 00:27:03.698437 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc7sm\" (UniqueName: \"kubernetes.io/projected/ca320139-04b8-474f-b513-d5dae70779c9-kube-api-access-qc7sm\") pod \"calico-apiserver-6f647b777b-hj2zv\" (UID: \"ca320139-04b8-474f-b513-d5dae70779c9\") " pod="calico-apiserver/calico-apiserver-6f647b777b-hj2zv" Jul 11 00:27:03.699353 kubelet[2560]: I0711 00:27:03.698464 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5-whisker-backend-key-pair\") pod \"whisker-85d6c9788d-fh75b\" (UID: \"5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5\") " pod="calico-system/whisker-85d6c9788d-fh75b" Jul 11 00:27:03.699594 kubelet[2560]: I0711 00:27:03.698484 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7pqw\" (UniqueName: \"kubernetes.io/projected/5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5-kube-api-access-q7pqw\") pod \"whisker-85d6c9788d-fh75b\" (UID: \"5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5\") " pod="calico-system/whisker-85d6c9788d-fh75b" Jul 11 00:27:03.699594 kubelet[2560]: I0711 00:27:03.698519 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d4787e60-b0e6-42f0-b414-39732f919000-calico-apiserver-certs\") pod \"calico-apiserver-6f647b777b-qhnsp\" (UID: \"d4787e60-b0e6-42f0-b414-39732f919000\") " pod="calico-apiserver/calico-apiserver-6f647b777b-qhnsp" Jul 11 00:27:03.699594 kubelet[2560]: I0711 00:27:03.698543 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z72f9\" (UniqueName: \"kubernetes.io/projected/1d7b0523-a28a-4b28-9a16-dbf8c602e2f1-kube-api-access-z72f9\") pod \"goldmane-768f4c5c69-dnd8p\" (UID: \"1d7b0523-a28a-4b28-9a16-dbf8c602e2f1\") " 
pod="calico-system/goldmane-768f4c5c69-dnd8p" Jul 11 00:27:03.699594 kubelet[2560]: I0711 00:27:03.698571 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw4vf\" (UniqueName: \"kubernetes.io/projected/532c872b-897c-4658-b37f-c0b4508abd55-kube-api-access-nw4vf\") pod \"coredns-674b8bbfcf-gvf85\" (UID: \"532c872b-897c-4658-b37f-c0b4508abd55\") " pod="kube-system/coredns-674b8bbfcf-gvf85" Jul 11 00:27:03.699594 kubelet[2560]: I0711 00:27:03.698657 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ca320139-04b8-474f-b513-d5dae70779c9-calico-apiserver-certs\") pod \"calico-apiserver-6f647b777b-hj2zv\" (UID: \"ca320139-04b8-474f-b513-d5dae70779c9\") " pod="calico-apiserver/calico-apiserver-6f647b777b-hj2zv" Jul 11 00:27:03.888461 kubelet[2560]: E0711 00:27:03.888261 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:03.889286 containerd[1465]: time="2025-07-11T00:27:03.889027716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-msvlj,Uid:73fc6509-dcba-4609-91c7-d051cb3bbfc4,Namespace:kube-system,Attempt:0,}" Jul 11 00:27:03.911216 kubelet[2560]: E0711 00:27:03.911169 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:03.912026 containerd[1465]: time="2025-07-11T00:27:03.911986408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvf85,Uid:532c872b-897c-4658-b37f-c0b4508abd55,Namespace:kube-system,Attempt:0,}" Jul 11 00:27:03.929101 containerd[1465]: time="2025-07-11T00:27:03.928765683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-dnd8p,Uid:1d7b0523-a28a-4b28-9a16-dbf8c602e2f1,Namespace:calico-system,Attempt:0,}" Jul 11 00:27:03.939339 containerd[1465]: time="2025-07-11T00:27:03.939277417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f647b777b-qhnsp,Uid:d4787e60-b0e6-42f0-b414-39732f919000,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:27:03.958780 containerd[1465]: time="2025-07-11T00:27:03.958548819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f647b777b-hj2zv,Uid:ca320139-04b8-474f-b513-d5dae70779c9,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:27:03.959624 containerd[1465]: time="2025-07-11T00:27:03.958920924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7b5d6c54-pbqrn,Uid:26a3e5b9-9cc0-4afc-9ba0-86cf4b152857,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:27:03.967183 containerd[1465]: time="2025-07-11T00:27:03.966943716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7c9fffd4-rk9nh,Uid:9bfd05fa-8a91-44eb-8f96-a9e542aaa056,Namespace:calico-system,Attempt:0,}" Jul 11 00:27:03.976491 containerd[1465]: time="2025-07-11T00:27:03.976448939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85d6c9788d-fh75b,Uid:5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5,Namespace:calico-system,Attempt:0,}" Jul 11 00:27:04.078949 containerd[1465]: time="2025-07-11T00:27:04.078881083Z" level=error msg="Failed to destroy network for sandbox \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.082569 containerd[1465]: time="2025-07-11T00:27:04.082515160Z" level=error msg="encountered an error cleaning up failed sandbox \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.082657 containerd[1465]: time="2025-07-11T00:27:04.082627883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-dnd8p,Uid:1d7b0523-a28a-4b28-9a16-dbf8c602e2f1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.083331 kubelet[2560]: E0711 00:27:04.083266 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.083448 kubelet[2560]: E0711 00:27:04.083385 2560 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-dnd8p" Jul 11 00:27:04.083448 kubelet[2560]: E0711 00:27:04.083413 2560 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-dnd8p" Jul 11 00:27:04.083514 kubelet[2560]: E0711 00:27:04.083467 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-dnd8p_calico-system(1d7b0523-a28a-4b28-9a16-dbf8c602e2f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-dnd8p_calico-system(1d7b0523-a28a-4b28-9a16-dbf8c602e2f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-dnd8p" podUID="1d7b0523-a28a-4b28-9a16-dbf8c602e2f1" Jul 11 00:27:04.090908 containerd[1465]: time="2025-07-11T00:27:04.090760025Z" level=error msg="Failed to destroy network for sandbox 
\"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.091311 containerd[1465]: time="2025-07-11T00:27:04.091283559Z" level=error msg="encountered an error cleaning up failed sandbox \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.091485 containerd[1465]: time="2025-07-11T00:27:04.091442843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvf85,Uid:532c872b-897c-4658-b37f-c0b4508abd55,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.093036 kubelet[2560]: E0711 00:27:04.091850 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.093036 kubelet[2560]: E0711 00:27:04.091915 2560 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gvf85" Jul 11 00:27:04.093036 kubelet[2560]: E0711 00:27:04.091938 2560 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gvf85" Jul 11 00:27:04.093178 kubelet[2560]: E0711 00:27:04.091990 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gvf85_kube-system(532c872b-897c-4658-b37f-c0b4508abd55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gvf85_kube-system(532c872b-897c-4658-b37f-c0b4508abd55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gvf85" podUID="532c872b-897c-4658-b37f-c0b4508abd55" Jul 11 00:27:04.115555 containerd[1465]: time="2025-07-11T00:27:04.115495480Z" 
level=error msg="Failed to destroy network for sandbox \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.117330 containerd[1465]: time="2025-07-11T00:27:04.117200968Z" level=error msg="encountered an error cleaning up failed sandbox \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.117575 containerd[1465]: time="2025-07-11T00:27:04.117444680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-msvlj,Uid:73fc6509-dcba-4609-91c7-d051cb3bbfc4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.119163 kubelet[2560]: E0711 00:27:04.117970 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.119163 kubelet[2560]: E0711 00:27:04.118048 2560 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-msvlj" Jul 11 00:27:04.119163 kubelet[2560]: E0711 00:27:04.118071 2560 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-msvlj" Jul 11 00:27:04.119328 kubelet[2560]: E0711 00:27:04.118181 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-msvlj_kube-system(73fc6509-dcba-4609-91c7-d051cb3bbfc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-msvlj_kube-system(73fc6509-dcba-4609-91c7-d051cb3bbfc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-msvlj" podUID="73fc6509-dcba-4609-91c7-d051cb3bbfc4" Jul 11 00:27:04.127945 
containerd[1465]: time="2025-07-11T00:27:04.127886233Z" level=error msg="Failed to destroy network for sandbox \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.128567 containerd[1465]: time="2025-07-11T00:27:04.128534644Z" level=error msg="encountered an error cleaning up failed sandbox \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.128965 containerd[1465]: time="2025-07-11T00:27:04.128937589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f647b777b-qhnsp,Uid:d4787e60-b0e6-42f0-b414-39732f919000,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.129532 kubelet[2560]: E0711 00:27:04.129356 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.129532 kubelet[2560]: E0711 00:27:04.129410 2560 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f647b777b-qhnsp" Jul 11 00:27:04.129532 kubelet[2560]: E0711 00:27:04.129434 2560 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f647b777b-qhnsp" Jul 11 00:27:04.129667 kubelet[2560]: E0711 00:27:04.129491 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f647b777b-qhnsp_calico-apiserver(d4787e60-b0e6-42f0-b414-39732f919000)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f647b777b-qhnsp_calico-apiserver(d4787e60-b0e6-42f0-b414-39732f919000)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f647b777b-qhnsp" podUID="d4787e60-b0e6-42f0-b414-39732f919000" Jul 11 00:27:04.152867 containerd[1465]: time="2025-07-11T00:27:04.152262749Z" level=error msg="Failed to destroy network for sandbox \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.153024 containerd[1465]: time="2025-07-11T00:27:04.152991408Z" level=error msg="encountered an error cleaning up failed sandbox \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.153600 containerd[1465]: time="2025-07-11T00:27:04.153562346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7c9fffd4-rk9nh,Uid:9bfd05fa-8a91-44eb-8f96-a9e542aaa056,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.154054 kubelet[2560]: E0711 00:27:04.153992 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.154210 kubelet[2560]: E0711 00:27:04.154076 2560 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f7c9fffd4-rk9nh" Jul 11 00:27:04.154210 kubelet[2560]: E0711 00:27:04.154101 2560 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f7c9fffd4-rk9nh" Jul 11 00:27:04.154210 kubelet[2560]: E0711 00:27:04.154159 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f7c9fffd4-rk9nh_calico-system(9bfd05fa-8a91-44eb-8f96-a9e542aaa056)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f7c9fffd4-rk9nh_calico-system(9bfd05fa-8a91-44eb-8f96-a9e542aaa056)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f7c9fffd4-rk9nh" podUID="9bfd05fa-8a91-44eb-8f96-a9e542aaa056" Jul 11 00:27:04.164495 containerd[1465]: time="2025-07-11T00:27:04.164437586Z" level=error msg="Failed to destroy network for sandbox \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.165091 containerd[1465]: time="2025-07-11T00:27:04.165051348Z" level=error msg="encountered an error cleaning up failed sandbox \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.165142 containerd[1465]: time="2025-07-11T00:27:04.165113221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f647b777b-hj2zv,Uid:ca320139-04b8-474f-b513-d5dae70779c9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.165454 kubelet[2560]: E0711 00:27:04.165404 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.165703 kubelet[2560]: E0711 00:27:04.165655 2560 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f647b777b-hj2zv" Jul 11 00:27:04.165703 kubelet[2560]: E0711 00:27:04.165688 2560 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f647b777b-hj2zv" Jul 11 00:27:04.165934 containerd[1465]: time="2025-07-11T00:27:04.165781009Z" level=error msg="Failed to destroy network for sandbox \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.166258 
containerd[1465]: time="2025-07-11T00:27:04.166229174Z" level=error msg="encountered an error cleaning up failed sandbox \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.166332 containerd[1465]: time="2025-07-11T00:27:04.166288472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85d6c9788d-fh75b,Uid:5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.166599 kubelet[2560]: E0711 00:27:04.166552 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.166737 kubelet[2560]: E0711 00:27:04.166643 2560 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85d6c9788d-fh75b" Jul 11 00:27:04.166737 kubelet[2560]: E0711 00:27:04.166666 2560 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85d6c9788d-fh75b" Jul 11 00:27:04.166737 kubelet[2560]: E0711 00:27:04.165776 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f647b777b-hj2zv_calico-apiserver(ca320139-04b8-474f-b513-d5dae70779c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f647b777b-hj2zv_calico-apiserver(ca320139-04b8-474f-b513-d5dae70779c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f647b777b-hj2zv" podUID="ca320139-04b8-474f-b513-d5dae70779c9" Jul 11 00:27:04.167015 kubelet[2560]: E0711 00:27:04.166725 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-85d6c9788d-fh75b_calico-system(5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-85d6c9788d-fh75b_calico-system(5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85d6c9788d-fh75b" podUID="5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5" Jul 11 00:27:04.184334 containerd[1465]: time="2025-07-11T00:27:04.184080943Z" level=error msg="Failed to destroy network for sandbox \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.184675 containerd[1465]: time="2025-07-11T00:27:04.184644406Z" level=error msg="encountered an error cleaning up failed sandbox \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.184728 containerd[1465]: time="2025-07-11T00:27:04.184694455Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7b5d6c54-pbqrn,Uid:26a3e5b9-9cc0-4afc-9ba0-86cf4b152857,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.184995 kubelet[2560]: E0711 00:27:04.184954 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.185089 kubelet[2560]: E0711 00:27:04.185019 2560 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f7b5d6c54-pbqrn" Jul 11 00:27:04.185205 kubelet[2560]: E0711 00:27:04.185092 2560 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f7b5d6c54-pbqrn" Jul 11 00:27:04.185205 kubelet[2560]: E0711 00:27:04.185175 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5f7b5d6c54-pbqrn_calico-apiserver(26a3e5b9-9cc0-4afc-9ba0-86cf4b152857)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f7b5d6c54-pbqrn_calico-apiserver(26a3e5b9-9cc0-4afc-9ba0-86cf4b152857)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f7b5d6c54-pbqrn" podUID="26a3e5b9-9cc0-4afc-9ba0-86cf4b152857" Jul 11 00:27:04.505450 systemd[1]: Created slice kubepods-besteffort-pod7d661932_2475_4fb4_890b_1d7cc7f7d3fc.slice - libcontainer container kubepods-besteffort-pod7d661932_2475_4fb4_890b_1d7cc7f7d3fc.slice. Jul 11 00:27:04.509070 containerd[1465]: time="2025-07-11T00:27:04.509026265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xd24s,Uid:7d661932-2475-4fb4-890b-1d7cc7f7d3fc,Namespace:calico-system,Attempt:0,}" Jul 11 00:27:04.575474 containerd[1465]: time="2025-07-11T00:27:04.575404419Z" level=error msg="Failed to destroy network for sandbox \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.575928 containerd[1465]: time="2025-07-11T00:27:04.575894978Z" level=error msg="encountered an error cleaning up failed sandbox \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.575980 containerd[1465]: time="2025-07-11T00:27:04.575954175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xd24s,Uid:7d661932-2475-4fb4-890b-1d7cc7f7d3fc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.576306 kubelet[2560]: E0711 00:27:04.576242 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.576392 kubelet[2560]: E0711 00:27:04.576337 2560 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xd24s" Jul 11 00:27:04.576392 kubelet[2560]: E0711 00:27:04.576373 2560 kuberuntime_manager.go:1252] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xd24s" Jul 11 00:27:04.576455 kubelet[2560]: E0711 00:27:04.576433 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xd24s_calico-system(7d661932-2475-4fb4-890b-1d7cc7f7d3fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xd24s_calico-system(7d661932-2475-4fb4-890b-1d7cc7f7d3fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xd24s" podUID="7d661932-2475-4fb4-890b-1d7cc7f7d3fc" Jul 11 00:27:04.578102 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd-shm.mount: Deactivated successfully. Jul 11 00:27:04.612943 kubelet[2560]: I0711 00:27:04.612875 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Jul 11 00:27:04.613629 containerd[1465]: time="2025-07-11T00:27:04.613556653Z" level=info msg="StopPodSandbox for \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\"" Jul 11 00:27:04.615191 kubelet[2560]: I0711 00:27:04.614723 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Jul 11 00:27:04.615435 containerd[1465]: time="2025-07-11T00:27:04.615401938Z" level=info msg="StopPodSandbox for \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\"" Jul 11 00:27:04.617616 containerd[1465]: time="2025-07-11T00:27:04.617400276Z" level=info msg="Ensure that sandbox 7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257 in task-service has been cleanup successfully" Jul 11 00:27:04.623788 containerd[1465]: time="2025-07-11T00:27:04.616534084Z" level=info msg="Ensure that sandbox 1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547 in task-service has been cleanup successfully" Jul 11 00:27:04.624225 kubelet[2560]: I0711 00:27:04.624192 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Jul 11 00:27:04.625778 containerd[1465]: time="2025-07-11T00:27:04.624986549Z" level=info msg="StopPodSandbox for \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\"" Jul 11 00:27:04.628803 containerd[1465]: time="2025-07-11T00:27:04.628215406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 00:27:04.661164 containerd[1465]: time="2025-07-11T00:27:04.661120918Z" level=info msg="Ensure that sandbox 759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28 in task-service has been cleanup successfully" Jul 11 00:27:04.662147 kubelet[2560]: I0711 00:27:04.661535 2560 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Jul 11 00:27:04.667829 containerd[1465]: time="2025-07-11T00:27:04.667235475Z" level=info msg="StopPodSandbox for \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\"" Jul 11 00:27:04.667829 containerd[1465]: time="2025-07-11T00:27:04.667472464Z" level=info msg="Ensure that sandbox 4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a in task-service has been cleanup successfully" Jul 11 00:27:04.671146 kubelet[2560]: I0711 00:27:04.671101 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Jul 11 00:27:04.672459 containerd[1465]: time="2025-07-11T00:27:04.672412229Z" level=info msg="StopPodSandbox for \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\"" Jul 11 00:27:04.673427 containerd[1465]: time="2025-07-11T00:27:04.673400482Z" level=info msg="Ensure that sandbox de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9 in task-service has been cleanup successfully" Jul 11 00:27:04.686702 kubelet[2560]: I0711 00:27:04.686659 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Jul 11 00:27:04.689471 containerd[1465]: time="2025-07-11T00:27:04.689416244Z" level=info msg="StopPodSandbox for \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\"" Jul 11 00:27:04.690042 containerd[1465]: time="2025-07-11T00:27:04.689781125Z" level=info msg="Ensure that sandbox bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef in task-service has been cleanup successfully" Jul 11 00:27:04.691113 kubelet[2560]: I0711 00:27:04.691086 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Jul 11 00:27:04.693124 containerd[1465]: time="2025-07-11T00:27:04.693053428Z" level=info msg="StopPodSandbox for \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\"" Jul 11 00:27:04.694772 containerd[1465]: time="2025-07-11T00:27:04.694741042Z" level=info msg="Ensure that sandbox b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08 in task-service has been cleanup successfully" Jul 11 00:27:04.698843 kubelet[2560]: I0711 00:27:04.698011 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Jul 11 00:27:04.699947 containerd[1465]: time="2025-07-11T00:27:04.699913929Z" level=info msg="StopPodSandbox for \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\"" Jul 11 00:27:04.700244 containerd[1465]: time="2025-07-11T00:27:04.700224362Z" level=info msg="Ensure that sandbox 0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd in task-service has been cleanup successfully" Jul 11 00:27:04.704212 kubelet[2560]: I0711 00:27:04.704187 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Jul 11 00:27:04.705061 containerd[1465]: time="2025-07-11T00:27:04.705035203Z" level=info msg="StopPodSandbox for \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\"" Jul 11 00:27:04.705332 containerd[1465]: time="2025-07-11T00:27:04.705312962Z" level=info msg="Ensure that sandbox 
1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f in task-service has been cleanup successfully" Jul 11 00:27:04.721821 containerd[1465]: time="2025-07-11T00:27:04.721764816Z" level=error msg="StopPodSandbox for \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\" failed" error="failed to destroy network for sandbox \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.722404 kubelet[2560]: E0711 00:27:04.722182 2560 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Jul 11 00:27:04.722404 kubelet[2560]: E0711 00:27:04.722250 2560 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257"} Jul 11 00:27:04.722404 kubelet[2560]: E0711 00:27:04.722326 2560 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"73fc6509-dcba-4609-91c7-d051cb3bbfc4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:27:04.722404 kubelet[2560]: E0711 00:27:04.722352 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"73fc6509-dcba-4609-91c7-d051cb3bbfc4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-msvlj" podUID="73fc6509-dcba-4609-91c7-d051cb3bbfc4" Jul 11 00:27:04.734193 containerd[1465]: time="2025-07-11T00:27:04.732752237Z" level=error msg="StopPodSandbox for \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\" failed" error="failed to destroy network for sandbox \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.734392 kubelet[2560]: E0711 00:27:04.733022 2560 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Jul 11 00:27:04.734392 kubelet[2560]: E0711 00:27:04.733077 2560 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a"} Jul 11 00:27:04.734392 kubelet[2560]: E0711 00:27:04.733124 2560 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9bfd05fa-8a91-44eb-8f96-a9e542aaa056\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:27:04.734392 kubelet[2560]: E0711 00:27:04.733151 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9bfd05fa-8a91-44eb-8f96-a9e542aaa056\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f7c9fffd4-rk9nh" podUID="9bfd05fa-8a91-44eb-8f96-a9e542aaa056" Jul 11 00:27:04.743688 containerd[1465]: time="2025-07-11T00:27:04.743589382Z" level=error msg="StopPodSandbox for \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\" failed" error="failed to destroy network for sandbox \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.744054 kubelet[2560]: E0711 00:27:04.743985 2560 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Jul 11 00:27:04.744143 kubelet[2560]: E0711 00:27:04.744059 2560 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547"} Jul 11 00:27:04.744143 kubelet[2560]: E0711 00:27:04.744103 2560 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d7b0523-a28a-4b28-9a16-dbf8c602e2f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:27:04.744143 kubelet[2560]: E0711 00:27:04.744127 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d7b0523-a28a-4b28-9a16-dbf8c602e2f1\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-dnd8p" podUID="1d7b0523-a28a-4b28-9a16-dbf8c602e2f1" Jul 11 00:27:04.750406 containerd[1465]: time="2025-07-11T00:27:04.750309856Z" level=error msg="StopPodSandbox for \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\" failed" error="failed to destroy network for sandbox \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.750736 kubelet[2560]: E0711 00:27:04.750674 2560 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Jul 11 00:27:04.750835 kubelet[2560]: E0711 00:27:04.750751 2560 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f"} Jul 11 00:27:04.750835 kubelet[2560]: E0711 00:27:04.750811 2560 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca320139-04b8-474f-b513-d5dae70779c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:27:04.751034 kubelet[2560]: E0711 00:27:04.750835 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ca320139-04b8-474f-b513-d5dae70779c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f647b777b-hj2zv" podUID="ca320139-04b8-474f-b513-d5dae70779c9" Jul 11 00:27:04.754008 containerd[1465]: time="2025-07-11T00:27:04.753946508Z" level=error msg="StopPodSandbox for \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\" failed" error="failed to destroy network for sandbox \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.754761 kubelet[2560]: E0711 00:27:04.754486 2560 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Jul 11 00:27:04.754761 kubelet[2560]: E0711 00:27:04.754524 2560 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef"} Jul 11 00:27:04.754761 kubelet[2560]: E0711 00:27:04.754554 2560 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"532c872b-897c-4658-b37f-c0b4508abd55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:27:04.754761 kubelet[2560]: E0711 00:27:04.754579 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"532c872b-897c-4658-b37f-c0b4508abd55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gvf85" podUID="532c872b-897c-4658-b37f-c0b4508abd55" Jul 11 00:27:04.763781 containerd[1465]: time="2025-07-11T00:27:04.763644453Z" level=error msg="StopPodSandbox for \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\" failed" error="failed to destroy network for sandbox \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.766978 containerd[1465]: time="2025-07-11T00:27:04.765772816Z" level=error msg="StopPodSandbox for \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\" failed" error="failed to destroy network for sandbox \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.767214 kubelet[2560]: E0711 00:27:04.767153 2560 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Jul 11 00:27:04.767269 kubelet[2560]: E0711 00:27:04.767223 2560 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9"} Jul 11 00:27:04.767312 kubelet[2560]: E0711 
00:27:04.767282 2560 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26a3e5b9-9cc0-4afc-9ba0-86cf4b152857\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:27:04.767394 kubelet[2560]: E0711 00:27:04.767319 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26a3e5b9-9cc0-4afc-9ba0-86cf4b152857\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f7b5d6c54-pbqrn" podUID="26a3e5b9-9cc0-4afc-9ba0-86cf4b152857" Jul 11 00:27:04.767574 containerd[1465]: time="2025-07-11T00:27:04.766277524Z" level=error msg="StopPodSandbox for \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\" failed" error="failed to destroy network for sandbox \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.767764 kubelet[2560]: E0711 00:27:04.767706 2560 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Jul 11 00:27:04.767764 kubelet[2560]: E0711 00:27:04.767759 2560 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08"} Jul 11 00:27:04.767919 kubelet[2560]: E0711 00:27:04.767779 2560 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4787e60-b0e6-42f0-b414-39732f919000\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:27:04.767919 kubelet[2560]: E0711 00:27:04.767838 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4787e60-b0e6-42f0-b414-39732f919000\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6f647b777b-qhnsp" podUID="d4787e60-b0e6-42f0-b414-39732f919000" Jul 11 00:27:04.767919 kubelet[2560]: E0711 00:27:04.767841 2560 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Jul 11 00:27:04.767919 kubelet[2560]: E0711 00:27:04.767899 2560 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd"} Jul 11 00:27:04.768090 containerd[1465]: time="2025-07-11T00:27:04.767704081Z" level=error msg="StopPodSandbox for \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\" failed" error="failed to destroy network for sandbox \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:27:04.768130 kubelet[2560]: E0711 00:27:04.767935 2560 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d661932-2475-4fb4-890b-1d7cc7f7d3fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:27:04.768130 kubelet[2560]: E0711 00:27:04.767961 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d661932-2475-4fb4-890b-1d7cc7f7d3fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xd24s" podUID="7d661932-2475-4fb4-890b-1d7cc7f7d3fc" Jul 11 00:27:04.768130 kubelet[2560]: E0711 00:27:04.768080 2560 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Jul 11 00:27:04.768130 kubelet[2560]: E0711 00:27:04.768110 2560 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28"} Jul 11 00:27:04.768255 kubelet[2560]: E0711 00:27:04.768138 2560 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:27:04.768255 kubelet[2560]: E0711 00:27:04.768156 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85d6c9788d-fh75b" podUID="5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5" Jul 11 00:27:06.987143 kubelet[2560]: I0711 00:27:06.987089 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:27:06.987707 kubelet[2560]: E0711 00:27:06.987484 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:07.710156 kubelet[2560]: E0711 00:27:07.710102 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:11.190583 systemd[1]: Started sshd@9-10.0.0.159:22-10.0.0.1:44530.service - OpenSSH per-connection server daemon (10.0.0.1:44530). Jul 11 00:27:11.232992 sshd[3851]: Accepted publickey for core from 10.0.0.1 port 44530 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:27:11.234878 sshd[3851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:27:11.240623 systemd-logind[1445]: New session 10 of user core. Jul 11 00:27:11.248824 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:27:11.414652 sshd[3851]: pam_unix(sshd:session): session closed for user core Jul 11 00:27:11.418104 systemd[1]: sshd@9-10.0.0.159:22-10.0.0.1:44530.service: Deactivated successfully. Jul 11 00:27:11.420495 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:27:11.423147 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:27:11.424381 systemd-logind[1445]: Removed session 10. Jul 11 00:27:12.014831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount101152348.mount: Deactivated successfully. 
Jul 11 00:27:13.186029 containerd[1465]: time="2025-07-11T00:27:13.185898012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:13.187052 containerd[1465]: time="2025-07-11T00:27:13.187010375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 11 00:27:13.190075 containerd[1465]: time="2025-07-11T00:27:13.189147738Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:13.195583 containerd[1465]: time="2025-07-11T00:27:13.194563140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:13.195950 containerd[1465]: time="2025-07-11T00:27:13.195898701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 8.565546644s" Jul 11 00:27:13.196020 containerd[1465]: time="2025-07-11T00:27:13.195952426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 11 00:27:13.256772 containerd[1465]: time="2025-07-11T00:27:13.256706518Z" level=info msg="CreateContainer within sandbox \"b8b29f675371e73eefdf02ed2059860958f750e66a6b355614c1844c483bd13f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 00:27:13.434085 containerd[1465]: time="2025-07-11T00:27:13.434030928Z" level=info msg="CreateContainer within sandbox \"b8b29f675371e73eefdf02ed2059860958f750e66a6b355614c1844c483bd13f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d924ceeb6ef2ae6c1a3040803b4c14b9d2cfc4afb5a34ff2b32959d5892f9cfc\"" Jul 11 00:27:13.434626 containerd[1465]: time="2025-07-11T00:27:13.434570566Z" level=info msg="StartContainer for \"d924ceeb6ef2ae6c1a3040803b4c14b9d2cfc4afb5a34ff2b32959d5892f9cfc\"" Jul 11 00:27:13.484774 systemd[1]: Started cri-containerd-d924ceeb6ef2ae6c1a3040803b4c14b9d2cfc4afb5a34ff2b32959d5892f9cfc.scope - libcontainer container d924ceeb6ef2ae6c1a3040803b4c14b9d2cfc4afb5a34ff2b32959d5892f9cfc. Jul 11 00:27:13.593397 containerd[1465]: time="2025-07-11T00:27:13.593103022Z" level=info msg="StartContainer for \"d924ceeb6ef2ae6c1a3040803b4c14b9d2cfc4afb5a34ff2b32959d5892f9cfc\" returns successfully" Jul 11 00:27:13.623963 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 00:27:13.624200 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
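
At 00:27:13 the calico/node image pull completes (~158 MB in about 8.57 s) and the node container starts; the WireGuard module load right after is consistent with calico-node probing kernel support for its optional WireGuard encryption. For reference, a pull like the one logged above can be driven directly through the containerd Go client; a minimal sketch, assuming the default socket path and the "k8s.io" namespace used by CRI:

    // pull_sketch.go -- reproducing the pull above with the containerd Go
    // client (github.com/containerd/containerd).
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.2",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        // The log above reports ~158 MB read for this image in 8.565546644s.
        fmt.Println("pulled:", img.Name())
    }
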
Jul 11 00:27:13.704917 containerd[1465]: time="2025-07-11T00:27:13.704869136Z" level=info msg="StopPodSandbox for \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\"" Jul 11 00:27:13.780734 kubelet[2560]: I0711 00:27:13.779069 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v2cmq" podStartSLOduration=1.182111893 podStartE2EDuration="19.779037573s" podCreationTimestamp="2025-07-11 00:26:54 +0000 UTC" firstStartedPulling="2025-07-11 00:26:54.600294012 +0000 UTC m=+18.192787521" lastFinishedPulling="2025-07-11 00:27:13.197219692 +0000 UTC m=+36.789713201" observedRunningTime="2025-07-11 00:27:13.765031457 +0000 UTC m=+37.357524966" watchObservedRunningTime="2025-07-11 00:27:13.779037573 +0000 UTC m=+37.371531082" Jul 11 00:27:13.888297 containerd[1465]: 2025-07-11 00:27:13.783 [INFO][3930] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Jul 11 00:27:13.888297 containerd[1465]: 2025-07-11 00:27:13.784 [INFO][3930] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" iface="eth0" netns="/var/run/netns/cni-d4c982d2-7016-762b-30e8-e1e176d3cce0" Jul 11 00:27:13.888297 containerd[1465]: 2025-07-11 00:27:13.784 [INFO][3930] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" iface="eth0" netns="/var/run/netns/cni-d4c982d2-7016-762b-30e8-e1e176d3cce0" Jul 11 00:27:13.888297 containerd[1465]: 2025-07-11 00:27:13.785 [INFO][3930] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" iface="eth0" netns="/var/run/netns/cni-d4c982d2-7016-762b-30e8-e1e176d3cce0" Jul 11 00:27:13.888297 containerd[1465]: 2025-07-11 00:27:13.785 [INFO][3930] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Jul 11 00:27:13.888297 containerd[1465]: 2025-07-11 00:27:13.785 [INFO][3930] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Jul 11 00:27:13.888297 containerd[1465]: 2025-07-11 00:27:13.869 [INFO][3944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" HandleID="k8s-pod-network.759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Workload="localhost-k8s-whisker--85d6c9788d--fh75b-eth0" Jul 11 00:27:13.888297 containerd[1465]: 2025-07-11 00:27:13.870 [INFO][3944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:13.888297 containerd[1465]: 2025-07-11 00:27:13.870 [INFO][3944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:13.888297 containerd[1465]: 2025-07-11 00:27:13.877 [WARNING][3944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" HandleID="k8s-pod-network.759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Workload="localhost-k8s-whisker--85d6c9788d--fh75b-eth0" Jul 11 00:27:13.888297 containerd[1465]: 2025-07-11 00:27:13.877 [INFO][3944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" HandleID="k8s-pod-network.759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Workload="localhost-k8s-whisker--85d6c9788d--fh75b-eth0" Jul 11 00:27:13.888297 containerd[1465]: 2025-07-11 00:27:13.878 [INFO][3944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:13.888297 containerd[1465]: 2025-07-11 00:27:13.882 [INFO][3930] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Jul 11 00:27:13.889752 containerd[1465]: time="2025-07-11T00:27:13.889702365Z" level=info msg="TearDown network for sandbox \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\" successfully" Jul 11 00:27:13.889752 containerd[1465]: time="2025-07-11T00:27:13.889742434Z" level=info msg="StopPodSandbox for \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\" returns successfully" Jul 11 00:27:13.960132 kubelet[2560]: I0711 00:27:13.960072 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5-whisker-backend-key-pair\") pod \"5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5\" (UID: \"5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5\") " Jul 11 00:27:13.960132 kubelet[2560]: I0711 00:27:13.960113 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5-whisker-ca-bundle\") pod \"5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5\" (UID: \"5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5\") " Jul 11 00:27:13.960132 kubelet[2560]: I0711 00:27:13.960131 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7pqw\" (UniqueName: \"kubernetes.io/projected/5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5-kube-api-access-q7pqw\") pod \"5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5\" (UID: \"5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5\") " Jul 11 00:27:13.960752 kubelet[2560]: I0711 00:27:13.960693 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5" (UID: "5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:27:13.964472 kubelet[2560]: I0711 00:27:13.964419 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5-kube-api-access-q7pqw" (OuterVolumeSpecName: "kube-api-access-q7pqw") pod "5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5" (UID: "5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5"). InnerVolumeSpecName "kube-api-access-q7pqw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:27:13.965660 kubelet[2560]: I0711 00:27:13.965595 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5" (UID: "5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:27:14.061196 kubelet[2560]: I0711 00:27:14.061060 2560 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 00:27:14.061196 kubelet[2560]: I0711 00:27:14.061101 2560 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 00:27:14.061196 kubelet[2560]: I0711 00:27:14.061116 2560 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q7pqw\" (UniqueName: \"kubernetes.io/projected/5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5-kube-api-access-q7pqw\") on node \"localhost\" DevicePath \"\"" Jul 11 00:27:14.210349 systemd[1]: run-netns-cni\x2dd4c982d2\x2d7016\x2d762b\x2d30e8\x2de1e176d3cce0.mount: Deactivated successfully. Jul 11 00:27:14.210486 systemd[1]: var-lib-kubelet-pods-5b108b32\x2d37cd\x2d4ffd\x2d8a58\x2da6fa67ebe9e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq7pqw.mount: Deactivated successfully. Jul 11 00:27:14.210574 systemd[1]: var-lib-kubelet-pods-5b108b32\x2d37cd\x2d4ffd\x2d8a58\x2da6fa67ebe9e5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 11 00:27:14.506966 systemd[1]: Removed slice kubepods-besteffort-pod5b108b32_37cd_4ffd_8a58_a6fa67ebe9e5.slice - libcontainer container kubepods-besteffort-pod5b108b32_37cd_4ffd_8a58_a6fa67ebe9e5.slice. Jul 11 00:27:14.814517 systemd[1]: Created slice kubepods-besteffort-pod55695e6f_2278_49ae_b890_7d6c9b182a18.slice - libcontainer container kubepods-besteffort-pod55695e6f_2278_49ae_b890_7d6c9b182a18.slice. 
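
The \x2d sequences in the mount unit names above are systemd's unit-name escaping, not corruption: when a unit name is derived from a path, '/' becomes '-' and any other byte outside [A-Za-z0-9:_.] (including a literal '-') is encoded as \xXX, per systemd-escape(1). A simplified sketch of the rule (it ignores edge cases such as a leading dot):

    // systemd_escape.go -- sketch of the path escaping behind unit names like
    // "run-netns-cni\x2dd4c982d2-...". Simplified; the real systemd-escape
    // also handles leading dots and empty paths.
    package main

    import "fmt"

    func escapePath(p string) string {
        var out []byte
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                out = append(out, '-') // path separators become dashes
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                out = append(out, c)
            default:
                // everything else, including a literal '-', becomes \xXX
                out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
            }
        }
        return string(out)
    }

    func main() {
        // Matches the netns mount unit deactivated at 00:27:14 (plus ".mount").
        fmt.Println(escapePath("run/netns/cni-d4c982d2-7016-762b-30e8-e1e176d3cce0") + ".mount")
    }
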
Jul 11 00:27:14.866761 kubelet[2560]: I0711 00:27:14.866314 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95wpg\" (UniqueName: \"kubernetes.io/projected/55695e6f-2278-49ae-b890-7d6c9b182a18-kube-api-access-95wpg\") pod \"whisker-8f5b58dcb-df62x\" (UID: \"55695e6f-2278-49ae-b890-7d6c9b182a18\") " pod="calico-system/whisker-8f5b58dcb-df62x" Jul 11 00:27:14.866761 kubelet[2560]: I0711 00:27:14.866435 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/55695e6f-2278-49ae-b890-7d6c9b182a18-whisker-backend-key-pair\") pod \"whisker-8f5b58dcb-df62x\" (UID: \"55695e6f-2278-49ae-b890-7d6c9b182a18\") " pod="calico-system/whisker-8f5b58dcb-df62x" Jul 11 00:27:14.866761 kubelet[2560]: I0711 00:27:14.866465 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55695e6f-2278-49ae-b890-7d6c9b182a18-whisker-ca-bundle\") pod \"whisker-8f5b58dcb-df62x\" (UID: \"55695e6f-2278-49ae-b890-7d6c9b182a18\") " pod="calico-system/whisker-8f5b58dcb-df62x" Jul 11 00:27:15.118064 containerd[1465]: time="2025-07-11T00:27:15.118003075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8f5b58dcb-df62x,Uid:55695e6f-2278-49ae-b890-7d6c9b182a18,Namespace:calico-system,Attempt:0,}" Jul 11 00:27:15.207643 kernel: bpftool[4163]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 11 00:27:15.273723 systemd-networkd[1391]: cali2e340285fb5: Link UP Jul 11 00:27:15.273983 systemd-networkd[1391]: cali2e340285fb5: Gained carrier Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.183 [INFO][4139] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.197 [INFO][4139] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8f5b58dcb--df62x-eth0 whisker-8f5b58dcb- calico-system 55695e6f-2278-49ae-b890-7d6c9b182a18 988 0 2025-07-11 00:27:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8f5b58dcb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8f5b58dcb-df62x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2e340285fb5 [] [] }} ContainerID="6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" Namespace="calico-system" Pod="whisker-8f5b58dcb-df62x" WorkloadEndpoint="localhost-k8s-whisker--8f5b58dcb--df62x-" Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.197 [INFO][4139] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" Namespace="calico-system" Pod="whisker-8f5b58dcb-df62x" WorkloadEndpoint="localhost-k8s-whisker--8f5b58dcb--df62x-eth0" Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.228 [INFO][4157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" HandleID="k8s-pod-network.6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" Workload="localhost-k8s-whisker--8f5b58dcb--df62x-eth0" Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.228 [INFO][4157] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" HandleID="k8s-pod-network.6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" Workload="localhost-k8s-whisker--8f5b58dcb--df62x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000287630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8f5b58dcb-df62x", "timestamp":"2025-07-11 00:27:15.228370287 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.228 [INFO][4157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.228 [INFO][4157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.228 [INFO][4157] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.237 [INFO][4157] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" host="localhost" Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.243 [INFO][4157] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.247 [INFO][4157] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.249 [INFO][4157] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.250 [INFO][4157] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.250 [INFO][4157] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" host="localhost" Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.252 [INFO][4157] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5 Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.255 [INFO][4157] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" host="localhost" Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.261 [INFO][4157] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" host="localhost" Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.261 [INFO][4157] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" host="localhost" Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.261 [INFO][4157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:27:15.292630 containerd[1465]: 2025-07-11 00:27:15.261 [INFO][4157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" HandleID="k8s-pod-network.6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" Workload="localhost-k8s-whisker--8f5b58dcb--df62x-eth0" Jul 11 00:27:15.293411 containerd[1465]: 2025-07-11 00:27:15.266 [INFO][4139] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" Namespace="calico-system" Pod="whisker-8f5b58dcb-df62x" WorkloadEndpoint="localhost-k8s-whisker--8f5b58dcb--df62x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8f5b58dcb--df62x-eth0", GenerateName:"whisker-8f5b58dcb-", Namespace:"calico-system", SelfLink:"", UID:"55695e6f-2278-49ae-b890-7d6c9b182a18", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 27, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8f5b58dcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8f5b58dcb-df62x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e340285fb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:15.293411 containerd[1465]: 2025-07-11 00:27:15.266 [INFO][4139] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" Namespace="calico-system" Pod="whisker-8f5b58dcb-df62x" WorkloadEndpoint="localhost-k8s-whisker--8f5b58dcb--df62x-eth0" Jul 11 00:27:15.293411 containerd[1465]: 2025-07-11 00:27:15.266 [INFO][4139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e340285fb5 ContainerID="6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" Namespace="calico-system" Pod="whisker-8f5b58dcb-df62x" WorkloadEndpoint="localhost-k8s-whisker--8f5b58dcb--df62x-eth0" Jul 11 00:27:15.293411 containerd[1465]: 2025-07-11 00:27:15.273 [INFO][4139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" Namespace="calico-system" Pod="whisker-8f5b58dcb-df62x" WorkloadEndpoint="localhost-k8s-whisker--8f5b58dcb--df62x-eth0" Jul 11 00:27:15.293411 containerd[1465]: 2025-07-11 00:27:15.274 [INFO][4139] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" Namespace="calico-system" Pod="whisker-8f5b58dcb-df62x" WorkloadEndpoint="localhost-k8s-whisker--8f5b58dcb--df62x-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8f5b58dcb--df62x-eth0", GenerateName:"whisker-8f5b58dcb-", Namespace:"calico-system", SelfLink:"", UID:"55695e6f-2278-49ae-b890-7d6c9b182a18", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 27, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8f5b58dcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5", Pod:"whisker-8f5b58dcb-df62x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e340285fb5", MAC:"9e:d6:bc:ac:59:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:15.293411 containerd[1465]: 2025-07-11 00:27:15.287 [INFO][4139] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5" Namespace="calico-system" Pod="whisker-8f5b58dcb-df62x" WorkloadEndpoint="localhost-k8s-whisker--8f5b58dcb--df62x-eth0" Jul 11 00:27:15.323083 containerd[1465]: time="2025-07-11T00:27:15.322969800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:27:15.323083 containerd[1465]: time="2025-07-11T00:27:15.323031621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:27:15.323083 containerd[1465]: time="2025-07-11T00:27:15.323047311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:15.323337 containerd[1465]: time="2025-07-11T00:27:15.323283113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:15.348763 systemd[1]: Started cri-containerd-6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5.scope - libcontainer container 6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5.
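
The trace above is Calico's block-affinity IPAM in full: acquire the host-wide lock, look up the blocks affine to "localhost", load 192.168.88.128/26, claim the first free address for the whisker pod (192.168.88.129), write the block back, and release the lock. A toy model of the claim step, with an in-memory used-set standing in for the real datastore and lock:

    // ipam_toy.go -- toy model of the block-affinity claim traced above.
    package main

    import (
        "fmt"
        "net"
    )

    func nextIP(ip net.IP) net.IP {
        out := make(net.IP, len(ip))
        copy(out, ip)
        for i := len(out) - 1; i >= 0; i-- {
            out[i]++
            if out[i] != 0 {
                break
            }
        }
        return out
    }

    // claim walks the block and takes the first unused address, mirroring
    // "Attempting to assign 1 addresses from block block=192.168.88.128/26".
    func claim(block *net.IPNet, used map[string]bool) net.IP {
        for ip := block.IP.Mask(block.Mask); block.Contains(ip); ip = nextIP(ip) {
            if !used[ip.String()] {
                used[ip.String()] = true
                return ip
            }
        }
        return nil
    }

    func main() {
        _, block, _ := net.ParseCIDR("192.168.88.128/26")
        used := map[string]bool{"192.168.88.128": true} // network address reserved
        fmt.Println(claim(block, used)) // 192.168.88.129, as assigned to the whisker pod
    }
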
Jul 11 00:27:15.364110 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:27:15.391587 containerd[1465]: time="2025-07-11T00:27:15.391453855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8f5b58dcb-df62x,Uid:55695e6f-2278-49ae-b890-7d6c9b182a18,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5\"" Jul 11 00:27:15.396488 containerd[1465]: time="2025-07-11T00:27:15.396449154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 00:27:15.478636 systemd-networkd[1391]: vxlan.calico: Link UP Jul 11 00:27:15.478645 systemd-networkd[1391]: vxlan.calico: Gained carrier Jul 11 00:27:15.499429 containerd[1465]: time="2025-07-11T00:27:15.499370624Z" level=info msg="StopPodSandbox for \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\"" Jul 11 00:27:15.598306 containerd[1465]: 2025-07-11 00:27:15.555 [INFO][4256] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Jul 11 00:27:15.598306 containerd[1465]: 2025-07-11 00:27:15.555 [INFO][4256] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" iface="eth0" netns="/var/run/netns/cni-c7d5b400-ee1d-35a4-e885-f93633a36dba" Jul 11 00:27:15.598306 containerd[1465]: 2025-07-11 00:27:15.556 [INFO][4256] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" iface="eth0" netns="/var/run/netns/cni-c7d5b400-ee1d-35a4-e885-f93633a36dba" Jul 11 00:27:15.598306 containerd[1465]: 2025-07-11 00:27:15.557 [INFO][4256] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" iface="eth0" netns="/var/run/netns/cni-c7d5b400-ee1d-35a4-e885-f93633a36dba" Jul 11 00:27:15.598306 containerd[1465]: 2025-07-11 00:27:15.557 [INFO][4256] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Jul 11 00:27:15.598306 containerd[1465]: 2025-07-11 00:27:15.557 [INFO][4256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Jul 11 00:27:15.598306 containerd[1465]: 2025-07-11 00:27:15.582 [INFO][4274] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" HandleID="k8s-pod-network.1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:15.598306 containerd[1465]: 2025-07-11 00:27:15.582 [INFO][4274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:15.598306 containerd[1465]: 2025-07-11 00:27:15.582 [INFO][4274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:15.598306 containerd[1465]: 2025-07-11 00:27:15.590 [WARNING][4274] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" HandleID="k8s-pod-network.1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:15.598306 containerd[1465]: 2025-07-11 00:27:15.590 [INFO][4274] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" HandleID="k8s-pod-network.1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:15.598306 containerd[1465]: 2025-07-11 00:27:15.591 [INFO][4274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:15.598306 containerd[1465]: 2025-07-11 00:27:15.594 [INFO][4256] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Jul 11 00:27:15.602046 containerd[1465]: time="2025-07-11T00:27:15.599228401Z" level=info msg="TearDown network for sandbox \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\" successfully" Jul 11 00:27:15.602046 containerd[1465]: time="2025-07-11T00:27:15.599260944Z" level=info msg="StopPodSandbox for \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\" returns successfully" Jul 11 00:27:15.602425 containerd[1465]: time="2025-07-11T00:27:15.602379556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f647b777b-hj2zv,Uid:ca320139-04b8-474f-b513-d5dae70779c9,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:27:15.605306 systemd[1]: run-netns-cni\x2dc7d5b400\x2dee1d\x2d35a4\x2de885\x2df93633a36dba.mount: Deactivated successfully. Jul 11 00:27:15.728205 systemd-networkd[1391]: cali1e2b514fea0: Link UP Jul 11 00:27:15.730166 systemd-networkd[1391]: cali1e2b514fea0: Gained carrier Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.656 [INFO][4282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0 calico-apiserver-6f647b777b- calico-apiserver ca320139-04b8-474f-b513-d5dae70779c9 1002 0 2025-07-11 00:26:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f647b777b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f647b777b-hj2zv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1e2b514fea0 [] [] }} ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-hj2zv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-" Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.656 [INFO][4282] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-hj2zv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.685 [INFO][4296] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" 
HandleID="k8s-pod-network.4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.685 [INFO][4296] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" HandleID="k8s-pod-network.4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f647b777b-hj2zv", "timestamp":"2025-07-11 00:27:15.685100651 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.685 [INFO][4296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.685 [INFO][4296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.686 [INFO][4296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.694 [INFO][4296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" host="localhost" Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.699 [INFO][4296] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.703 [INFO][4296] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.705 [INFO][4296] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.707 [INFO][4296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.707 [INFO][4296] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" host="localhost" Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.708 [INFO][4296] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54 Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.711 [INFO][4296] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" host="localhost" Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.716 [INFO][4296] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" host="localhost" Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.716 [INFO][4296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" host="localhost" Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.716 [INFO][4296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:15.744293 containerd[1465]: 2025-07-11 00:27:15.716 [INFO][4296] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" HandleID="k8s-pod-network.4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:15.744959 containerd[1465]: 2025-07-11 00:27:15.720 [INFO][4282] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-hj2zv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0", GenerateName:"calico-apiserver-6f647b777b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca320139-04b8-474f-b513-d5dae70779c9", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f647b777b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f647b777b-hj2zv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e2b514fea0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:15.744959 containerd[1465]: 2025-07-11 00:27:15.721 [INFO][4282] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-hj2zv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:15.744959 containerd[1465]: 2025-07-11 00:27:15.721 [INFO][4282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e2b514fea0 ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-hj2zv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:15.744959 containerd[1465]: 2025-07-11 00:27:15.730 [INFO][4282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-hj2zv"
WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:15.744959 containerd[1465]: 2025-07-11 00:27:15.730 [INFO][4282] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-hj2zv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0", GenerateName:"calico-apiserver-6f647b777b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca320139-04b8-474f-b513-d5dae70779c9", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f647b777b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54", Pod:"calico-apiserver-6f647b777b-hj2zv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e2b514fea0", MAC:"36:98:b4:14:f3:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:15.744959 containerd[1465]: 2025-07-11 00:27:15.739 [INFO][4282] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-hj2zv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:15.768648 containerd[1465]: time="2025-07-11T00:27:15.767334140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:27:15.768648 containerd[1465]: time="2025-07-11T00:27:15.767405279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:27:15.768648 containerd[1465]: time="2025-07-11T00:27:15.767421231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:15.768648 containerd[1465]: time="2025-07-11T00:27:15.767518371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:15.788131 systemd[1]: Started cri-containerd-4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54.scope - libcontainer container 4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54.
Jul 11 00:27:15.804750 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:27:15.848421 containerd[1465]: time="2025-07-11T00:27:15.848369421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f647b777b-hj2zv,Uid:ca320139-04b8-474f-b513-d5dae70779c9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54\"" Jul 11 00:27:16.430501 systemd[1]: Started sshd@10-10.0.0.159:22-10.0.0.1:44534.service - OpenSSH per-connection server daemon (10.0.0.1:44534). Jul 11 00:27:16.473766 sshd[4396]: Accepted publickey for core from 10.0.0.1 port 44534 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:27:16.475787 sshd[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:27:16.483765 systemd-logind[1445]: New session 11 of user core. Jul 11 00:27:16.493760 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 00:27:16.501593 kubelet[2560]: I0711 00:27:16.501546 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5" path="/var/lib/kubelet/pods/5b108b32-37cd-4ffd-8a58-a6fa67ebe9e5/volumes" Jul 11 00:27:16.642031 sshd[4396]: pam_unix(sshd:session): session closed for user core Jul 11 00:27:16.645467 systemd[1]: sshd@10-10.0.0.159:22-10.0.0.1:44534.service: Deactivated successfully. Jul 11 00:27:16.649199 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:27:16.650466 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:27:16.651684 systemd-logind[1445]: Removed session 11. Jul 11 00:27:16.700565 containerd[1465]: time="2025-07-11T00:27:16.700441437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:16.701443 containerd[1465]: time="2025-07-11T00:27:16.701408290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 11 00:27:16.702626 containerd[1465]: time="2025-07-11T00:27:16.702597229Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:16.704813 containerd[1465]: time="2025-07-11T00:27:16.704790415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:16.705407 containerd[1465]: time="2025-07-11T00:27:16.705385270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.308902851s" Jul 11 00:27:16.705455 containerd[1465]: time="2025-07-11T00:27:16.705412173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 11 00:27:16.706655 containerd[1465]: time="2025-07-11T00:27:16.706346743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 
00:27:16.710035 containerd[1465]: time="2025-07-11T00:27:16.710008115Z" level=info msg="CreateContainer within sandbox \"6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 00:27:16.724685 containerd[1465]: time="2025-07-11T00:27:16.724543115Z" level=info msg="CreateContainer within sandbox \"6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"94abc995b9e97542fa0678e3324454b7ab713fd1fcbfb8e9fbc52c1486f37b99\"" Jul 11 00:27:16.726623 containerd[1465]: time="2025-07-11T00:27:16.726516230Z" level=info msg="StartContainer for \"94abc995b9e97542fa0678e3324454b7ab713fd1fcbfb8e9fbc52c1486f37b99\"" Jul 11 00:27:16.767732 systemd[1]: Started cri-containerd-94abc995b9e97542fa0678e3324454b7ab713fd1fcbfb8e9fbc52c1486f37b99.scope - libcontainer container 94abc995b9e97542fa0678e3324454b7ab713fd1fcbfb8e9fbc52c1486f37b99. Jul 11 00:27:16.807143 containerd[1465]: time="2025-07-11T00:27:16.807085523Z" level=info msg="StartContainer for \"94abc995b9e97542fa0678e3324454b7ab713fd1fcbfb8e9fbc52c1486f37b99\" returns successfully" Jul 11 00:27:17.300758 systemd-networkd[1391]: vxlan.calico: Gained IPv6LL Jul 11 00:27:17.301084 systemd-networkd[1391]: cali2e340285fb5: Gained IPv6LL Jul 11 00:27:17.499727 containerd[1465]: time="2025-07-11T00:27:17.499266155Z" level=info msg="StopPodSandbox for \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\"" Jul 11 00:27:17.499727 containerd[1465]: time="2025-07-11T00:27:17.499310491Z" level=info msg="StopPodSandbox for \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\"" Jul 11 00:27:17.500000 containerd[1465]: time="2025-07-11T00:27:17.499740924Z" level=info msg="StopPodSandbox for \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\"" Jul 11 00:27:17.500000 containerd[1465]: time="2025-07-11T00:27:17.499271254Z" level=info msg="StopPodSandbox for \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\"" Jul 11 00:27:17.623700 systemd-networkd[1391]: cali1e2b514fea0: Gained IPv6LL Jul 11 00:27:17.638583 containerd[1465]: 2025-07-11 00:27:17.580 [INFO][4500] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Jul 11 00:27:17.638583 containerd[1465]: 2025-07-11 00:27:17.581 [INFO][4500] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" iface="eth0" netns="/var/run/netns/cni-55e578c9-8040-57bd-b52b-75735cf13da9" Jul 11 00:27:17.638583 containerd[1465]: 2025-07-11 00:27:17.582 [INFO][4500] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" iface="eth0" netns="/var/run/netns/cni-55e578c9-8040-57bd-b52b-75735cf13da9" Jul 11 00:27:17.638583 containerd[1465]: 2025-07-11 00:27:17.584 [INFO][4500] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" iface="eth0" netns="/var/run/netns/cni-55e578c9-8040-57bd-b52b-75735cf13da9" Jul 11 00:27:17.638583 containerd[1465]: 2025-07-11 00:27:17.584 [INFO][4500] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Jul 11 00:27:17.638583 containerd[1465]: 2025-07-11 00:27:17.584 [INFO][4500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Jul 11 00:27:17.638583 containerd[1465]: 2025-07-11 00:27:17.624 [INFO][4543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" HandleID="k8s-pod-network.b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:17.638583 containerd[1465]: 2025-07-11 00:27:17.624 [INFO][4543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:17.638583 containerd[1465]: 2025-07-11 00:27:17.625 [INFO][4543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:17.638583 containerd[1465]: 2025-07-11 00:27:17.631 [WARNING][4543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" HandleID="k8s-pod-network.b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:17.638583 containerd[1465]: 2025-07-11 00:27:17.631 [INFO][4543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" HandleID="k8s-pod-network.b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:17.638583 containerd[1465]: 2025-07-11 00:27:17.633 [INFO][4543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:17.638583 containerd[1465]: 2025-07-11 00:27:17.636 [INFO][4500] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Jul 11 00:27:17.639731 containerd[1465]: time="2025-07-11T00:27:17.639666306Z" level=info msg="TearDown network for sandbox \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\" successfully" Jul 11 00:27:17.639831 containerd[1465]: time="2025-07-11T00:27:17.639816270Z" level=info msg="StopPodSandbox for \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\" returns successfully" Jul 11 00:27:17.641852 systemd[1]: run-netns-cni\x2d55e578c9\x2d8040\x2d57bd\x2db52b\x2d75735cf13da9.mount: Deactivated successfully. Jul 11 00:27:17.643494 containerd[1465]: time="2025-07-11T00:27:17.643448461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f647b777b-qhnsp,Uid:d4787e60-b0e6-42f0-b414-39732f919000,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:27:17.652203 containerd[1465]: 2025-07-11 00:27:17.576 [INFO][4501] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Jul 11 00:27:17.652203 containerd[1465]: 2025-07-11 00:27:17.577 [INFO][4501] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" iface="eth0" netns="/var/run/netns/cni-901892b4-c564-a324-a820-a496a855f1a8" Jul 11 00:27:17.652203 containerd[1465]: 2025-07-11 00:27:17.577 [INFO][4501] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" iface="eth0" netns="/var/run/netns/cni-901892b4-c564-a324-a820-a496a855f1a8" Jul 11 00:27:17.652203 containerd[1465]: 2025-07-11 00:27:17.577 [INFO][4501] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" iface="eth0" netns="/var/run/netns/cni-901892b4-c564-a324-a820-a496a855f1a8" Jul 11 00:27:17.652203 containerd[1465]: 2025-07-11 00:27:17.577 [INFO][4501] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Jul 11 00:27:17.652203 containerd[1465]: 2025-07-11 00:27:17.577 [INFO][4501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Jul 11 00:27:17.652203 containerd[1465]: 2025-07-11 00:27:17.625 [INFO][4532] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" HandleID="k8s-pod-network.1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Workload="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:17.652203 containerd[1465]: 2025-07-11 00:27:17.629 [INFO][4532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:17.652203 containerd[1465]: 2025-07-11 00:27:17.633 [INFO][4532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:17.652203 containerd[1465]: 2025-07-11 00:27:17.643 [WARNING][4532] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" HandleID="k8s-pod-network.1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Workload="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:17.652203 containerd[1465]: 2025-07-11 00:27:17.643 [INFO][4532] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" HandleID="k8s-pod-network.1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Workload="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:17.652203 containerd[1465]: 2025-07-11 00:27:17.646 [INFO][4532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:17.652203 containerd[1465]: 2025-07-11 00:27:17.649 [INFO][4501] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Jul 11 00:27:17.652718 containerd[1465]: time="2025-07-11T00:27:17.652689884Z" level=info msg="TearDown network for sandbox \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\" successfully" Jul 11 00:27:17.652785 containerd[1465]: time="2025-07-11T00:27:17.652770262Z" level=info msg="StopPodSandbox for \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\" returns successfully" Jul 11 00:27:17.655347 systemd[1]: run-netns-cni\x2d901892b4\x2dc564\x2da324\x2da820\x2da496a855f1a8.mount: Deactivated successfully. 
Jul 11 00:27:17.655721 containerd[1465]: time="2025-07-11T00:27:17.655547589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-dnd8p,Uid:1d7b0523-a28a-4b28-9a16-dbf8c602e2f1,Namespace:calico-system,Attempt:1,}" Jul 11 00:27:17.663472 containerd[1465]: 2025-07-11 00:27:17.568 [INFO][4502] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Jul 11 00:27:17.663472 containerd[1465]: 2025-07-11 00:27:17.570 [INFO][4502] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" iface="eth0" netns="/var/run/netns/cni-f5cb0e9d-55c2-1cb0-5bce-edaeca05f23b" Jul 11 00:27:17.663472 containerd[1465]: 2025-07-11 00:27:17.570 [INFO][4502] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" iface="eth0" netns="/var/run/netns/cni-f5cb0e9d-55c2-1cb0-5bce-edaeca05f23b" Jul 11 00:27:17.663472 containerd[1465]: 2025-07-11 00:27:17.572 [INFO][4502] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" iface="eth0" netns="/var/run/netns/cni-f5cb0e9d-55c2-1cb0-5bce-edaeca05f23b" Jul 11 00:27:17.663472 containerd[1465]: 2025-07-11 00:27:17.572 [INFO][4502] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Jul 11 00:27:17.663472 containerd[1465]: 2025-07-11 00:27:17.572 [INFO][4502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Jul 11 00:27:17.663472 containerd[1465]: 2025-07-11 00:27:17.632 [INFO][4530] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" HandleID="k8s-pod-network.de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Workload="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:17.663472 containerd[1465]: 2025-07-11 00:27:17.632 [INFO][4530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:17.663472 containerd[1465]: 2025-07-11 00:27:17.646 [INFO][4530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:17.663472 containerd[1465]: 2025-07-11 00:27:17.655 [WARNING][4530] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" HandleID="k8s-pod-network.de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Workload="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:17.663472 containerd[1465]: 2025-07-11 00:27:17.655 [INFO][4530] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" HandleID="k8s-pod-network.de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Workload="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:17.663472 containerd[1465]: 2025-07-11 00:27:17.657 [INFO][4530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:17.663472 containerd[1465]: 2025-07-11 00:27:17.660 [INFO][4502] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Jul 11 00:27:17.665706 containerd[1465]: time="2025-07-11T00:27:17.665673584Z" level=info msg="TearDown network for sandbox \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\" successfully" Jul 11 00:27:17.665706 containerd[1465]: time="2025-07-11T00:27:17.665703122Z" level=info msg="StopPodSandbox for \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\" returns successfully" Jul 11 00:27:17.666492 containerd[1465]: time="2025-07-11T00:27:17.666449774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7b5d6c54-pbqrn,Uid:26a3e5b9-9cc0-4afc-9ba0-86cf4b152857,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:27:17.666704 systemd[1]: run-netns-cni\x2df5cb0e9d\x2d55c2\x2d1cb0\x2d5bce\x2dedaeca05f23b.mount: Deactivated successfully. Jul 11 00:27:17.676046 containerd[1465]: 2025-07-11 00:27:17.593 [INFO][4499] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Jul 11 00:27:17.676046 containerd[1465]: 2025-07-11 00:27:17.593 [INFO][4499] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" iface="eth0" netns="/var/run/netns/cni-84cd0179-270a-4906-9843-a28abebe6c47" Jul 11 00:27:17.676046 containerd[1465]: 2025-07-11 00:27:17.593 [INFO][4499] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" iface="eth0" netns="/var/run/netns/cni-84cd0179-270a-4906-9843-a28abebe6c47" Jul 11 00:27:17.676046 containerd[1465]: 2025-07-11 00:27:17.594 [INFO][4499] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" iface="eth0" netns="/var/run/netns/cni-84cd0179-270a-4906-9843-a28abebe6c47" Jul 11 00:27:17.676046 containerd[1465]: 2025-07-11 00:27:17.594 [INFO][4499] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Jul 11 00:27:17.676046 containerd[1465]: 2025-07-11 00:27:17.594 [INFO][4499] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Jul 11 00:27:17.676046 containerd[1465]: 2025-07-11 00:27:17.637 [INFO][4549] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" HandleID="k8s-pod-network.4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Workload="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:17.676046 containerd[1465]: 2025-07-11 00:27:17.637 [INFO][4549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:17.676046 containerd[1465]: 2025-07-11 00:27:17.658 [INFO][4549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:17.676046 containerd[1465]: 2025-07-11 00:27:17.663 [WARNING][4549] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" HandleID="k8s-pod-network.4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Workload="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:17.676046 containerd[1465]: 2025-07-11 00:27:17.663 [INFO][4549] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" HandleID="k8s-pod-network.4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Workload="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:17.676046 containerd[1465]: 2025-07-11 00:27:17.669 [INFO][4549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:17.676046 containerd[1465]: 2025-07-11 00:27:17.672 [INFO][4499] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Jul 11 00:27:17.676421 containerd[1465]: time="2025-07-11T00:27:17.676242337Z" level=info msg="TearDown network for sandbox \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\" successfully" Jul 11 00:27:17.676421 containerd[1465]: time="2025-07-11T00:27:17.676283046Z" level=info msg="StopPodSandbox for \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\" returns successfully" Jul 11 00:27:17.676990 containerd[1465]: time="2025-07-11T00:27:17.676959731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7c9fffd4-rk9nh,Uid:9bfd05fa-8a91-44eb-8f96-a9e542aaa056,Namespace:calico-system,Attempt:1,}" Jul 11 00:27:17.679051 systemd[1]: run-netns-cni\x2d84cd0179\x2d270a\x2d4906\x2d9843\x2da28abebe6c47.mount: Deactivated successfully. 
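Annotation: all three teardowns above log the same `[WARNING] … Asked to release address but it doesn't exist. Ignoring`: the CNI DEL path treats a missing allocation as already-released, so a retried or duplicated DEL can never wedge sandbox teardown. A toy Go model of that warn-and-ignore contract (the `ipamStore` type is invented for illustration; Calico's real store is backed by the datastore, not a map):

```go
package main

import "log"

// ipamStore is a stand-in for the allocation store, keyed by handle ID.
type ipamStore struct{ byHandle map[string]string }

// ReleaseByHandle mirrors the behaviour in the log: releasing an address
// that is already gone is treated as success, not an error.
func (s *ipamStore) ReleaseByHandle(handle string) {
	if _, ok := s.byHandle[handle]; !ok {
		log.Printf("[WARNING] Asked to release address but it doesn't exist. Ignoring handle=%s", handle)
		return
	}
	delete(s.byHandle, handle)
	log.Printf("[INFO] released handle=%s", handle)
}

func main() {
	s := &ipamStore{byHandle: map[string]string{}}
	// A second DEL for the same sandbox finds nothing left to release.
	s.ReleaseByHandle("k8s-pod-network.4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a")
}
```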
Jul 11 00:27:17.867468 systemd-networkd[1391]: cali94d89816dd3: Link UP Jul 11 00:27:17.868283 systemd-networkd[1391]: cali94d89816dd3: Gained carrier Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.726 [INFO][4564] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0 calico-apiserver-6f647b777b- calico-apiserver d4787e60-b0e6-42f0-b414-39732f919000 1030 0 2025-07-11 00:26:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f647b777b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f647b777b-qhnsp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali94d89816dd3 [] [] }} ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-qhnsp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-" Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.726 [INFO][4564] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-qhnsp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.823 [INFO][4616] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" HandleID="k8s-pod-network.1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.823 [INFO][4616] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" HandleID="k8s-pod-network.1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003644a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f647b777b-qhnsp", "timestamp":"2025-07-11 00:27:17.823776996 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.824 [INFO][4616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.824 [INFO][4616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.824 [INFO][4616] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.832 [INFO][4616] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" host="localhost" Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.837 [INFO][4616] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.842 [INFO][4616] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.844 [INFO][4616] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.845 [INFO][4616] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.845 [INFO][4616] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" host="localhost" Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.847 [INFO][4616] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6 Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.852 [INFO][4616] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" host="localhost" Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.857 [INFO][4616] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" host="localhost" Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.857 [INFO][4616] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" host="localhost" Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.857 [INFO][4616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
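Annotation: the assignment walk above (try block affinity 192.168.88.128/26 → load block → claim) lands on 192.168.88.131 because the first three addresses of the block are presumably already in use on this node. Calico's real IPAM tracks ordinals in a block bitmap with handles; the selection idea reduces to "first free address in the affine block", sketched here with the standard library's net/netip:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks a CIDR block in order and returns the first address that is
// not already allocated.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	// Assumption for illustration: .128-.130 were claimed by earlier pods.
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.128"): true,
		netip.MustParseAddr("192.168.88.129"): true,
		netip.MustParseAddr("192.168.88.130"): true,
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println(a) // 192.168.88.131, matching the claim in the log
	}
}
```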
Jul 11 00:27:17.886166 containerd[1465]: 2025-07-11 00:27:17.857 [INFO][4616] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" HandleID="k8s-pod-network.1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:17.887339 containerd[1465]: 2025-07-11 00:27:17.861 [INFO][4564] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-qhnsp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0", GenerateName:"calico-apiserver-6f647b777b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4787e60-b0e6-42f0-b414-39732f919000", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f647b777b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f647b777b-qhnsp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94d89816dd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:17.887339 containerd[1465]: 2025-07-11 00:27:17.862 [INFO][4564] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-qhnsp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:17.887339 containerd[1465]: 2025-07-11 00:27:17.862 [INFO][4564] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94d89816dd3 ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-qhnsp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:17.887339 containerd[1465]: 2025-07-11 00:27:17.868 [INFO][4564] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-qhnsp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:17.887339 containerd[1465]: 2025-07-11 00:27:17.869 [INFO][4564] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-qhnsp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0", GenerateName:"calico-apiserver-6f647b777b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4787e60-b0e6-42f0-b414-39732f919000", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f647b777b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6", Pod:"calico-apiserver-6f647b777b-qhnsp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94d89816dd3", MAC:"86:cd:be:92:ac:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:17.887339 containerd[1465]: 2025-07-11 00:27:17.882 [INFO][4564] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" Namespace="calico-apiserver" Pod="calico-apiserver-6f647b777b-qhnsp" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:17.909547 containerd[1465]: time="2025-07-11T00:27:17.909259568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:27:17.909547 containerd[1465]: time="2025-07-11T00:27:17.909328322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:27:17.909547 containerd[1465]: time="2025-07-11T00:27:17.909345797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:17.909547 containerd[1465]: time="2025-07-11T00:27:17.909449961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:17.932828 systemd[1]: Started cri-containerd-1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6.scope - libcontainer container 1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6. 
Jul 11 00:27:17.948146 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:27:17.968684 systemd-networkd[1391]: calib9ad08f285b: Link UP Jul 11 00:27:17.969740 systemd-networkd[1391]: calib9ad08f285b: Gained carrier Jul 11 00:27:17.996537 containerd[1465]: time="2025-07-11T00:27:17.996475854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f647b777b-qhnsp,Uid:d4787e60-b0e6-42f0-b414-39732f919000,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6\"" Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.768 [INFO][4575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0 goldmane-768f4c5c69- calico-system 1d7b0523-a28a-4b28-9a16-dbf8c602e2f1 1031 0 2025-07-11 00:26:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-dnd8p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib9ad08f285b [] [] }} ContainerID="b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" Namespace="calico-system" Pod="goldmane-768f4c5c69-dnd8p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--dnd8p-" Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.768 [INFO][4575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" Namespace="calico-system" Pod="goldmane-768f4c5c69-dnd8p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.834 [INFO][4631] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" HandleID="k8s-pod-network.b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" Workload="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.834 [INFO][4631] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" HandleID="k8s-pod-network.b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" Workload="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001aa490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-dnd8p", "timestamp":"2025-07-11 00:27:17.834741323 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.834 [INFO][4631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.857 [INFO][4631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.857 [INFO][4631] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.933 [INFO][4631] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" host="localhost" Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.939 [INFO][4631] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.944 [INFO][4631] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.946 [INFO][4631] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.949 [INFO][4631] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.949 [INFO][4631] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" host="localhost" Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.951 [INFO][4631] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970 Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.955 [INFO][4631] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" host="localhost" Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.960 [INFO][4631] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" host="localhost" Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.961 [INFO][4631] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" host="localhost" Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.961 [INFO][4631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:27:17.997923 containerd[1465]: 2025-07-11 00:27:17.961 [INFO][4631] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" HandleID="k8s-pod-network.b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" Workload="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:17.999470 containerd[1465]: 2025-07-11 00:27:17.964 [INFO][4575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" Namespace="calico-system" Pod="goldmane-768f4c5c69-dnd8p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1d7b0523-a28a-4b28-9a16-dbf8c602e2f1", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-dnd8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9ad08f285b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:17.999470 containerd[1465]: 2025-07-11 00:27:17.964 [INFO][4575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" Namespace="calico-system" Pod="goldmane-768f4c5c69-dnd8p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:17.999470 containerd[1465]: 2025-07-11 00:27:17.964 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9ad08f285b ContainerID="b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" Namespace="calico-system" Pod="goldmane-768f4c5c69-dnd8p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:17.999470 containerd[1465]: 2025-07-11 00:27:17.974 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" Namespace="calico-system" Pod="goldmane-768f4c5c69-dnd8p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:17.999470 containerd[1465]: 2025-07-11 00:27:17.974 [INFO][4575] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" Namespace="calico-system" Pod="goldmane-768f4c5c69-dnd8p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1d7b0523-a28a-4b28-9a16-dbf8c602e2f1", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970", Pod:"goldmane-768f4c5c69-dnd8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9ad08f285b", MAC:"9a:59:f8:9f:01:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:17.999470 containerd[1465]: 2025-07-11 00:27:17.986 [INFO][4575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970" Namespace="calico-system" Pod="goldmane-768f4c5c69-dnd8p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:18.022384 containerd[1465]: time="2025-07-11T00:27:18.021983578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:27:18.022384 containerd[1465]: time="2025-07-11T00:27:18.022066640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:27:18.022384 containerd[1465]: time="2025-07-11T00:27:18.022082501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:18.022384 containerd[1465]: time="2025-07-11T00:27:18.022260219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:18.062833 systemd[1]: Started cri-containerd-b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970.scope - libcontainer container b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970. 
Jul 11 00:27:18.080453 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:27:18.120192 containerd[1465]: time="2025-07-11T00:27:18.120149601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-dnd8p,Uid:1d7b0523-a28a-4b28-9a16-dbf8c602e2f1,Namespace:calico-system,Attempt:1,} returns sandbox id \"b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970\"" Jul 11 00:27:18.427780 systemd-networkd[1391]: cali7f574f60324: Link UP Jul 11 00:27:18.429805 systemd-networkd[1391]: cali7f574f60324: Gained carrier Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:17.750 [INFO][4586] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0 calico-apiserver-5f7b5d6c54- calico-apiserver 26a3e5b9-9cc0-4afc-9ba0-86cf4b152857 1029 0 2025-07-11 00:26:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f7b5d6c54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f7b5d6c54-pbqrn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7f574f60324 [] [] }} ContainerID="9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" Namespace="calico-apiserver" Pod="calico-apiserver-5f7b5d6c54-pbqrn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-" Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:17.751 [INFO][4586] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" Namespace="calico-apiserver" Pod="calico-apiserver-5f7b5d6c54-pbqrn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:17.835 [INFO][4624] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" HandleID="k8s-pod-network.9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" Workload="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:17.835 [INFO][4624] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" HandleID="k8s-pod-network.9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" Workload="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030c000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f7b5d6c54-pbqrn", "timestamp":"2025-07-11 00:27:17.835264558 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:17.835 [INFO][4624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:17.961 [INFO][4624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:17.961 [INFO][4624] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:18.034 [INFO][4624] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" host="localhost" Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:18.040 [INFO][4624] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:18.049 [INFO][4624] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:18.051 [INFO][4624] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:18.054 [INFO][4624] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:18.054 [INFO][4624] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" host="localhost" Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:18.056 [INFO][4624] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4 Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:18.089 [INFO][4624] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" host="localhost" Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:18.420 [INFO][4624] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" host="localhost" Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:18.420 [INFO][4624] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" host="localhost" Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:18.420 [INFO][4624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
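Annotation: the timestamps above show why the lines keep repeating "About to acquire host-wide IPAM lock": every assignment on the node serializes on one lock. The 5f7b5d6c54-pbqrn request queued at 00:27:17.835 but only acquired the lock at 00:27:17.961, after the goldmane assignment released it, and then held it until 00:27:18.420 while the block write completed. A toy model of that serialization (pod names from the log; the ordinal arithmetic is invented for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu   sync.Mutex
	next int // next free ordinal in 192.168.88.128/26
}

func (h *hostIPAM) assign() string {
	h.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer h.mu.Unlock()
	ip := fmt.Sprintf("192.168.88.%d", 128+h.next)
	h.next++
	return ip
}

func main() {
	h := &hostIPAM{next: 3} // assume .128-.130 already taken
	var wg sync.WaitGroup
	for _, pod := range []string{
		"calico-apiserver-6f647b777b-qhnsp",
		"goldmane-768f4c5c69-dnd8p",
		"calico-apiserver-5f7b5d6c54-pbqrn",
	} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			// Addresses are unique; which pod gets which depends purely on
			// lock-acquisition order, exactly as in the log.
			fmt.Println(p, "->", h.assign())
		}(pod)
	}
	wg.Wait()
}
```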
Jul 11 00:27:18.449220 containerd[1465]: 2025-07-11 00:27:18.420 [INFO][4624] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" HandleID="k8s-pod-network.9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" Workload="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:18.449921 containerd[1465]: 2025-07-11 00:27:18.423 [INFO][4586] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" Namespace="calico-apiserver" Pod="calico-apiserver-5f7b5d6c54-pbqrn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0", GenerateName:"calico-apiserver-5f7b5d6c54-", Namespace:"calico-apiserver", SelfLink:"", UID:"26a3e5b9-9cc0-4afc-9ba0-86cf4b152857", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f7b5d6c54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f7b5d6c54-pbqrn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f574f60324", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:18.449921 containerd[1465]: 2025-07-11 00:27:18.423 [INFO][4586] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" Namespace="calico-apiserver" Pod="calico-apiserver-5f7b5d6c54-pbqrn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:18.449921 containerd[1465]: 2025-07-11 00:27:18.423 [INFO][4586] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f574f60324 ContainerID="9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" Namespace="calico-apiserver" Pod="calico-apiserver-5f7b5d6c54-pbqrn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:18.449921 containerd[1465]: 2025-07-11 00:27:18.431 [INFO][4586] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" Namespace="calico-apiserver" Pod="calico-apiserver-5f7b5d6c54-pbqrn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:18.449921 containerd[1465]: 2025-07-11 00:27:18.433 [INFO][4586] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" Namespace="calico-apiserver" Pod="calico-apiserver-5f7b5d6c54-pbqrn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0", GenerateName:"calico-apiserver-5f7b5d6c54-", Namespace:"calico-apiserver", SelfLink:"", UID:"26a3e5b9-9cc0-4afc-9ba0-86cf4b152857", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f7b5d6c54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4", Pod:"calico-apiserver-5f7b5d6c54-pbqrn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f574f60324", MAC:"ba:1f:74:1f:a5:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:18.449921 containerd[1465]: 2025-07-11 00:27:18.444 [INFO][4586] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4" Namespace="calico-apiserver" Pod="calico-apiserver-5f7b5d6c54-pbqrn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:18.483176 systemd-networkd[1391]: cali6ffcc6d4897: Link UP Jul 11 00:27:18.485453 systemd-networkd[1391]: cali6ffcc6d4897: Gained carrier Jul 11 00:27:18.491192 containerd[1465]: time="2025-07-11T00:27:18.490922242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:27:18.491192 containerd[1465]: time="2025-07-11T00:27:18.490999282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:27:18.491726 containerd[1465]: time="2025-07-11T00:27:18.491021656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:18.491726 containerd[1465]: time="2025-07-11T00:27:18.491137033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:18.501327 containerd[1465]: time="2025-07-11T00:27:18.501273299Z" level=info msg="StopPodSandbox for \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\"" Jul 11 00:27:18.502306 containerd[1465]: time="2025-07-11T00:27:18.502277234Z" level=info msg="StopPodSandbox for \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\"" Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:17.822 [INFO][4609] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0 calico-kube-controllers-f7c9fffd4- calico-system 9bfd05fa-8a91-44eb-8f96-a9e542aaa056 1032 0 2025-07-11 00:26:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f7c9fffd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-f7c9fffd4-rk9nh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6ffcc6d4897 [] [] }} ContainerID="fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" Namespace="calico-system" Pod="calico-kube-controllers-f7c9fffd4-rk9nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-" Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:17.822 [INFO][4609] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" Namespace="calico-system" Pod="calico-kube-controllers-f7c9fffd4-rk9nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:17.864 [INFO][4647] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" HandleID="k8s-pod-network.fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" Workload="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:17.865 [INFO][4647] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" HandleID="k8s-pod-network.fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" Workload="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000ca8d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-f7c9fffd4-rk9nh", "timestamp":"2025-07-11 00:27:17.864580653 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:17.865 [INFO][4647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:18.420 [INFO][4647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:18.420 [INFO][4647] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:18.428 [INFO][4647] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" host="localhost" Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:18.436 [INFO][4647] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:18.445 [INFO][4647] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:18.448 [INFO][4647] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:18.451 [INFO][4647] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:18.451 [INFO][4647] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" host="localhost" Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:18.456 [INFO][4647] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:18.461 [INFO][4647] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" host="localhost" Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:18.469 [INFO][4647] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" host="localhost" Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:18.469 [INFO][4647] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" host="localhost" Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:18.469 [INFO][4647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:27:18.508253 containerd[1465]: 2025-07-11 00:27:18.469 [INFO][4647] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" HandleID="k8s-pod-network.fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" Workload="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:18.509009 containerd[1465]: 2025-07-11 00:27:18.475 [INFO][4609] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" Namespace="calico-system" Pod="calico-kube-controllers-f7c9fffd4-rk9nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0", GenerateName:"calico-kube-controllers-f7c9fffd4-", Namespace:"calico-system", SelfLink:"", UID:"9bfd05fa-8a91-44eb-8f96-a9e542aaa056", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7c9fffd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-f7c9fffd4-rk9nh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ffcc6d4897", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:18.509009 containerd[1465]: 2025-07-11 00:27:18.475 [INFO][4609] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" Namespace="calico-system" Pod="calico-kube-controllers-f7c9fffd4-rk9nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:18.509009 containerd[1465]: 2025-07-11 00:27:18.475 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ffcc6d4897 ContainerID="fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" Namespace="calico-system" Pod="calico-kube-controllers-f7c9fffd4-rk9nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:18.509009 containerd[1465]: 2025-07-11 00:27:18.485 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" Namespace="calico-system" Pod="calico-kube-controllers-f7c9fffd4-rk9nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:18.509009 containerd[1465]: 2025-07-11 00:27:18.487 [INFO][4609] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" Namespace="calico-system" Pod="calico-kube-controllers-f7c9fffd4-rk9nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0", GenerateName:"calico-kube-controllers-f7c9fffd4-", Namespace:"calico-system", SelfLink:"", UID:"9bfd05fa-8a91-44eb-8f96-a9e542aaa056", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7c9fffd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a", Pod:"calico-kube-controllers-f7c9fffd4-rk9nh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ffcc6d4897", MAC:"66:41:54:e1:cc:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:18.509009 containerd[1465]: 2025-07-11 00:27:18.500 [INFO][4609] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a" Namespace="calico-system" Pod="calico-kube-controllers-f7c9fffd4-rk9nh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:18.527243 systemd[1]: Started cri-containerd-9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4.scope - libcontainer container 9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4. Jul 11 00:27:18.553456 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:27:18.567420 containerd[1465]: time="2025-07-11T00:27:18.567276379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:27:18.567420 containerd[1465]: time="2025-07-11T00:27:18.567357047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:27:18.567420 containerd[1465]: time="2025-07-11T00:27:18.567379160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:18.573833 containerd[1465]: time="2025-07-11T00:27:18.568586363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:18.599872 systemd[1]: Started cri-containerd-fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a.scope - libcontainer container fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a. Jul 11 00:27:18.603506 containerd[1465]: time="2025-07-11T00:27:18.603432948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7b5d6c54-pbqrn,Uid:26a3e5b9-9cc0-4afc-9ba0-86cf4b152857,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4\"" Jul 11 00:27:18.627139 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:27:18.677690 containerd[1465]: 2025-07-11 00:27:18.610 [INFO][4820] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Jul 11 00:27:18.677690 containerd[1465]: 2025-07-11 00:27:18.611 [INFO][4820] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" iface="eth0" netns="/var/run/netns/cni-30153e69-e846-9fd6-1ad7-95e82462b333" Jul 11 00:27:18.677690 containerd[1465]: 2025-07-11 00:27:18.611 [INFO][4820] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" iface="eth0" netns="/var/run/netns/cni-30153e69-e846-9fd6-1ad7-95e82462b333" Jul 11 00:27:18.677690 containerd[1465]: 2025-07-11 00:27:18.611 [INFO][4820] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" iface="eth0" netns="/var/run/netns/cni-30153e69-e846-9fd6-1ad7-95e82462b333" Jul 11 00:27:18.677690 containerd[1465]: 2025-07-11 00:27:18.611 [INFO][4820] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Jul 11 00:27:18.677690 containerd[1465]: 2025-07-11 00:27:18.612 [INFO][4820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Jul 11 00:27:18.677690 containerd[1465]: 2025-07-11 00:27:18.654 [INFO][4897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" HandleID="k8s-pod-network.7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Workload="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0" Jul 11 00:27:18.677690 containerd[1465]: 2025-07-11 00:27:18.655 [INFO][4897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:18.677690 containerd[1465]: 2025-07-11 00:27:18.655 [INFO][4897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:18.677690 containerd[1465]: 2025-07-11 00:27:18.663 [WARNING][4897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" HandleID="k8s-pod-network.7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Workload="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0" Jul 11 00:27:18.677690 containerd[1465]: 2025-07-11 00:27:18.664 [INFO][4897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" HandleID="k8s-pod-network.7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Workload="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0" Jul 11 00:27:18.677690 containerd[1465]: 2025-07-11 00:27:18.665 [INFO][4897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:18.677690 containerd[1465]: 2025-07-11 00:27:18.671 [INFO][4820] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Jul 11 00:27:18.678313 containerd[1465]: time="2025-07-11T00:27:18.678058012Z" level=info msg="TearDown network for sandbox \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\" successfully" Jul 11 00:27:18.678313 containerd[1465]: time="2025-07-11T00:27:18.678096367Z" level=info msg="StopPodSandbox for \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\" returns successfully" Jul 11 00:27:18.680832 containerd[1465]: time="2025-07-11T00:27:18.679120110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-msvlj,Uid:73fc6509-dcba-4609-91c7-d051cb3bbfc4,Namespace:kube-system,Attempt:1,}" Jul 11 00:27:18.680918 kubelet[2560]: E0711 00:27:18.678569 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:18.683529 systemd[1]: run-netns-cni\x2d30153e69\x2de846\x2d9fd6\x2d1ad7\x2d95e82462b333.mount: Deactivated successfully. Jul 11 00:27:18.686205 containerd[1465]: time="2025-07-11T00:27:18.686153534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7c9fffd4-rk9nh,Uid:9bfd05fa-8a91-44eb-8f96-a9e542aaa056,Namespace:calico-system,Attempt:1,} returns sandbox id \"fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a\"" Jul 11 00:27:18.712693 containerd[1465]: 2025-07-11 00:27:18.630 [INFO][4847] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Jul 11 00:27:18.712693 containerd[1465]: 2025-07-11 00:27:18.631 [INFO][4847] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" iface="eth0" netns="/var/run/netns/cni-f17aa826-c004-900c-7378-c4b07c41c5b0" Jul 11 00:27:18.712693 containerd[1465]: 2025-07-11 00:27:18.631 [INFO][4847] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" iface="eth0" netns="/var/run/netns/cni-f17aa826-c004-900c-7378-c4b07c41c5b0" Jul 11 00:27:18.712693 containerd[1465]: 2025-07-11 00:27:18.631 [INFO][4847] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" iface="eth0" netns="/var/run/netns/cni-f17aa826-c004-900c-7378-c4b07c41c5b0" Jul 11 00:27:18.712693 containerd[1465]: 2025-07-11 00:27:18.631 [INFO][4847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Jul 11 00:27:18.712693 containerd[1465]: 2025-07-11 00:27:18.631 [INFO][4847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Jul 11 00:27:18.712693 containerd[1465]: 2025-07-11 00:27:18.682 [INFO][4906] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" HandleID="k8s-pod-network.0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Workload="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:18.712693 containerd[1465]: 2025-07-11 00:27:18.683 [INFO][4906] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:18.712693 containerd[1465]: 2025-07-11 00:27:18.683 [INFO][4906] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:18.712693 containerd[1465]: 2025-07-11 00:27:18.695 [WARNING][4906] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" HandleID="k8s-pod-network.0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Workload="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:18.712693 containerd[1465]: 2025-07-11 00:27:18.695 [INFO][4906] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" HandleID="k8s-pod-network.0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Workload="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:18.712693 containerd[1465]: 2025-07-11 00:27:18.697 [INFO][4906] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:18.712693 containerd[1465]: 2025-07-11 00:27:18.703 [INFO][4847] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Jul 11 00:27:18.713262 containerd[1465]: time="2025-07-11T00:27:18.713211748Z" level=info msg="TearDown network for sandbox \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\" successfully" Jul 11 00:27:18.713380 containerd[1465]: time="2025-07-11T00:27:18.713357614Z" level=info msg="StopPodSandbox for \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\" returns successfully" Jul 11 00:27:18.713671 systemd[1]: run-netns-cni\x2df17aa826\x2dc004\x2d900c\x2d7378\x2dc4b07c41c5b0.mount: Deactivated successfully. 
Jul 11 00:27:18.716004 containerd[1465]: time="2025-07-11T00:27:18.715979274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xd24s,Uid:7d661932-2475-4fb4-890b-1d7cc7f7d3fc,Namespace:calico-system,Attempt:1,}" Jul 11 00:27:18.816842 systemd-networkd[1391]: cali1aa84b469b9: Link UP Jul 11 00:27:18.818038 systemd-networkd[1391]: cali1aa84b469b9: Gained carrier Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.741 [INFO][4922] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--msvlj-eth0 coredns-674b8bbfcf- kube-system 73fc6509-dcba-4609-91c7-d051cb3bbfc4 1053 0 2025-07-11 00:26:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-msvlj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1aa84b469b9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" Namespace="kube-system" Pod="coredns-674b8bbfcf-msvlj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msvlj-" Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.741 [INFO][4922] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" Namespace="kube-system" Pod="coredns-674b8bbfcf-msvlj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0" Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.770 [INFO][4948] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" HandleID="k8s-pod-network.4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" Workload="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0" Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.770 [INFO][4948] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" HandleID="k8s-pod-network.4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" Workload="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-msvlj", "timestamp":"2025-07-11 00:27:18.770569709 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.771 [INFO][4948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.771 [INFO][4948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.771 [INFO][4948] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.781 [INFO][4948] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" host="localhost" Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.787 [INFO][4948] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.792 [INFO][4948] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.794 [INFO][4948] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.796 [INFO][4948] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.796 [INFO][4948] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" host="localhost" Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.798 [INFO][4948] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01 Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.802 [INFO][4948] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" host="localhost" Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.808 [INFO][4948] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" host="localhost" Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.808 [INFO][4948] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" host="localhost" Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.808 [INFO][4948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
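The ipam.go sequence just above (look up block affinity for 192.168.88.128/26, load the block, assign one address) hands out the lowest free ordinal in the host-affine block, which is how this node arrives at 192.168.88.135 after .134 went to calico-kube-controllers. A minimal sketch of that "next free IP in a block" step follows; it is illustrative only — Calico's real allocator also tracks handles, attributes, and per-block sequence numbers, and which low ordinals are pre-reserved is simplified here.

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFreeIP returns the lowest unallocated address in an IPAM block,
// mirroring the "Attempting to assign 1 addresses from block" step once the
// host's block affinity is confirmed.
func nextFreeIP(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // 64 addresses
	allocated := map[netip.Addr]bool{}
	// Assume the first seven ordinals (.128-.134) are already taken, as the
	// earlier endpoints in this log suggest.
	for a, i := block.Addr(), 0; i < 7; i++ {
		allocated[a] = true
		a = a.Next()
	}
	if ip, ok := nextFreeIP(block, allocated); ok {
		fmt.Println(ip) // 192.168.88.135, matching coredns-674b8bbfcf-msvlj
	}
}
```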
Jul 11 00:27:18.836780 containerd[1465]: 2025-07-11 00:27:18.808 [INFO][4948] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" HandleID="k8s-pod-network.4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" Workload="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0" Jul 11 00:27:18.837471 containerd[1465]: 2025-07-11 00:27:18.812 [INFO][4922] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" Namespace="kube-system" Pod="coredns-674b8bbfcf-msvlj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--msvlj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"73fc6509-dcba-4609-91c7-d051cb3bbfc4", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-msvlj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1aa84b469b9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:18.837471 containerd[1465]: 2025-07-11 00:27:18.812 [INFO][4922] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" Namespace="kube-system" Pod="coredns-674b8bbfcf-msvlj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0" Jul 11 00:27:18.837471 containerd[1465]: 2025-07-11 00:27:18.812 [INFO][4922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1aa84b469b9 ContainerID="4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" Namespace="kube-system" Pod="coredns-674b8bbfcf-msvlj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0" Jul 11 00:27:18.837471 containerd[1465]: 2025-07-11 00:27:18.819 [INFO][4922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" Namespace="kube-system" Pod="coredns-674b8bbfcf-msvlj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0" Jul 11 00:27:18.837471 
containerd[1465]: 2025-07-11 00:27:18.819 [INFO][4922] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" Namespace="kube-system" Pod="coredns-674b8bbfcf-msvlj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--msvlj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"73fc6509-dcba-4609-91c7-d051cb3bbfc4", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01", Pod:"coredns-674b8bbfcf-msvlj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1aa84b469b9", MAC:"7a:33:95:a0:16:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:18.837471 containerd[1465]: 2025-07-11 00:27:18.833 [INFO][4922] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01" Namespace="kube-system" Pod="coredns-674b8bbfcf-msvlj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0" Jul 11 00:27:18.902494 containerd[1465]: time="2025-07-11T00:27:18.902084387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:27:18.902494 containerd[1465]: time="2025-07-11T00:27:18.902274989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:27:18.903108 containerd[1465]: time="2025-07-11T00:27:18.902826118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:18.903108 containerd[1465]: time="2025-07-11T00:27:18.902920282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:18.931652 systemd-networkd[1391]: cali4209e93a805: Link UP Jul 11 00:27:18.932358 systemd[1]: Started cri-containerd-4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01.scope - libcontainer container 4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01. Jul 11 00:27:18.938729 systemd-networkd[1391]: cali4209e93a805: Gained carrier Jul 11 00:27:18.961196 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.779 [INFO][4937] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xd24s-eth0 csi-node-driver- calico-system 7d661932-2475-4fb4-890b-1d7cc7f7d3fc 1055 0 2025-07-11 00:26:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xd24s eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4209e93a805 [] [] }} ContainerID="8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" Namespace="calico-system" Pod="csi-node-driver-xd24s" WorkloadEndpoint="localhost-k8s-csi--node--driver--xd24s-" Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.779 [INFO][4937] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" Namespace="calico-system" Pod="csi-node-driver-xd24s" WorkloadEndpoint="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.815 [INFO][4960] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" HandleID="k8s-pod-network.8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" Workload="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.815 [INFO][4960] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" HandleID="k8s-pod-network.8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" Workload="localhost-k8s-csi--node--driver--xd24s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000495b60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xd24s", "timestamp":"2025-07-11 00:27:18.815645373 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.816 [INFO][4960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.816 [INFO][4960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.816 [INFO][4960] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.881 [INFO][4960] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" host="localhost" Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.888 [INFO][4960] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.893 [INFO][4960] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.896 [INFO][4960] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.898 [INFO][4960] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.898 [INFO][4960] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" host="localhost" Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.900 [INFO][4960] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.904 [INFO][4960] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" host="localhost" Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.914 [INFO][4960] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" host="localhost" Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.914 [INFO][4960] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" host="localhost" Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.914 [INFO][4960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
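Host-side interface names like cali4209e93a805 and cali1aa84b469b9 in these entries are derived by hashing the workload identity and keeping "cali" plus 11 hex characters, which fits Linux's 15-byte interface-name limit (IFNAMSIZ minus the terminator). The sketch below shows the shape of that derivation; the exact hash inputs are an assumption, so it approximates libcalico-go's VethNameForWorkload rather than reproducing it.

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethNameFor derives a deterministic, collision-resistant host-side veth
// name: "cali" + first 11 hex chars of a SHA-1 over the workload identity,
// 15 characters total. The namespace+"."+pod input is an assumption for
// illustration.
func vethNameFor(namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "." + pod))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethNameFor("calico-system", "csi-node-driver-xd24s"))
	// Prints a 15-character name of the same shape as cali4209e93a805 in the
	// log; the actual value depends on Calico's real hash inputs.
}
```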
Jul 11 00:27:18.966831 containerd[1465]: 2025-07-11 00:27:18.915 [INFO][4960] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" HandleID="k8s-pod-network.8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" Workload="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:18.967435 containerd[1465]: 2025-07-11 00:27:18.924 [INFO][4937] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" Namespace="calico-system" Pod="csi-node-driver-xd24s" WorkloadEndpoint="localhost-k8s-csi--node--driver--xd24s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xd24s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d661932-2475-4fb4-890b-1d7cc7f7d3fc", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xd24s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4209e93a805", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:18.967435 containerd[1465]: 2025-07-11 00:27:18.924 [INFO][4937] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" Namespace="calico-system" Pod="csi-node-driver-xd24s" WorkloadEndpoint="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:18.967435 containerd[1465]: 2025-07-11 00:27:18.924 [INFO][4937] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4209e93a805 ContainerID="8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" Namespace="calico-system" Pod="csi-node-driver-xd24s" WorkloadEndpoint="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:18.967435 containerd[1465]: 2025-07-11 00:27:18.941 [INFO][4937] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" Namespace="calico-system" Pod="csi-node-driver-xd24s" WorkloadEndpoint="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:18.967435 containerd[1465]: 2025-07-11 00:27:18.942 [INFO][4937] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" Namespace="calico-system" Pod="csi-node-driver-xd24s" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--xd24s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xd24s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d661932-2475-4fb4-890b-1d7cc7f7d3fc", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da", Pod:"csi-node-driver-xd24s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4209e93a805", MAC:"da:a6:67:85:4c:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:18.967435 containerd[1465]: 2025-07-11 00:27:18.959 [INFO][4937] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da" Namespace="calico-system" Pod="csi-node-driver-xd24s" WorkloadEndpoint="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:18.992495 containerd[1465]: time="2025-07-11T00:27:18.992440967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-msvlj,Uid:73fc6509-dcba-4609-91c7-d051cb3bbfc4,Namespace:kube-system,Attempt:1,} returns sandbox id \"4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01\"" Jul 11 00:27:18.993686 kubelet[2560]: E0711 00:27:18.993199 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:18.998281 containerd[1465]: time="2025-07-11T00:27:18.997681773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:27:18.998281 containerd[1465]: time="2025-07-11T00:27:18.997765406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:27:18.998281 containerd[1465]: time="2025-07-11T00:27:18.997783763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:18.998281 containerd[1465]: time="2025-07-11T00:27:18.997909338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:19.002706 containerd[1465]: time="2025-07-11T00:27:19.002656318Z" level=info msg="CreateContainer within sandbox \"4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:27:19.024915 containerd[1465]: time="2025-07-11T00:27:19.024821057Z" level=info msg="CreateContainer within sandbox \"4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c07ad45a9f9138470169c39c1f4756c03165be1d4d53c4b3a27d98d4eb09ff92\"" Jul 11 00:27:19.026225 containerd[1465]: time="2025-07-11T00:27:19.026164656Z" level=info msg="StartContainer for \"c07ad45a9f9138470169c39c1f4756c03165be1d4d53c4b3a27d98d4eb09ff92\"" Jul 11 00:27:19.029850 systemd[1]: Started cri-containerd-8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da.scope - libcontainer container 8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da. Jul 11 00:27:19.045548 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:27:19.064797 systemd[1]: Started cri-containerd-c07ad45a9f9138470169c39c1f4756c03165be1d4d53c4b3a27d98d4eb09ff92.scope - libcontainer container c07ad45a9f9138470169c39c1f4756c03165be1d4d53c4b3a27d98d4eb09ff92. Jul 11 00:27:19.066890 containerd[1465]: time="2025-07-11T00:27:19.066844365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xd24s,Uid:7d661932-2475-4fb4-890b-1d7cc7f7d3fc,Namespace:calico-system,Attempt:1,} returns sandbox id \"8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da\"" Jul 11 00:27:19.155799 systemd-networkd[1391]: calib9ad08f285b: Gained IPv6LL Jul 11 00:27:19.282473 containerd[1465]: time="2025-07-11T00:27:19.282290042Z" level=info msg="StartContainer for \"c07ad45a9f9138470169c39c1f4756c03165be1d4d53c4b3a27d98d4eb09ff92\" returns successfully" Jul 11 00:27:19.411778 systemd-networkd[1391]: cali94d89816dd3: Gained IPv6LL Jul 11 00:27:19.499304 containerd[1465]: time="2025-07-11T00:27:19.499202038Z" level=info msg="StopPodSandbox for \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\"" Jul 11 00:27:19.575108 containerd[1465]: time="2025-07-11T00:27:19.574948397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:19.576227 containerd[1465]: time="2025-07-11T00:27:19.576181399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 11 00:27:19.577476 containerd[1465]: time="2025-07-11T00:27:19.577441474Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:19.580159 containerd[1465]: time="2025-07-11T00:27:19.580121055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:19.580869 containerd[1465]: time="2025-07-11T00:27:19.580827647Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo 
digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.874455475s" Jul 11 00:27:19.580869 containerd[1465]: time="2025-07-11T00:27:19.580858348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:27:19.584392 containerd[1465]: time="2025-07-11T00:27:19.583599330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 00:27:19.587974 containerd[1465]: time="2025-07-11T00:27:19.587941444Z" level=info msg="CreateContainer within sandbox \"4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:27:19.589068 containerd[1465]: 2025-07-11 00:27:19.546 [INFO][5120] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Jul 11 00:27:19.589068 containerd[1465]: 2025-07-11 00:27:19.546 [INFO][5120] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" iface="eth0" netns="/var/run/netns/cni-0d71a051-6017-1da1-3d4f-b2f3316f6c95" Jul 11 00:27:19.589068 containerd[1465]: 2025-07-11 00:27:19.547 [INFO][5120] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" iface="eth0" netns="/var/run/netns/cni-0d71a051-6017-1da1-3d4f-b2f3316f6c95" Jul 11 00:27:19.589068 containerd[1465]: 2025-07-11 00:27:19.547 [INFO][5120] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" iface="eth0" netns="/var/run/netns/cni-0d71a051-6017-1da1-3d4f-b2f3316f6c95" Jul 11 00:27:19.589068 containerd[1465]: 2025-07-11 00:27:19.547 [INFO][5120] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Jul 11 00:27:19.589068 containerd[1465]: 2025-07-11 00:27:19.547 [INFO][5120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Jul 11 00:27:19.589068 containerd[1465]: 2025-07-11 00:27:19.571 [INFO][5128] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" HandleID="k8s-pod-network.bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Workload="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:19.589068 containerd[1465]: 2025-07-11 00:27:19.571 [INFO][5128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:19.589068 containerd[1465]: 2025-07-11 00:27:19.571 [INFO][5128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:19.589068 containerd[1465]: 2025-07-11 00:27:19.579 [WARNING][5128] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" HandleID="k8s-pod-network.bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Workload="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:19.589068 containerd[1465]: 2025-07-11 00:27:19.579 [INFO][5128] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" HandleID="k8s-pod-network.bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Workload="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:19.589068 containerd[1465]: 2025-07-11 00:27:19.580 [INFO][5128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:19.589068 containerd[1465]: 2025-07-11 00:27:19.584 [INFO][5120] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Jul 11 00:27:19.589068 containerd[1465]: time="2025-07-11T00:27:19.589090762Z" level=info msg="TearDown network for sandbox \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\" successfully" Jul 11 00:27:19.589462 containerd[1465]: time="2025-07-11T00:27:19.589116152Z" level=info msg="StopPodSandbox for \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\" returns successfully" Jul 11 00:27:19.590020 kubelet[2560]: E0711 00:27:19.589686 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:19.590779 containerd[1465]: time="2025-07-11T00:27:19.590396938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvf85,Uid:532c872b-897c-4658-b37f-c0b4508abd55,Namespace:kube-system,Attempt:1,}" Jul 11 00:27:19.614550 containerd[1465]: time="2025-07-11T00:27:19.614356439Z" level=info msg="CreateContainer within sandbox \"4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f\"" Jul 11 00:27:19.615035 containerd[1465]: time="2025-07-11T00:27:19.614992022Z" level=info msg="StartContainer for \"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f\"" Jul 11 00:27:19.649112 systemd[1]: run-netns-cni\x2d0d71a051\x2d6017\x2d1da1\x2d3d4f\x2db2f3316f6c95.mount: Deactivated successfully. Jul 11 00:27:19.653291 systemd[1]: run-containerd-runc-k8s.io-027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f-runc.7vtTts.mount: Deactivated successfully. Jul 11 00:27:19.662950 systemd[1]: Started cri-containerd-027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f.scope - libcontainer container 027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f. 
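The kubelet "Nameserver limits exceeded" errors repeated throughout this log come from glibc's resolv.conf limit: only the first three nameserver entries are honored (MAXNS = 3), so kubelet truncates the list and reports the line it actually applied — here "1.1.1.1 1.0.0.1 8.8.8.8". A small sketch of that truncation is below; it is a hypothetical helper, not kubelet's code, and the omitted fourth server (8.8.4.4) is invented for the example since the log does not show which entries were dropped.

```go
package main

import (
	"fmt"
	"strings"
)

// capNameservers keeps the first max entries (glibc reads at most three
// "nameserver" lines) and returns the rest as omitted, which is what kubelet
// reports in the dns.go:153 events above.
func capNameservers(servers []string, max int) (applied, omitted []string) {
	if len(servers) <= max {
		return servers, nil
	}
	return servers[:max], servers[max:]
}

func main() {
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"} // 4th is hypothetical
	applied, omitted := capNameservers(host, 3)
	fmt.Printf("applied nameserver line is: %s (omitted: %v)\n",
		strings.Join(applied, " "), omitted)
	// applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8 (omitted: [8.8.4.4])
}
```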
Jul 11 00:27:19.715709 systemd-networkd[1391]: cali8adc612b392: Link UP Jul 11 00:27:19.716447 systemd-networkd[1391]: cali8adc612b392: Gained carrier Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.646 [INFO][5140] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--gvf85-eth0 coredns-674b8bbfcf- kube-system 532c872b-897c-4658-b37f-c0b4508abd55 1072 0 2025-07-11 00:26:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-gvf85 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8adc612b392 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvf85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvf85-" Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.646 [INFO][5140] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvf85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.674 [INFO][5173] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" HandleID="k8s-pod-network.b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" Workload="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.675 [INFO][5173] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" HandleID="k8s-pod-network.b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" Workload="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-gvf85", "timestamp":"2025-07-11 00:27:19.674860049 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.675 [INFO][5173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.675 [INFO][5173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.675 [INFO][5173] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.681 [INFO][5173] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" host="localhost" Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.688 [INFO][5173] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.692 [INFO][5173] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.694 [INFO][5173] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.696 [INFO][5173] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.696 [INFO][5173] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" host="localhost" Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.698 [INFO][5173] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3 Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.702 [INFO][5173] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" host="localhost" Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.707 [INFO][5173] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" host="localhost" Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.708 [INFO][5173] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" host="localhost" Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.708 [INFO][5173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
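Every IPAM invocation in this log is bracketed by "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock", serializing address assignment across the independent CNI plugin processes on the node. One common way to implement such a node-wide mutex is an advisory flock on a well-known file; the sketch below shows that pattern under stated assumptions — the lock path is hypothetical and this is not necessarily how Calico implements its lock.

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// withHostWideLock runs fn while holding an exclusive advisory lock on path,
// blocking until any concurrent holder (another CNI invocation) releases it.
func withHostWideLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	return fn()
}

func main() {
	err := withHostWideLock("/var/run/example-ipam.lock", func() error {
		fmt.Println("assigning addresses while holding the lock")
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```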
Jul 11 00:27:19.735770 containerd[1465]: 2025-07-11 00:27:19.708 [INFO][5173] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" HandleID="k8s-pod-network.b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" Workload="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:19.737584 containerd[1465]: 2025-07-11 00:27:19.713 [INFO][5140] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvf85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gvf85-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"532c872b-897c-4658-b37f-c0b4508abd55", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-gvf85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8adc612b392", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:19.737584 containerd[1465]: 2025-07-11 00:27:19.713 [INFO][5140] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvf85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:19.737584 containerd[1465]: 2025-07-11 00:27:19.713 [INFO][5140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8adc612b392 ContainerID="b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvf85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:19.737584 containerd[1465]: 2025-07-11 00:27:19.716 [INFO][5140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvf85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:19.737584 
containerd[1465]: 2025-07-11 00:27:19.716 [INFO][5140] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvf85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gvf85-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"532c872b-897c-4658-b37f-c0b4508abd55", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3", Pod:"coredns-674b8bbfcf-gvf85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8adc612b392", MAC:"42:49:21:f1:e7:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:19.737584 containerd[1465]: 2025-07-11 00:27:19.728 [INFO][5140] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvf85" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:19.737584 containerd[1465]: time="2025-07-11T00:27:19.736408734Z" level=info msg="StartContainer for \"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f\" returns successfully" Jul 11 00:27:19.763996 containerd[1465]: time="2025-07-11T00:27:19.763866699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:27:19.763996 containerd[1465]: time="2025-07-11T00:27:19.763944461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:27:19.763996 containerd[1465]: time="2025-07-11T00:27:19.763959470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:19.764158 containerd[1465]: time="2025-07-11T00:27:19.764076449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:27:19.796761 systemd-networkd[1391]: cali6ffcc6d4897: Gained IPv6LL Jul 11 00:27:19.798933 systemd[1]: Started cri-containerd-b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3.scope - libcontainer container b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3. Jul 11 00:27:19.819736 kubelet[2560]: E0711 00:27:19.819704 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:19.821631 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:27:19.852008 containerd[1465]: time="2025-07-11T00:27:19.851632832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvf85,Uid:532c872b-897c-4658-b37f-c0b4508abd55,Namespace:kube-system,Attempt:1,} returns sandbox id \"b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3\"" Jul 11 00:27:19.852327 kubelet[2560]: E0711 00:27:19.852289 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:20.051861 systemd-networkd[1391]: cali7f574f60324: Gained IPv6LL Jul 11 00:27:20.062201 kubelet[2560]: I0711 00:27:20.061674 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-msvlj" podStartSLOduration=37.061654504 podStartE2EDuration="37.061654504s" podCreationTimestamp="2025-07-11 00:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:27:20.055644151 +0000 UTC m=+43.648137660" watchObservedRunningTime="2025-07-11 00:27:20.061654504 +0000 UTC m=+43.654148013" Jul 11 00:27:20.062201 kubelet[2560]: I0711 00:27:20.061827 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f647b777b-hj2zv" podStartSLOduration=25.330326001 podStartE2EDuration="29.061822082s" podCreationTimestamp="2025-07-11 00:26:51 +0000 UTC" firstStartedPulling="2025-07-11 00:27:15.8518492 +0000 UTC m=+39.444342710" lastFinishedPulling="2025-07-11 00:27:19.583345282 +0000 UTC m=+43.175838791" observedRunningTime="2025-07-11 00:27:19.886954981 +0000 UTC m=+43.479448490" watchObservedRunningTime="2025-07-11 00:27:20.061822082 +0000 UTC m=+43.654315591" Jul 11 00:27:20.307818 systemd-networkd[1391]: cali4209e93a805: Gained IPv6LL Jul 11 00:27:20.563749 systemd-networkd[1391]: cali1aa84b469b9: Gained IPv6LL Jul 11 00:27:20.596413 containerd[1465]: time="2025-07-11T00:27:20.596357754Z" level=info msg="CreateContainer within sandbox \"b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:27:20.655498 containerd[1465]: time="2025-07-11T00:27:20.655448498Z" level=info msg="CreateContainer within sandbox \"b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e968b928e789d5eac354c71d992bfc1d71da5167edb6ff4991a756e3621367b5\"" Jul 11 00:27:20.656465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2372857263.mount: Deactivated successfully. 
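The pod_startup_latency_tracker entries above publish two figures: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling) — which is why coredns, whose pull timestamps are zero because the image was already present, has identical values, while calico-apiserver shows 25.33s SLO against 29.06s end-to-end. The sketch below reproduces that arithmetic from the logged timestamps; it follows the published formula, not kubelet's source.

```go
package main

import (
	"fmt"
	"time"
)

// startupDurations computes the two figures the tracker logs:
// e2e = running - created; slo = e2e - (lastPull - firstPull).
func startupDurations(created, firstPull, lastPull, running time.Time) (slo, e2e time.Duration) {
	e2e = running.Sub(created)
	slo = e2e - lastPull.Sub(firstPull)
	return slo, e2e
}

func main() {
	parse := func(s string) time.Time {
		t, _ := time.Parse(time.RFC3339Nano, s)
		return t
	}
	// Values from the calico-apiserver-6f647b777b-hj2zv entry above.
	created := parse("2025-07-11T00:26:51Z")
	firstPull := parse("2025-07-11T00:27:15.8518492Z")
	lastPull := parse("2025-07-11T00:27:19.583345282Z")
	running := parse("2025-07-11T00:27:20.061822082Z")
	slo, e2e := startupDurations(created, firstPull, lastPull, running)
	fmt.Println(slo, e2e)
	// 25.330326s 29.061822082s — matching the log's 25.330326001s up to the
	// last nanosecond, which differs only because the log rounds
	// firstStartedPulling to fewer digits.
}
```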
Jul 11 00:27:20.657794 containerd[1465]: time="2025-07-11T00:27:20.657043657Z" level=info msg="StartContainer for \"e968b928e789d5eac354c71d992bfc1d71da5167edb6ff4991a756e3621367b5\"" Jul 11 00:27:20.697841 systemd[1]: Started cri-containerd-e968b928e789d5eac354c71d992bfc1d71da5167edb6ff4991a756e3621367b5.scope - libcontainer container e968b928e789d5eac354c71d992bfc1d71da5167edb6ff4991a756e3621367b5. Jul 11 00:27:20.734206 containerd[1465]: time="2025-07-11T00:27:20.734147997Z" level=info msg="StartContainer for \"e968b928e789d5eac354c71d992bfc1d71da5167edb6ff4991a756e3621367b5\" returns successfully" Jul 11 00:27:20.825163 kubelet[2560]: E0711 00:27:20.824884 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:20.825163 kubelet[2560]: E0711 00:27:20.825117 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:20.839384 kubelet[2560]: I0711 00:27:20.837473 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gvf85" podStartSLOduration=37.83745626 podStartE2EDuration="37.83745626s" podCreationTimestamp="2025-07-11 00:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:27:20.837314453 +0000 UTC m=+44.429807962" watchObservedRunningTime="2025-07-11 00:27:20.83745626 +0000 UTC m=+44.429949769" Jul 11 00:27:21.652536 systemd-networkd[1391]: cali8adc612b392: Gained IPv6LL Jul 11 00:27:21.665922 systemd[1]: Started sshd@11-10.0.0.159:22-10.0.0.1:48056.service - OpenSSH per-connection server daemon (10.0.0.1:48056). Jul 11 00:27:21.703509 sshd[5306]: Accepted publickey for core from 10.0.0.1 port 48056 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:27:21.705327 sshd[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:27:21.709469 systemd-logind[1445]: New session 12 of user core. Jul 11 00:27:21.717748 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 00:27:21.828651 kubelet[2560]: E0711 00:27:21.827633 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:21.830435 kubelet[2560]: E0711 00:27:21.829319 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:21.906266 sshd[5306]: pam_unix(sshd:session): session closed for user core Jul 11 00:27:21.916759 systemd[1]: sshd@11-10.0.0.159:22-10.0.0.1:48056.service: Deactivated successfully. Jul 11 00:27:21.918921 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:27:21.920864 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:27:21.922435 systemd[1]: Started sshd@12-10.0.0.159:22-10.0.0.1:48064.service - OpenSSH per-connection server daemon (10.0.0.1:48064). Jul 11 00:27:21.923480 systemd-logind[1445]: Removed session 12. 
Jul 11 00:27:21.956274 sshd[5321]: Accepted publickey for core from 10.0.0.1 port 48064 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:27:21.957976 sshd[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:27:21.961932 systemd-logind[1445]: New session 13 of user core. Jul 11 00:27:21.970722 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 11 00:27:22.159630 sshd[5321]: pam_unix(sshd:session): session closed for user core Jul 11 00:27:22.170394 systemd[1]: sshd@12-10.0.0.159:22-10.0.0.1:48064.service: Deactivated successfully. Jul 11 00:27:22.173744 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:27:22.175073 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:27:22.191996 systemd[1]: Started sshd@13-10.0.0.159:22-10.0.0.1:48072.service - OpenSSH per-connection server daemon (10.0.0.1:48072). Jul 11 00:27:22.193756 systemd-logind[1445]: Removed session 13. Jul 11 00:27:22.220403 sshd[5337]: Accepted publickey for core from 10.0.0.1 port 48072 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:27:22.222041 sshd[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:27:22.228870 systemd-logind[1445]: New session 14 of user core. Jul 11 00:27:22.236830 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 11 00:27:22.406058 sshd[5337]: pam_unix(sshd:session): session closed for user core Jul 11 00:27:22.412523 systemd[1]: sshd@13-10.0.0.159:22-10.0.0.1:48072.service: Deactivated successfully. Jul 11 00:27:22.415687 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:27:22.418078 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:27:22.419708 systemd-logind[1445]: Removed session 14. Jul 11 00:27:22.471967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3650593182.mount: Deactivated successfully. 
Jul 11 00:27:22.644668 containerd[1465]: time="2025-07-11T00:27:22.644550332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:22.645838 containerd[1465]: time="2025-07-11T00:27:22.645741355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 11 00:27:22.647362 containerd[1465]: time="2025-07-11T00:27:22.646784651Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:22.649869 containerd[1465]: time="2025-07-11T00:27:22.649826097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:22.650947 containerd[1465]: time="2025-07-11T00:27:22.650897328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.066502102s" Jul 11 00:27:22.651005 containerd[1465]: time="2025-07-11T00:27:22.650953758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 11 00:27:22.652584 containerd[1465]: time="2025-07-11T00:27:22.652244063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:27:22.656271 containerd[1465]: time="2025-07-11T00:27:22.656212670Z" level=info msg="CreateContainer within sandbox \"6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 00:27:22.671067 containerd[1465]: time="2025-07-11T00:27:22.670926885Z" level=info msg="CreateContainer within sandbox \"6d2d7ef6efebca1e34cfea74ccfc9a9ba978922ba98d7b7b404af0bffac987f5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"603ba5634ed78be47cb1b14d23eaa0366eeca368c4c5566ee376181751c3aaa4\"" Jul 11 00:27:22.671633 containerd[1465]: time="2025-07-11T00:27:22.671582117Z" level=info msg="StartContainer for \"603ba5634ed78be47cb1b14d23eaa0366eeca368c4c5566ee376181751c3aaa4\"" Jul 11 00:27:22.729763 systemd[1]: Started cri-containerd-603ba5634ed78be47cb1b14d23eaa0366eeca368c4c5566ee376181751c3aaa4.scope - libcontainer container 603ba5634ed78be47cb1b14d23eaa0366eeca368c4c5566ee376181751c3aaa4. 
Jul 11 00:27:22.779281 containerd[1465]: time="2025-07-11T00:27:22.779216344Z" level=info msg="StartContainer for \"603ba5634ed78be47cb1b14d23eaa0366eeca368c4c5566ee376181751c3aaa4\" returns successfully" Jul 11 00:27:22.831517 kubelet[2560]: E0711 00:27:22.831461 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:23.046962 containerd[1465]: time="2025-07-11T00:27:23.046752068Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:23.047463 containerd[1465]: time="2025-07-11T00:27:23.047376725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 11 00:27:23.049735 containerd[1465]: time="2025-07-11T00:27:23.049698605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 397.404956ms" Jul 11 00:27:23.049789 containerd[1465]: time="2025-07-11T00:27:23.049736184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:27:23.050970 containerd[1465]: time="2025-07-11T00:27:23.050935665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 00:27:23.055910 containerd[1465]: time="2025-07-11T00:27:23.055856974Z" level=info msg="CreateContainer within sandbox \"1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:27:23.081366 containerd[1465]: time="2025-07-11T00:27:23.081267546Z" level=info msg="CreateContainer within sandbox \"1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981\"" Jul 11 00:27:23.082225 containerd[1465]: time="2025-07-11T00:27:23.082146428Z" level=info msg="StartContainer for \"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981\"" Jul 11 00:27:23.118764 systemd[1]: Started cri-containerd-4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981.scope - libcontainer container 4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981. 
Jul 11 00:27:23.167234 containerd[1465]: time="2025-07-11T00:27:23.167188569Z" level=info msg="StartContainer for \"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981\" returns successfully" Jul 11 00:27:23.835294 kubelet[2560]: E0711 00:27:23.834815 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:27:24.206969 kubelet[2560]: I0711 00:27:24.206765 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8f5b58dcb-df62x" podStartSLOduration=2.94993111 podStartE2EDuration="10.206741645s" podCreationTimestamp="2025-07-11 00:27:14 +0000 UTC" firstStartedPulling="2025-07-11 00:27:15.395145429 +0000 UTC m=+38.987638938" lastFinishedPulling="2025-07-11 00:27:22.651955964 +0000 UTC m=+46.244449473" observedRunningTime="2025-07-11 00:27:22.841511522 +0000 UTC m=+46.434005041" watchObservedRunningTime="2025-07-11 00:27:24.206741645 +0000 UTC m=+47.799235154" Jul 11 00:27:24.221520 kubelet[2560]: I0711 00:27:24.207852 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f647b777b-qhnsp" podStartSLOduration=28.155485264 podStartE2EDuration="33.207842599s" podCreationTimestamp="2025-07-11 00:26:51 +0000 UTC" firstStartedPulling="2025-07-11 00:27:17.998403278 +0000 UTC m=+41.590896787" lastFinishedPulling="2025-07-11 00:27:23.050760613 +0000 UTC m=+46.643254122" observedRunningTime="2025-07-11 00:27:24.20596056 +0000 UTC m=+47.798454080" watchObservedRunningTime="2025-07-11 00:27:24.207842599 +0000 UTC m=+47.800336108" Jul 11 00:27:24.845550 kubelet[2560]: I0711 00:27:24.845486 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:27:25.299933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171573002.mount: Deactivated successfully. 
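The pod_startup_latency_tracker entries above are internally consistent: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling), and pods whose pull timestamps are the zero value 0001-01-01 (nothing to pull) log an SLO duration equal to the E2E duration. Checking calico-apiserver-6f647b777b-qhnsp with timestamps copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps for calico-apiserver-6f647b777b-qhnsp, from the log.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        first, _ := time.Parse(layout, "2025-07-11 00:27:17.998403278 +0000 UTC")
        last, _ := time.Parse(layout, "2025-07-11 00:27:23.050760613 +0000 UTC")

        e2e := 33207842599 * time.Nanosecond // podStartE2EDuration=33.207842599s
        pull := last.Sub(first)              // image-pull window

        // SLO duration excludes time spent pulling images.
        fmt.Println(e2e - pull) // prints 28.155485264s = podStartSLOduration
    }

The tracker's exact bookkeeping is kubelet-internal, but every SLO/E2E pair in this log satisfies that identity.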
Jul 11 00:27:25.796975 containerd[1465]: time="2025-07-11T00:27:25.796911570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:25.797673 containerd[1465]: time="2025-07-11T00:27:25.797636795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 11 00:27:25.798843 containerd[1465]: time="2025-07-11T00:27:25.798816817Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:25.801148 containerd[1465]: time="2025-07-11T00:27:25.801110310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:25.801881 containerd[1465]: time="2025-07-11T00:27:25.801848749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 2.750868913s" Jul 11 00:27:25.801931 containerd[1465]: time="2025-07-11T00:27:25.801881459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 11 00:27:25.803212 containerd[1465]: time="2025-07-11T00:27:25.802909372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:27:25.806581 containerd[1465]: time="2025-07-11T00:27:25.806531749Z" level=info msg="CreateContainer within sandbox \"b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 00:27:25.849513 containerd[1465]: time="2025-07-11T00:27:25.849456824Z" level=info msg="CreateContainer within sandbox \"b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"939414b6c67a9dccc6cb017427accb2bf076205601a48cdcad5b1d3f56ad13a5\"" Jul 11 00:27:25.850041 containerd[1465]: time="2025-07-11T00:27:25.850005113Z" level=info msg="StartContainer for \"939414b6c67a9dccc6cb017427accb2bf076205601a48cdcad5b1d3f56ad13a5\"" Jul 11 00:27:25.912795 systemd[1]: Started cri-containerd-939414b6c67a9dccc6cb017427accb2bf076205601a48cdcad5b1d3f56ad13a5.scope - libcontainer container 939414b6c67a9dccc6cb017427accb2bf076205601a48cdcad5b1d3f56ad13a5. 
Jul 11 00:27:25.959186 containerd[1465]: time="2025-07-11T00:27:25.959141237Z" level=info msg="StartContainer for \"939414b6c67a9dccc6cb017427accb2bf076205601a48cdcad5b1d3f56ad13a5\" returns successfully" Jul 11 00:27:26.907748 kubelet[2560]: I0711 00:27:26.907668 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-dnd8p" podStartSLOduration=26.228439091 podStartE2EDuration="33.907647282s" podCreationTimestamp="2025-07-11 00:26:53 +0000 UTC" firstStartedPulling="2025-07-11 00:27:18.123527491 +0000 UTC m=+41.716021000" lastFinishedPulling="2025-07-11 00:27:25.802735682 +0000 UTC m=+49.395229191" observedRunningTime="2025-07-11 00:27:26.907644015 +0000 UTC m=+50.500137524" watchObservedRunningTime="2025-07-11 00:27:26.907647282 +0000 UTC m=+50.500140791" Jul 11 00:27:27.283367 containerd[1465]: time="2025-07-11T00:27:27.283139614Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:27.284985 containerd[1465]: time="2025-07-11T00:27:27.284495897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 11 00:27:27.287198 containerd[1465]: time="2025-07-11T00:27:27.287165637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.48420436s" Jul 11 00:27:27.287265 containerd[1465]: time="2025-07-11T00:27:27.287202605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:27:27.288304 containerd[1465]: time="2025-07-11T00:27:27.288261721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 00:27:27.293048 containerd[1465]: time="2025-07-11T00:27:27.293001863Z" level=info msg="CreateContainer within sandbox \"9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:27:27.307861 containerd[1465]: time="2025-07-11T00:27:27.307812650Z" level=info msg="CreateContainer within sandbox \"9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7e6e7fcea824e844352216dcb4a38231b2620756b77cf83e6b4cefda8553c85d\"" Jul 11 00:27:27.308448 containerd[1465]: time="2025-07-11T00:27:27.308426212Z" level=info msg="StartContainer for \"7e6e7fcea824e844352216dcb4a38231b2620756b77cf83e6b4cefda8553c85d\"" Jul 11 00:27:27.352754 systemd[1]: Started cri-containerd-7e6e7fcea824e844352216dcb4a38231b2620756b77cf83e6b4cefda8553c85d.scope - libcontainer container 7e6e7fcea824e844352216dcb4a38231b2620756b77cf83e6b4cefda8553c85d. Jul 11 00:27:27.396286 containerd[1465]: time="2025-07-11T00:27:27.396235684Z" level=info msg="StartContainer for \"7e6e7fcea824e844352216dcb4a38231b2620756b77cf83e6b4cefda8553c85d\" returns successfully" Jul 11 00:27:27.417515 systemd[1]: Started sshd@14-10.0.0.159:22-10.0.0.1:48080.service - OpenSSH per-connection server daemon (10.0.0.1:48080). 
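Worth noting the pull economics in these entries: the goldmane pull read 66,352,308 bytes in 2.750868913s (about 23 MiB/s), while both calico-apiserver pulls logged ImageUpdate rather than ImageCreate and read only 77 bytes, consistent with the image contents already sitting in containerd's store from the earlier pull for the hj2zv pod, leaving roughly a manifest-sized fetch (hence the 397ms and 1.48s wall times). The throughput arithmetic:

    package main

    import "fmt"

    func main() {
        // Figures from the goldmane pull above:
        // "bytes read=66352308" ... "in 2.750868913s".
        const bytesRead = 66352308
        const seconds = 2.750868913

        mib := float64(bytesRead) / (1 << 20)
        fmt.Printf("%.1f MiB in %.2fs ~= %.1f MiB/s\n", mib, seconds, mib/seconds)
    }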
Jul 11 00:27:27.477736 sshd[5564]: Accepted publickey for core from 10.0.0.1 port 48080 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:27:27.480552 sshd[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:27:27.486059 systemd-logind[1445]: New session 15 of user core. Jul 11 00:27:27.491790 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 11 00:27:27.774117 sshd[5564]: pam_unix(sshd:session): session closed for user core Jul 11 00:27:27.779241 systemd[1]: sshd@14-10.0.0.159:22-10.0.0.1:48080.service: Deactivated successfully. Jul 11 00:27:27.781870 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:27:27.782631 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Jul 11 00:27:27.783723 systemd-logind[1445]: Removed session 15. Jul 11 00:27:27.869377 kubelet[2560]: I0711 00:27:27.868940 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f7b5d6c54-pbqrn" podStartSLOduration=28.187134537 podStartE2EDuration="36.868917477s" podCreationTimestamp="2025-07-11 00:26:51 +0000 UTC" firstStartedPulling="2025-07-11 00:27:18.606303156 +0000 UTC m=+42.198796665" lastFinishedPulling="2025-07-11 00:27:27.288086096 +0000 UTC m=+50.880579605" observedRunningTime="2025-07-11 00:27:27.868860482 +0000 UTC m=+51.461353991" watchObservedRunningTime="2025-07-11 00:27:27.868917477 +0000 UTC m=+51.461410996" Jul 11 00:27:27.872568 systemd[1]: run-containerd-runc-k8s.io-7e6e7fcea824e844352216dcb4a38231b2620756b77cf83e6b4cefda8553c85d-runc.bf2JIh.mount: Deactivated successfully. Jul 11 00:27:27.887297 systemd[1]: run-containerd-runc-k8s.io-939414b6c67a9dccc6cb017427accb2bf076205601a48cdcad5b1d3f56ad13a5-runc.j93KWG.mount: Deactivated successfully. 
Jul 11 00:27:28.858590 kubelet[2560]: I0711 00:27:28.858544 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:27:30.582466 containerd[1465]: time="2025-07-11T00:27:30.582411181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:30.583249 containerd[1465]: time="2025-07-11T00:27:30.583216805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 11 00:27:30.584560 containerd[1465]: time="2025-07-11T00:27:30.584536632Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:30.586832 containerd[1465]: time="2025-07-11T00:27:30.586801863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:30.587693 containerd[1465]: time="2025-07-11T00:27:30.587642591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.299339936s" Jul 11 00:27:30.587736 containerd[1465]: time="2025-07-11T00:27:30.587691973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 11 00:27:30.593513 containerd[1465]: time="2025-07-11T00:27:30.593089812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 00:27:30.638781 containerd[1465]: time="2025-07-11T00:27:30.638728226Z" level=info msg="CreateContainer within sandbox \"fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 00:27:30.655624 containerd[1465]: time="2025-07-11T00:27:30.655571798Z" level=info msg="CreateContainer within sandbox \"fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b12933b303476fa0e51a04a97bd6549034817e1fab35d6a103d4f5fa93c21c55\"" Jul 11 00:27:30.656316 containerd[1465]: time="2025-07-11T00:27:30.656207667Z" level=info msg="StartContainer for \"b12933b303476fa0e51a04a97bd6549034817e1fab35d6a103d4f5fa93c21c55\"" Jul 11 00:27:30.700025 systemd[1]: Started cri-containerd-b12933b303476fa0e51a04a97bd6549034817e1fab35d6a103d4f5fa93c21c55.scope - libcontainer container b12933b303476fa0e51a04a97bd6549034817e1fab35d6a103d4f5fa93c21c55. 
Jul 11 00:27:30.751358 containerd[1465]: time="2025-07-11T00:27:30.751303747Z" level=info msg="StartContainer for \"b12933b303476fa0e51a04a97bd6549034817e1fab35d6a103d4f5fa93c21c55\" returns successfully" Jul 11 00:27:30.962575 kubelet[2560]: I0711 00:27:30.962499 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f7c9fffd4-rk9nh" podStartSLOduration=25.057082158 podStartE2EDuration="36.962479344s" podCreationTimestamp="2025-07-11 00:26:54 +0000 UTC" firstStartedPulling="2025-07-11 00:27:18.687489398 +0000 UTC m=+42.279982908" lastFinishedPulling="2025-07-11 00:27:30.592886585 +0000 UTC m=+54.185380094" observedRunningTime="2025-07-11 00:27:30.880170746 +0000 UTC m=+54.472664265" watchObservedRunningTime="2025-07-11 00:27:30.962479344 +0000 UTC m=+54.554972853" Jul 11 00:27:32.744811 containerd[1465]: time="2025-07-11T00:27:32.744735369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:32.745863 containerd[1465]: time="2025-07-11T00:27:32.745783217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 11 00:27:32.747185 containerd[1465]: time="2025-07-11T00:27:32.747152403Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:32.750281 containerd[1465]: time="2025-07-11T00:27:32.750204516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:32.753481 containerd[1465]: time="2025-07-11T00:27:32.753433489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.160300728s" Jul 11 00:27:32.753481 containerd[1465]: time="2025-07-11T00:27:32.753477601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 11 00:27:32.790209 systemd[1]: Started sshd@15-10.0.0.159:22-10.0.0.1:35586.service - OpenSSH per-connection server daemon (10.0.0.1:35586). Jul 11 00:27:32.845537 containerd[1465]: time="2025-07-11T00:27:32.844572882Z" level=info msg="CreateContainer within sandbox \"8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 00:27:32.858219 sshd[5693]: Accepted publickey for core from 10.0.0.1 port 35586 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:27:32.860540 sshd[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:27:32.865689 systemd-logind[1445]: New session 16 of user core. Jul 11 00:27:32.870891 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 00:27:33.225379 sshd[5693]: pam_unix(sshd:session): session closed for user core Jul 11 00:27:33.230555 systemd[1]: sshd@15-10.0.0.159:22-10.0.0.1:35586.service: Deactivated successfully. 
Jul 11 00:27:33.233164 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:27:33.233856 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:27:33.235053 systemd-logind[1445]: Removed session 16. Jul 11 00:27:33.473891 containerd[1465]: time="2025-07-11T00:27:33.473815254Z" level=info msg="CreateContainer within sandbox \"8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a5e4bba17eafb8c27d7e798830d85c20cb724da96f5ce438745d43245d7440c0\"" Jul 11 00:27:33.474677 containerd[1465]: time="2025-07-11T00:27:33.474572855Z" level=info msg="StartContainer for \"a5e4bba17eafb8c27d7e798830d85c20cb724da96f5ce438745d43245d7440c0\"" Jul 11 00:27:33.512885 systemd[1]: Started cri-containerd-a5e4bba17eafb8c27d7e798830d85c20cb724da96f5ce438745d43245d7440c0.scope - libcontainer container a5e4bba17eafb8c27d7e798830d85c20cb724da96f5ce438745d43245d7440c0. Jul 11 00:27:33.547105 containerd[1465]: time="2025-07-11T00:27:33.547052484Z" level=info msg="StartContainer for \"a5e4bba17eafb8c27d7e798830d85c20cb724da96f5ce438745d43245d7440c0\" returns successfully" Jul 11 00:27:33.565521 containerd[1465]: time="2025-07-11T00:27:33.564880106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 00:27:35.874411 containerd[1465]: time="2025-07-11T00:27:35.874316044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:35.932573 containerd[1465]: time="2025-07-11T00:27:35.932502835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 11 00:27:36.009207 containerd[1465]: time="2025-07-11T00:27:36.009136639Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:36.053364 containerd[1465]: time="2025-07-11T00:27:36.053295007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:27:36.054072 containerd[1465]: time="2025-07-11T00:27:36.054013389Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.488471671s" Jul 11 00:27:36.054155 containerd[1465]: time="2025-07-11T00:27:36.054069464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 11 00:27:36.166639 containerd[1465]: time="2025-07-11T00:27:36.166482009Z" level=info msg="CreateContainer within sandbox \"8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 11 00:27:36.482674 containerd[1465]: time="2025-07-11T00:27:36.482239231Z" level=info msg="StopPodSandbox for 
\"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\"" Jul 11 00:27:36.930446 containerd[1465]: 2025-07-11 00:27:36.750 [WARNING][5762] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0", GenerateName:"calico-apiserver-6f647b777b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca320139-04b8-474f-b513-d5dae70779c9", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f647b777b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54", Pod:"calico-apiserver-6f647b777b-hj2zv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e2b514fea0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:36.930446 containerd[1465]: 2025-07-11 00:27:36.750 [INFO][5762] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Jul 11 00:27:36.930446 containerd[1465]: 2025-07-11 00:27:36.750 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" iface="eth0" netns="" Jul 11 00:27:36.930446 containerd[1465]: 2025-07-11 00:27:36.751 [INFO][5762] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Jul 11 00:27:36.930446 containerd[1465]: 2025-07-11 00:27:36.751 [INFO][5762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Jul 11 00:27:36.930446 containerd[1465]: 2025-07-11 00:27:36.776 [INFO][5773] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" HandleID="k8s-pod-network.1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:36.930446 containerd[1465]: 2025-07-11 00:27:36.777 [INFO][5773] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:36.930446 containerd[1465]: 2025-07-11 00:27:36.777 [INFO][5773] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:27:36.930446 containerd[1465]: 2025-07-11 00:27:36.808 [WARNING][5773] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" HandleID="k8s-pod-network.1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:36.930446 containerd[1465]: 2025-07-11 00:27:36.808 [INFO][5773] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" HandleID="k8s-pod-network.1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:36.930446 containerd[1465]: 2025-07-11 00:27:36.923 [INFO][5773] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:36.930446 containerd[1465]: 2025-07-11 00:27:36.926 [INFO][5762] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Jul 11 00:27:36.980459 containerd[1465]: time="2025-07-11T00:27:36.938736508Z" level=info msg="TearDown network for sandbox \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\" successfully" Jul 11 00:27:36.980459 containerd[1465]: time="2025-07-11T00:27:36.938776202Z" level=info msg="StopPodSandbox for \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\" returns successfully" Jul 11 00:27:37.010327 containerd[1465]: time="2025-07-11T00:27:37.010246491Z" level=info msg="RemovePodSandbox for \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\"" Jul 11 00:27:37.013285 containerd[1465]: time="2025-07-11T00:27:37.013239899Z" level=info msg="Forcibly stopping sandbox \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\"" Jul 11 00:27:37.206890 containerd[1465]: 2025-07-11 00:27:37.170 [WARNING][5790] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0", GenerateName:"calico-apiserver-6f647b777b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca320139-04b8-474f-b513-d5dae70779c9", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f647b777b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54", Pod:"calico-apiserver-6f647b777b-hj2zv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e2b514fea0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:37.206890 containerd[1465]: 2025-07-11 00:27:37.170 [INFO][5790] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Jul 11 00:27:37.206890 containerd[1465]: 2025-07-11 00:27:37.170 [INFO][5790] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" iface="eth0" netns="" Jul 11 00:27:37.206890 containerd[1465]: 2025-07-11 00:27:37.170 [INFO][5790] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Jul 11 00:27:37.206890 containerd[1465]: 2025-07-11 00:27:37.170 [INFO][5790] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Jul 11 00:27:37.206890 containerd[1465]: 2025-07-11 00:27:37.192 [INFO][5798] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" HandleID="k8s-pod-network.1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:37.206890 containerd[1465]: 2025-07-11 00:27:37.192 [INFO][5798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:37.206890 containerd[1465]: 2025-07-11 00:27:37.192 [INFO][5798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:37.206890 containerd[1465]: 2025-07-11 00:27:37.198 [WARNING][5798] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" HandleID="k8s-pod-network.1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:37.206890 containerd[1465]: 2025-07-11 00:27:37.198 [INFO][5798] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" HandleID="k8s-pod-network.1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0" Jul 11 00:27:37.206890 containerd[1465]: 2025-07-11 00:27:37.200 [INFO][5798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:37.206890 containerd[1465]: 2025-07-11 00:27:37.203 [INFO][5790] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f" Jul 11 00:27:37.206890 containerd[1465]: time="2025-07-11T00:27:37.206820390Z" level=info msg="TearDown network for sandbox \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\" successfully" Jul 11 00:27:37.449436 containerd[1465]: time="2025-07-11T00:27:37.449352663Z" level=info msg="CreateContainer within sandbox \"8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2ca0397b71cfcf4a874756aea16d4d5cd73f4743ec7fad25bea454fafff7637b\"" Jul 11 00:27:37.449910 containerd[1465]: time="2025-07-11T00:27:37.449882834Z" level=info msg="StartContainer for \"2ca0397b71cfcf4a874756aea16d4d5cd73f4743ec7fad25bea454fafff7637b\"" Jul 11 00:27:37.450244 containerd[1465]: time="2025-07-11T00:27:37.450176282Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:27:37.487778 containerd[1465]: time="2025-07-11T00:27:37.487497827Z" level=info msg="RemovePodSandbox \"1ecc60ef04e69c4c7a82db952a3d17e0212acca63115b1d0cd8208b48c2ee95f\" returns successfully" Jul 11 00:27:37.499768 systemd[1]: Started cri-containerd-2ca0397b71cfcf4a874756aea16d4d5cd73f4743ec7fad25bea454fafff7637b.scope - libcontainer container 2ca0397b71cfcf4a874756aea16d4d5cd73f4743ec7fad25bea454fafff7637b. Jul 11 00:27:37.502536 containerd[1465]: time="2025-07-11T00:27:37.502015182Z" level=info msg="StopPodSandbox for \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\"" Jul 11 00:27:37.817728 containerd[1465]: time="2025-07-11T00:27:37.817524260Z" level=info msg="StartContainer for \"2ca0397b71cfcf4a874756aea16d4d5cd73f4743ec7fad25bea454fafff7637b\" returns successfully" Jul 11 00:27:37.939951 containerd[1465]: 2025-07-11 00:27:37.585 [WARNING][5841] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" WorkloadEndpoint="localhost-k8s-whisker--85d6c9788d--fh75b-eth0" Jul 11 00:27:37.939951 containerd[1465]: 2025-07-11 00:27:37.586 [INFO][5841] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Jul 11 00:27:37.939951 containerd[1465]: 2025-07-11 00:27:37.586 [INFO][5841] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" iface="eth0" netns="" Jul 11 00:27:37.939951 containerd[1465]: 2025-07-11 00:27:37.586 [INFO][5841] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Jul 11 00:27:37.939951 containerd[1465]: 2025-07-11 00:27:37.586 [INFO][5841] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Jul 11 00:27:37.939951 containerd[1465]: 2025-07-11 00:27:37.690 [INFO][5861] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" HandleID="k8s-pod-network.759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Workload="localhost-k8s-whisker--85d6c9788d--fh75b-eth0" Jul 11 00:27:37.939951 containerd[1465]: 2025-07-11 00:27:37.691 [INFO][5861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:37.939951 containerd[1465]: 2025-07-11 00:27:37.691 [INFO][5861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:37.939951 containerd[1465]: 2025-07-11 00:27:37.897 [WARNING][5861] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" HandleID="k8s-pod-network.759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Workload="localhost-k8s-whisker--85d6c9788d--fh75b-eth0" Jul 11 00:27:37.939951 containerd[1465]: 2025-07-11 00:27:37.897 [INFO][5861] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" HandleID="k8s-pod-network.759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Workload="localhost-k8s-whisker--85d6c9788d--fh75b-eth0" Jul 11 00:27:37.939951 containerd[1465]: 2025-07-11 00:27:37.932 [INFO][5861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:37.939951 containerd[1465]: 2025-07-11 00:27:37.936 [INFO][5841] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Jul 11 00:27:37.940696 containerd[1465]: time="2025-07-11T00:27:37.940007151Z" level=info msg="TearDown network for sandbox \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\" successfully" Jul 11 00:27:37.940696 containerd[1465]: time="2025-07-11T00:27:37.940042938Z" level=info msg="StopPodSandbox for \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\" returns successfully" Jul 11 00:27:37.940778 containerd[1465]: time="2025-07-11T00:27:37.940736856Z" level=info msg="RemovePodSandbox for \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\"" Jul 11 00:27:37.940823 containerd[1465]: time="2025-07-11T00:27:37.940790196Z" level=info msg="Forcibly stopping sandbox \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\"" Jul 11 00:27:38.051161 kubelet[2560]: I0711 00:27:38.051078 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xd24s" podStartSLOduration=27.064483833 podStartE2EDuration="44.051056382s" podCreationTimestamp="2025-07-11 00:26:54 +0000 UTC" firstStartedPulling="2025-07-11 00:27:19.068428725 +0000 UTC m=+42.660922234" lastFinishedPulling="2025-07-11 00:27:36.055001274 +0000 UTC m=+59.647494783" observedRunningTime="2025-07-11 00:27:38.042766925 +0000 UTC m=+61.635260434" watchObservedRunningTime="2025-07-11 00:27:38.051056382 +0000 UTC m=+61.643549891" Jul 11 00:27:38.083871 containerd[1465]: 2025-07-11 00:27:38.044 [WARNING][5881] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" WorkloadEndpoint="localhost-k8s-whisker--85d6c9788d--fh75b-eth0" Jul 11 00:27:38.083871 containerd[1465]: 2025-07-11 00:27:38.044 [INFO][5881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Jul 11 00:27:38.083871 containerd[1465]: 2025-07-11 00:27:38.044 [INFO][5881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" iface="eth0" netns="" Jul 11 00:27:38.083871 containerd[1465]: 2025-07-11 00:27:38.044 [INFO][5881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Jul 11 00:27:38.083871 containerd[1465]: 2025-07-11 00:27:38.044 [INFO][5881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Jul 11 00:27:38.083871 containerd[1465]: 2025-07-11 00:27:38.069 [INFO][5890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" HandleID="k8s-pod-network.759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Workload="localhost-k8s-whisker--85d6c9788d--fh75b-eth0" Jul 11 00:27:38.083871 containerd[1465]: 2025-07-11 00:27:38.069 [INFO][5890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:38.083871 containerd[1465]: 2025-07-11 00:27:38.069 [INFO][5890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:38.083871 containerd[1465]: 2025-07-11 00:27:38.075 [WARNING][5890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" HandleID="k8s-pod-network.759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Workload="localhost-k8s-whisker--85d6c9788d--fh75b-eth0" Jul 11 00:27:38.083871 containerd[1465]: 2025-07-11 00:27:38.075 [INFO][5890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" HandleID="k8s-pod-network.759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Workload="localhost-k8s-whisker--85d6c9788d--fh75b-eth0" Jul 11 00:27:38.083871 containerd[1465]: 2025-07-11 00:27:38.077 [INFO][5890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:38.083871 containerd[1465]: 2025-07-11 00:27:38.080 [INFO][5881] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28" Jul 11 00:27:38.083871 containerd[1465]: time="2025-07-11T00:27:38.083752222Z" level=info msg="TearDown network for sandbox \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\" successfully" Jul 11 00:27:38.089637 containerd[1465]: time="2025-07-11T00:27:38.089488258Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:27:38.089797 containerd[1465]: time="2025-07-11T00:27:38.089651083Z" level=info msg="RemovePodSandbox \"759ec0e5347cde947e434955541ba8ecac6abe74bac09b53b6e714599ea24d28\" returns successfully" Jul 11 00:27:38.090216 containerd[1465]: time="2025-07-11T00:27:38.090168752Z" level=info msg="StopPodSandbox for \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\"" Jul 11 00:27:38.231733 containerd[1465]: 2025-07-11 00:27:38.133 [WARNING][5908] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0", GenerateName:"calico-kube-controllers-f7c9fffd4-", Namespace:"calico-system", SelfLink:"", UID:"9bfd05fa-8a91-44eb-8f96-a9e542aaa056", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7c9fffd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a", Pod:"calico-kube-controllers-f7c9fffd4-rk9nh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ffcc6d4897", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:38.231733 containerd[1465]: 2025-07-11 00:27:38.133 [INFO][5908] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Jul 11 00:27:38.231733 containerd[1465]: 2025-07-11 00:27:38.133 [INFO][5908] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" iface="eth0" netns="" Jul 11 00:27:38.231733 containerd[1465]: 2025-07-11 00:27:38.133 [INFO][5908] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Jul 11 00:27:38.231733 containerd[1465]: 2025-07-11 00:27:38.133 [INFO][5908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Jul 11 00:27:38.231733 containerd[1465]: 2025-07-11 00:27:38.160 [INFO][5917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" HandleID="k8s-pod-network.4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Workload="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:38.231733 containerd[1465]: 2025-07-11 00:27:38.160 [INFO][5917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:38.231733 containerd[1465]: 2025-07-11 00:27:38.160 [INFO][5917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:38.231733 containerd[1465]: 2025-07-11 00:27:38.220 [WARNING][5917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" HandleID="k8s-pod-network.4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Workload="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:38.231733 containerd[1465]: 2025-07-11 00:27:38.220 [INFO][5917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" HandleID="k8s-pod-network.4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Workload="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:38.231733 containerd[1465]: 2025-07-11 00:27:38.222 [INFO][5917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:38.231733 containerd[1465]: 2025-07-11 00:27:38.227 [INFO][5908] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Jul 11 00:27:38.232240 containerd[1465]: time="2025-07-11T00:27:38.231787822Z" level=info msg="TearDown network for sandbox \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\" successfully" Jul 11 00:27:38.232240 containerd[1465]: time="2025-07-11T00:27:38.231832446Z" level=info msg="StopPodSandbox for \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\" returns successfully" Jul 11 00:27:38.232643 containerd[1465]: time="2025-07-11T00:27:38.232542424Z" level=info msg="RemovePodSandbox for \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\"" Jul 11 00:27:38.232838 containerd[1465]: time="2025-07-11T00:27:38.232679542Z" level=info msg="Forcibly stopping sandbox \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\"" Jul 11 00:27:38.245045 systemd[1]: Started sshd@16-10.0.0.159:22-10.0.0.1:35592.service - OpenSSH per-connection server daemon (10.0.0.1:35592). Jul 11 00:27:38.307318 sshd[5932]: Accepted publickey for core from 10.0.0.1 port 35592 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:27:38.310137 sshd[5932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:27:38.316777 systemd-logind[1445]: New session 17 of user core. Jul 11 00:27:38.326883 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 00:27:38.341553 containerd[1465]: 2025-07-11 00:27:38.295 [WARNING][5937] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0", GenerateName:"calico-kube-controllers-f7c9fffd4-", Namespace:"calico-system", SelfLink:"", UID:"9bfd05fa-8a91-44eb-8f96-a9e542aaa056", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7c9fffd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe63ae533620bda24bb7b2f9f1b2829c9d0b98ff67df7bef9880e8bfdca5124a", Pod:"calico-kube-controllers-f7c9fffd4-rk9nh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ffcc6d4897", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:38.341553 containerd[1465]: 2025-07-11 00:27:38.295 [INFO][5937] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Jul 11 00:27:38.341553 containerd[1465]: 2025-07-11 00:27:38.295 [INFO][5937] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" iface="eth0" netns="" Jul 11 00:27:38.341553 containerd[1465]: 2025-07-11 00:27:38.295 [INFO][5937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Jul 11 00:27:38.341553 containerd[1465]: 2025-07-11 00:27:38.295 [INFO][5937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Jul 11 00:27:38.341553 containerd[1465]: 2025-07-11 00:27:38.324 [INFO][5946] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" HandleID="k8s-pod-network.4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Workload="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:38.341553 containerd[1465]: 2025-07-11 00:27:38.324 [INFO][5946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:38.341553 containerd[1465]: 2025-07-11 00:27:38.324 [INFO][5946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:38.341553 containerd[1465]: 2025-07-11 00:27:38.334 [WARNING][5946] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" HandleID="k8s-pod-network.4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Workload="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:38.341553 containerd[1465]: 2025-07-11 00:27:38.334 [INFO][5946] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" HandleID="k8s-pod-network.4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Workload="localhost-k8s-calico--kube--controllers--f7c9fffd4--rk9nh-eth0" Jul 11 00:27:38.341553 containerd[1465]: 2025-07-11 00:27:38.335 [INFO][5946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:38.341553 containerd[1465]: 2025-07-11 00:27:38.338 [INFO][5937] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a" Jul 11 00:27:38.341553 containerd[1465]: time="2025-07-11T00:27:38.341527241Z" level=info msg="TearDown network for sandbox \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\" successfully" Jul 11 00:27:38.346149 containerd[1465]: time="2025-07-11T00:27:38.346105490Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:27:38.346214 containerd[1465]: time="2025-07-11T00:27:38.346168808Z" level=info msg="RemovePodSandbox \"4d525d79b5135b1393f181dcef7286118a19f709ee9908e6ed321ba74c7b627a\" returns successfully" Jul 11 00:27:38.346884 containerd[1465]: time="2025-07-11T00:27:38.346841969Z" level=info msg="StopPodSandbox for \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\"" Jul 11 00:27:38.434075 containerd[1465]: 2025-07-11 00:27:38.390 [WARNING][5964] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1d7b0523-a28a-4b28-9a16-dbf8c602e2f1", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970", Pod:"goldmane-768f4c5c69-dnd8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9ad08f285b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:38.434075 containerd[1465]: 2025-07-11 00:27:38.391 [INFO][5964] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Jul 11 00:27:38.434075 containerd[1465]: 2025-07-11 00:27:38.391 [INFO][5964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" iface="eth0" netns="" Jul 11 00:27:38.434075 containerd[1465]: 2025-07-11 00:27:38.391 [INFO][5964] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Jul 11 00:27:38.434075 containerd[1465]: 2025-07-11 00:27:38.391 [INFO][5964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Jul 11 00:27:38.434075 containerd[1465]: 2025-07-11 00:27:38.413 [INFO][5975] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" HandleID="k8s-pod-network.1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Workload="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:38.434075 containerd[1465]: 2025-07-11 00:27:38.413 [INFO][5975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:38.434075 containerd[1465]: 2025-07-11 00:27:38.413 [INFO][5975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:38.434075 containerd[1465]: 2025-07-11 00:27:38.419 [WARNING][5975] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" HandleID="k8s-pod-network.1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Workload="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:38.434075 containerd[1465]: 2025-07-11 00:27:38.420 [INFO][5975] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" HandleID="k8s-pod-network.1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Workload="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:38.434075 containerd[1465]: 2025-07-11 00:27:38.423 [INFO][5975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:38.434075 containerd[1465]: 2025-07-11 00:27:38.427 [INFO][5964] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Jul 11 00:27:38.434075 containerd[1465]: time="2025-07-11T00:27:38.433856711Z" level=info msg="TearDown network for sandbox \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\" successfully" Jul 11 00:27:38.434075 containerd[1465]: time="2025-07-11T00:27:38.433896225Z" level=info msg="StopPodSandbox for \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\" returns successfully" Jul 11 00:27:38.434728 containerd[1465]: time="2025-07-11T00:27:38.434492231Z" level=info msg="RemovePodSandbox for \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\"" Jul 11 00:27:38.434728 containerd[1465]: time="2025-07-11T00:27:38.434536343Z" level=info msg="Forcibly stopping sandbox \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\"" Jul 11 00:27:38.526975 containerd[1465]: 2025-07-11 00:27:38.474 [WARNING][5997] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1d7b0523-a28a-4b28-9a16-dbf8c602e2f1", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b60dc3d2c51cb01838fa7cb503e07f6f0953ad9421537be2175f95585ba19970", Pod:"goldmane-768f4c5c69-dnd8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib9ad08f285b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:38.526975 containerd[1465]: 2025-07-11 00:27:38.475 [INFO][5997] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Jul 11 00:27:38.526975 containerd[1465]: 2025-07-11 00:27:38.475 [INFO][5997] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" iface="eth0" netns="" Jul 11 00:27:38.526975 containerd[1465]: 2025-07-11 00:27:38.475 [INFO][5997] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Jul 11 00:27:38.526975 containerd[1465]: 2025-07-11 00:27:38.475 [INFO][5997] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Jul 11 00:27:38.526975 containerd[1465]: 2025-07-11 00:27:38.510 [INFO][6006] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" HandleID="k8s-pod-network.1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Workload="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:38.526975 containerd[1465]: 2025-07-11 00:27:38.510 [INFO][6006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:38.526975 containerd[1465]: 2025-07-11 00:27:38.510 [INFO][6006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:38.526975 containerd[1465]: 2025-07-11 00:27:38.516 [WARNING][6006] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" HandleID="k8s-pod-network.1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Workload="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:38.526975 containerd[1465]: 2025-07-11 00:27:38.517 [INFO][6006] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" HandleID="k8s-pod-network.1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Workload="localhost-k8s-goldmane--768f4c5c69--dnd8p-eth0" Jul 11 00:27:38.526975 containerd[1465]: 2025-07-11 00:27:38.518 [INFO][6006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:38.526975 containerd[1465]: 2025-07-11 00:27:38.523 [INFO][5997] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547" Jul 11 00:27:38.527928 containerd[1465]: time="2025-07-11T00:27:38.527024592Z" level=info msg="TearDown network for sandbox \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\" successfully" Jul 11 00:27:38.540534 containerd[1465]: time="2025-07-11T00:27:38.540475940Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:27:38.540633 containerd[1465]: time="2025-07-11T00:27:38.540560267Z" level=info msg="RemovePodSandbox \"1850103efcebe378ec12e6f4ea07bb569a3c0eaae6db755bba77a1f6732ce547\" returns successfully" Jul 11 00:27:38.541202 containerd[1465]: time="2025-07-11T00:27:38.541142998Z" level=info msg="StopPodSandbox for \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\"" Jul 11 00:27:38.627657 containerd[1465]: 2025-07-11 00:27:38.583 [WARNING][6024] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gvf85-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"532c872b-897c-4658-b37f-c0b4508abd55", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3", Pod:"coredns-674b8bbfcf-gvf85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8adc612b392", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:38.627657 containerd[1465]: 2025-07-11 00:27:38.584 [INFO][6024] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Jul 11 00:27:38.627657 containerd[1465]: 2025-07-11 00:27:38.584 [INFO][6024] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" iface="eth0" netns="" Jul 11 00:27:38.627657 containerd[1465]: 2025-07-11 00:27:38.584 [INFO][6024] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Jul 11 00:27:38.627657 containerd[1465]: 2025-07-11 00:27:38.584 [INFO][6024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Jul 11 00:27:38.627657 containerd[1465]: 2025-07-11 00:27:38.611 [INFO][6033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" HandleID="k8s-pod-network.bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Workload="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:38.627657 containerd[1465]: 2025-07-11 00:27:38.611 [INFO][6033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:38.627657 containerd[1465]: 2025-07-11 00:27:38.611 [INFO][6033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:27:38.627657 containerd[1465]: 2025-07-11 00:27:38.618 [WARNING][6033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" HandleID="k8s-pod-network.bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Workload="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:38.627657 containerd[1465]: 2025-07-11 00:27:38.618 [INFO][6033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" HandleID="k8s-pod-network.bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Workload="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:38.627657 containerd[1465]: 2025-07-11 00:27:38.619 [INFO][6033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:38.627657 containerd[1465]: 2025-07-11 00:27:38.624 [INFO][6024] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Jul 11 00:27:38.628213 containerd[1465]: time="2025-07-11T00:27:38.627770546Z" level=info msg="TearDown network for sandbox \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\" successfully" Jul 11 00:27:38.628213 containerd[1465]: time="2025-07-11T00:27:38.627806163Z" level=info msg="StopPodSandbox for \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\" returns successfully" Jul 11 00:27:38.628276 containerd[1465]: time="2025-07-11T00:27:38.628253129Z" level=info msg="RemovePodSandbox for \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\"" Jul 11 00:27:38.628303 containerd[1465]: time="2025-07-11T00:27:38.628283716Z" level=info msg="Forcibly stopping sandbox \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\"" Jul 11 00:27:38.640041 kubelet[2560]: I0711 00:27:38.640001 2560 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 11 00:27:38.643198 kubelet[2560]: I0711 00:27:38.643100 2560 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 11 00:27:38.673538 sshd[5932]: pam_unix(sshd:session): session closed for user core Jul 11 00:27:38.679387 systemd[1]: sshd@16-10.0.0.159:22-10.0.0.1:35592.service: Deactivated successfully. Jul 11 00:27:38.682526 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:27:38.685411 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:27:38.686910 systemd-logind[1445]: Removed session 17. Jul 11 00:27:38.712395 containerd[1465]: 2025-07-11 00:27:38.670 [WARNING][6051] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gvf85-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"532c872b-897c-4658-b37f-c0b4508abd55", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2da956b8ef3c0e5d7bb09b98b014293f006c42457c4dcdbb16c79ea1b08b9d3", Pod:"coredns-674b8bbfcf-gvf85", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8adc612b392", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:38.712395 containerd[1465]: 2025-07-11 00:27:38.670 [INFO][6051] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Jul 11 00:27:38.712395 containerd[1465]: 2025-07-11 00:27:38.670 [INFO][6051] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" iface="eth0" netns="" Jul 11 00:27:38.712395 containerd[1465]: 2025-07-11 00:27:38.670 [INFO][6051] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Jul 11 00:27:38.712395 containerd[1465]: 2025-07-11 00:27:38.670 [INFO][6051] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Jul 11 00:27:38.712395 containerd[1465]: 2025-07-11 00:27:38.699 [INFO][6059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" HandleID="k8s-pod-network.bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Workload="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:38.712395 containerd[1465]: 2025-07-11 00:27:38.700 [INFO][6059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:38.712395 containerd[1465]: 2025-07-11 00:27:38.700 [INFO][6059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:27:38.712395 containerd[1465]: 2025-07-11 00:27:38.705 [WARNING][6059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" HandleID="k8s-pod-network.bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Workload="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:38.712395 containerd[1465]: 2025-07-11 00:27:38.705 [INFO][6059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" HandleID="k8s-pod-network.bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Workload="localhost-k8s-coredns--674b8bbfcf--gvf85-eth0" Jul 11 00:27:38.712395 containerd[1465]: 2025-07-11 00:27:38.706 [INFO][6059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:38.712395 containerd[1465]: 2025-07-11 00:27:38.709 [INFO][6051] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef" Jul 11 00:27:38.713063 containerd[1465]: time="2025-07-11T00:27:38.712436623Z" level=info msg="TearDown network for sandbox \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\" successfully" Jul 11 00:27:38.717112 containerd[1465]: time="2025-07-11T00:27:38.717072069Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:27:38.717179 containerd[1465]: time="2025-07-11T00:27:38.717139926Z" level=info msg="RemovePodSandbox \"bea1d3608b35df18f221ede0076be62fc6f169504e4cce6699d2089a87910fef\" returns successfully" Jul 11 00:27:38.717806 containerd[1465]: time="2025-07-11T00:27:38.717759005Z" level=info msg="StopPodSandbox for \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\"" Jul 11 00:27:38.794496 containerd[1465]: 2025-07-11 00:27:38.755 [WARNING][6078] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0", GenerateName:"calico-apiserver-6f647b777b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4787e60-b0e6-42f0-b414-39732f919000", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f647b777b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6", Pod:"calico-apiserver-6f647b777b-qhnsp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94d89816dd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:38.794496 containerd[1465]: 2025-07-11 00:27:38.756 [INFO][6078] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Jul 11 00:27:38.794496 containerd[1465]: 2025-07-11 00:27:38.756 [INFO][6078] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" iface="eth0" netns="" Jul 11 00:27:38.794496 containerd[1465]: 2025-07-11 00:27:38.756 [INFO][6078] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Jul 11 00:27:38.794496 containerd[1465]: 2025-07-11 00:27:38.756 [INFO][6078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Jul 11 00:27:38.794496 containerd[1465]: 2025-07-11 00:27:38.778 [INFO][6087] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" HandleID="k8s-pod-network.b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:38.794496 containerd[1465]: 2025-07-11 00:27:38.778 [INFO][6087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:38.794496 containerd[1465]: 2025-07-11 00:27:38.778 [INFO][6087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:38.794496 containerd[1465]: 2025-07-11 00:27:38.786 [WARNING][6087] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" HandleID="k8s-pod-network.b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:38.794496 containerd[1465]: 2025-07-11 00:27:38.786 [INFO][6087] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" HandleID="k8s-pod-network.b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:38.794496 containerd[1465]: 2025-07-11 00:27:38.787 [INFO][6087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:38.794496 containerd[1465]: 2025-07-11 00:27:38.790 [INFO][6078] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Jul 11 00:27:38.795102 containerd[1465]: time="2025-07-11T00:27:38.794548418Z" level=info msg="TearDown network for sandbox \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\" successfully" Jul 11 00:27:38.795102 containerd[1465]: time="2025-07-11T00:27:38.794582141Z" level=info msg="StopPodSandbox for \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\" returns successfully" Jul 11 00:27:38.795215 containerd[1465]: time="2025-07-11T00:27:38.795183587Z" level=info msg="RemovePodSandbox for \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\"" Jul 11 00:27:38.795252 containerd[1465]: time="2025-07-11T00:27:38.795216499Z" level=info msg="Forcibly stopping sandbox \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\"" Jul 11 00:27:38.874061 containerd[1465]: 2025-07-11 00:27:38.838 [WARNING][6105] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0", GenerateName:"calico-apiserver-6f647b777b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4787e60-b0e6-42f0-b414-39732f919000", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f647b777b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6", Pod:"calico-apiserver-6f647b777b-qhnsp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94d89816dd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:38.874061 containerd[1465]: 2025-07-11 00:27:38.838 [INFO][6105] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Jul 11 00:27:38.874061 containerd[1465]: 2025-07-11 00:27:38.838 [INFO][6105] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" iface="eth0" netns="" Jul 11 00:27:38.874061 containerd[1465]: 2025-07-11 00:27:38.838 [INFO][6105] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Jul 11 00:27:38.874061 containerd[1465]: 2025-07-11 00:27:38.838 [INFO][6105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Jul 11 00:27:38.874061 containerd[1465]: 2025-07-11 00:27:38.859 [INFO][6113] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" HandleID="k8s-pod-network.b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:38.874061 containerd[1465]: 2025-07-11 00:27:38.859 [INFO][6113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:38.874061 containerd[1465]: 2025-07-11 00:27:38.859 [INFO][6113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:38.874061 containerd[1465]: 2025-07-11 00:27:38.865 [WARNING][6113] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" HandleID="k8s-pod-network.b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:38.874061 containerd[1465]: 2025-07-11 00:27:38.865 [INFO][6113] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" HandleID="k8s-pod-network.b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0" Jul 11 00:27:38.874061 containerd[1465]: 2025-07-11 00:27:38.867 [INFO][6113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:38.874061 containerd[1465]: 2025-07-11 00:27:38.870 [INFO][6105] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08" Jul 11 00:27:38.874721 containerd[1465]: time="2025-07-11T00:27:38.874088480Z" level=info msg="TearDown network for sandbox \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\" successfully" Jul 11 00:27:39.009466 containerd[1465]: time="2025-07-11T00:27:39.009320280Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:27:39.009466 containerd[1465]: time="2025-07-11T00:27:39.009418203Z" level=info msg="RemovePodSandbox \"b2d6f5de69317aa88533c223fb75a91f2a56a57e0949abe1d47d0b1803251b08\" returns successfully" Jul 11 00:27:39.010590 containerd[1465]: time="2025-07-11T00:27:39.010484411Z" level=info msg="StopPodSandbox for \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\"" Jul 11 00:27:39.088714 containerd[1465]: 2025-07-11 00:27:39.048 [WARNING][6131] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xd24s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d661932-2475-4fb4-890b-1d7cc7f7d3fc", ResourceVersion:"1251", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da", Pod:"csi-node-driver-xd24s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4209e93a805", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:39.088714 containerd[1465]: 2025-07-11 00:27:39.048 [INFO][6131] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Jul 11 00:27:39.088714 containerd[1465]: 2025-07-11 00:27:39.048 [INFO][6131] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" iface="eth0" netns="" Jul 11 00:27:39.088714 containerd[1465]: 2025-07-11 00:27:39.048 [INFO][6131] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Jul 11 00:27:39.088714 containerd[1465]: 2025-07-11 00:27:39.048 [INFO][6131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Jul 11 00:27:39.088714 containerd[1465]: 2025-07-11 00:27:39.073 [INFO][6141] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" HandleID="k8s-pod-network.0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Workload="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:39.088714 containerd[1465]: 2025-07-11 00:27:39.073 [INFO][6141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:39.088714 containerd[1465]: 2025-07-11 00:27:39.073 [INFO][6141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:39.088714 containerd[1465]: 2025-07-11 00:27:39.079 [WARNING][6141] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" HandleID="k8s-pod-network.0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Workload="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:39.088714 containerd[1465]: 2025-07-11 00:27:39.079 [INFO][6141] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" HandleID="k8s-pod-network.0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Workload="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:39.088714 containerd[1465]: 2025-07-11 00:27:39.081 [INFO][6141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:39.088714 containerd[1465]: 2025-07-11 00:27:39.085 [INFO][6131] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Jul 11 00:27:39.089245 containerd[1465]: time="2025-07-11T00:27:39.088753513Z" level=info msg="TearDown network for sandbox \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\" successfully" Jul 11 00:27:39.089245 containerd[1465]: time="2025-07-11T00:27:39.088791644Z" level=info msg="StopPodSandbox for \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\" returns successfully" Jul 11 00:27:39.089453 containerd[1465]: time="2025-07-11T00:27:39.089413569Z" level=info msg="RemovePodSandbox for \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\"" Jul 11 00:27:39.089499 containerd[1465]: time="2025-07-11T00:27:39.089457582Z" level=info msg="Forcibly stopping sandbox \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\"" Jul 11 00:27:39.166573 containerd[1465]: 2025-07-11 00:27:39.130 [WARNING][6159] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xd24s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7d661932-2475-4fb4-890b-1d7cc7f7d3fc", ResourceVersion:"1251", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f66db4219a036cc72578215e1b4f56b14543e12c108e67bb817117a2b4174da", Pod:"csi-node-driver-xd24s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4209e93a805", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:39.166573 containerd[1465]: 2025-07-11 00:27:39.131 [INFO][6159] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Jul 11 00:27:39.166573 containerd[1465]: 2025-07-11 00:27:39.131 [INFO][6159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" iface="eth0" netns="" Jul 11 00:27:39.166573 containerd[1465]: 2025-07-11 00:27:39.131 [INFO][6159] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Jul 11 00:27:39.166573 containerd[1465]: 2025-07-11 00:27:39.131 [INFO][6159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Jul 11 00:27:39.166573 containerd[1465]: 2025-07-11 00:27:39.152 [INFO][6167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" HandleID="k8s-pod-network.0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Workload="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:39.166573 containerd[1465]: 2025-07-11 00:27:39.152 [INFO][6167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:39.166573 containerd[1465]: 2025-07-11 00:27:39.152 [INFO][6167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:39.166573 containerd[1465]: 2025-07-11 00:27:39.159 [WARNING][6167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" HandleID="k8s-pod-network.0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Workload="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:39.166573 containerd[1465]: 2025-07-11 00:27:39.159 [INFO][6167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" HandleID="k8s-pod-network.0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Workload="localhost-k8s-csi--node--driver--xd24s-eth0" Jul 11 00:27:39.166573 containerd[1465]: 2025-07-11 00:27:39.160 [INFO][6167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:39.166573 containerd[1465]: 2025-07-11 00:27:39.163 [INFO][6159] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd" Jul 11 00:27:39.167065 containerd[1465]: time="2025-07-11T00:27:39.166662079Z" level=info msg="TearDown network for sandbox \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\" successfully" Jul 11 00:27:39.671464 containerd[1465]: time="2025-07-11T00:27:39.671399505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:27:39.671658 containerd[1465]: time="2025-07-11T00:27:39.671501857Z" level=info msg="RemovePodSandbox \"0133404fd2cafa4b2919611c3132da12ecb69592ce38aba778f262310e285ccd\" returns successfully" Jul 11 00:27:39.672067 containerd[1465]: time="2025-07-11T00:27:39.672045205Z" level=info msg="StopPodSandbox for \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\"" Jul 11 00:27:39.750027 containerd[1465]: 2025-07-11 00:27:39.712 [WARNING][6185] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0", GenerateName:"calico-apiserver-5f7b5d6c54-", Namespace:"calico-apiserver", SelfLink:"", UID:"26a3e5b9-9cc0-4afc-9ba0-86cf4b152857", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f7b5d6c54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4", Pod:"calico-apiserver-5f7b5d6c54-pbqrn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f574f60324", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:39.750027 containerd[1465]: 2025-07-11 00:27:39.713 [INFO][6185] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Jul 11 00:27:39.750027 containerd[1465]: 2025-07-11 00:27:39.713 [INFO][6185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" iface="eth0" netns="" Jul 11 00:27:39.750027 containerd[1465]: 2025-07-11 00:27:39.713 [INFO][6185] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Jul 11 00:27:39.750027 containerd[1465]: 2025-07-11 00:27:39.713 [INFO][6185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Jul 11 00:27:39.750027 containerd[1465]: 2025-07-11 00:27:39.735 [INFO][6194] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" HandleID="k8s-pod-network.de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Workload="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:39.750027 containerd[1465]: 2025-07-11 00:27:39.735 [INFO][6194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:39.750027 containerd[1465]: 2025-07-11 00:27:39.735 [INFO][6194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:39.750027 containerd[1465]: 2025-07-11 00:27:39.742 [WARNING][6194] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" HandleID="k8s-pod-network.de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Workload="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:39.750027 containerd[1465]: 2025-07-11 00:27:39.742 [INFO][6194] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" HandleID="k8s-pod-network.de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Workload="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:39.750027 containerd[1465]: 2025-07-11 00:27:39.743 [INFO][6194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:39.750027 containerd[1465]: 2025-07-11 00:27:39.746 [INFO][6185] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Jul 11 00:27:39.750540 containerd[1465]: time="2025-07-11T00:27:39.750085828Z" level=info msg="TearDown network for sandbox \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\" successfully" Jul 11 00:27:39.750540 containerd[1465]: time="2025-07-11T00:27:39.750122678Z" level=info msg="StopPodSandbox for \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\" returns successfully" Jul 11 00:27:39.750746 containerd[1465]: time="2025-07-11T00:27:39.750702835Z" level=info msg="RemovePodSandbox for \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\"" Jul 11 00:27:39.750746 containerd[1465]: time="2025-07-11T00:27:39.750744222Z" level=info msg="Forcibly stopping sandbox \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\"" Jul 11 00:27:39.828402 containerd[1465]: 2025-07-11 00:27:39.790 [WARNING][6212] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0", GenerateName:"calico-apiserver-5f7b5d6c54-", Namespace:"calico-apiserver", SelfLink:"", UID:"26a3e5b9-9cc0-4afc-9ba0-86cf4b152857", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f7b5d6c54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9faf57772507de039b382dbe1857facc7ef0ee51abc0fb3ec5dd6c5b2f4c0bd4", Pod:"calico-apiserver-5f7b5d6c54-pbqrn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f574f60324", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:39.828402 containerd[1465]: 2025-07-11 00:27:39.791 [INFO][6212] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Jul 11 00:27:39.828402 containerd[1465]: 2025-07-11 00:27:39.791 [INFO][6212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" iface="eth0" netns="" Jul 11 00:27:39.828402 containerd[1465]: 2025-07-11 00:27:39.791 [INFO][6212] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Jul 11 00:27:39.828402 containerd[1465]: 2025-07-11 00:27:39.791 [INFO][6212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Jul 11 00:27:39.828402 containerd[1465]: 2025-07-11 00:27:39.813 [INFO][6223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" HandleID="k8s-pod-network.de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Workload="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:39.828402 containerd[1465]: 2025-07-11 00:27:39.814 [INFO][6223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:39.828402 containerd[1465]: 2025-07-11 00:27:39.814 [INFO][6223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:39.828402 containerd[1465]: 2025-07-11 00:27:39.820 [WARNING][6223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" HandleID="k8s-pod-network.de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Workload="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:39.828402 containerd[1465]: 2025-07-11 00:27:39.820 [INFO][6223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" HandleID="k8s-pod-network.de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Workload="localhost-k8s-calico--apiserver--5f7b5d6c54--pbqrn-eth0" Jul 11 00:27:39.828402 containerd[1465]: 2025-07-11 00:27:39.821 [INFO][6223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:39.828402 containerd[1465]: 2025-07-11 00:27:39.824 [INFO][6212] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9" Jul 11 00:27:39.828872 containerd[1465]: time="2025-07-11T00:27:39.828452373Z" level=info msg="TearDown network for sandbox \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\" successfully" Jul 11 00:27:39.843139 containerd[1465]: time="2025-07-11T00:27:39.843081994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:27:39.843217 containerd[1465]: time="2025-07-11T00:27:39.843167204Z" level=info msg="RemovePodSandbox \"de592b086f94cff9c1124c6e7634a856dd369654169a07a7cb3c419537a877e9\" returns successfully" Jul 11 00:27:39.843957 containerd[1465]: time="2025-07-11T00:27:39.843792275Z" level=info msg="StopPodSandbox for \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\"" Jul 11 00:27:39.927294 containerd[1465]: 2025-07-11 00:27:39.883 [WARNING][6241] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--msvlj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"73fc6509-dcba-4609-91c7-d051cb3bbfc4", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01", Pod:"coredns-674b8bbfcf-msvlj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1aa84b469b9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:39.927294 containerd[1465]: 2025-07-11 00:27:39.884 [INFO][6241] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Jul 11 00:27:39.927294 containerd[1465]: 2025-07-11 00:27:39.884 [INFO][6241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" iface="eth0" netns="" Jul 11 00:27:39.927294 containerd[1465]: 2025-07-11 00:27:39.884 [INFO][6241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Jul 11 00:27:39.927294 containerd[1465]: 2025-07-11 00:27:39.884 [INFO][6241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Jul 11 00:27:39.927294 containerd[1465]: 2025-07-11 00:27:39.911 [INFO][6251] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" HandleID="k8s-pod-network.7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Workload="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0" Jul 11 00:27:39.927294 containerd[1465]: 2025-07-11 00:27:39.911 [INFO][6251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:39.927294 containerd[1465]: 2025-07-11 00:27:39.911 [INFO][6251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:27:39.927294 containerd[1465]: 2025-07-11 00:27:39.919 [WARNING][6251] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" HandleID="k8s-pod-network.7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Workload="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0"
Jul 11 00:27:39.927294 containerd[1465]: 2025-07-11 00:27:39.919 [INFO][6251] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" HandleID="k8s-pod-network.7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Workload="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0"
Jul 11 00:27:39.927294 containerd[1465]: 2025-07-11 00:27:39.921 [INFO][6251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:27:39.927294 containerd[1465]: 2025-07-11 00:27:39.924 [INFO][6241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257"
Jul 11 00:27:39.927294 containerd[1465]: time="2025-07-11T00:27:39.927251932Z" level=info msg="TearDown network for sandbox \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\" successfully"
Jul 11 00:27:39.927294 containerd[1465]: time="2025-07-11T00:27:39.927276818Z" level=info msg="StopPodSandbox for \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\" returns successfully"
Jul 11 00:27:39.927950 containerd[1465]: time="2025-07-11T00:27:39.927774120Z" level=info msg="RemovePodSandbox for \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\""
Jul 11 00:27:39.927950 containerd[1465]: time="2025-07-11T00:27:39.927797955Z" level=info msg="Forcibly stopping sandbox \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\""
Jul 11 00:27:40.010361 containerd[1465]: 2025-07-11 00:27:39.970 [WARNING][6268] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--msvlj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"73fc6509-dcba-4609-91c7-d051cb3bbfc4", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e6ef8d93f1975aefd1f8ee540b2a00d3fed07305474c13c54848801ced7bc01", Pod:"coredns-674b8bbfcf-msvlj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1aa84b469b9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:27:40.010361 containerd[1465]: 2025-07-11 00:27:39.970 [INFO][6268] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257"
Jul 11 00:27:40.010361 containerd[1465]: 2025-07-11 00:27:39.970 [INFO][6268] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" iface="eth0" netns=""
Jul 11 00:27:40.010361 containerd[1465]: 2025-07-11 00:27:39.970 [INFO][6268] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257"
Jul 11 00:27:40.010361 containerd[1465]: 2025-07-11 00:27:39.970 [INFO][6268] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257"
Jul 11 00:27:40.010361 containerd[1465]: 2025-07-11 00:27:39.994 [INFO][6277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" HandleID="k8s-pod-network.7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Workload="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0"
Jul 11 00:27:40.010361 containerd[1465]: 2025-07-11 00:27:39.994 [INFO][6277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:27:40.010361 containerd[1465]: 2025-07-11 00:27:39.994 [INFO][6277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:27:40.010361 containerd[1465]: 2025-07-11 00:27:40.001 [WARNING][6277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" HandleID="k8s-pod-network.7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Workload="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0"
Jul 11 00:27:40.010361 containerd[1465]: 2025-07-11 00:27:40.001 [INFO][6277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" HandleID="k8s-pod-network.7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257" Workload="localhost-k8s-coredns--674b8bbfcf--msvlj-eth0"
Jul 11 00:27:40.010361 containerd[1465]: 2025-07-11 00:27:40.003 [INFO][6277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:27:40.010361 containerd[1465]: 2025-07-11 00:27:40.007 [INFO][6268] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257"
Jul 11 00:27:40.011681 containerd[1465]: time="2025-07-11T00:27:40.010391484Z" level=info msg="TearDown network for sandbox \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\" successfully"
Jul 11 00:27:40.015861 containerd[1465]: time="2025-07-11T00:27:40.015811229Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 11 00:27:40.015861 containerd[1465]: time="2025-07-11T00:27:40.015882012Z" level=info msg="RemovePodSandbox \"7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257\" returns successfully"
Jul 11 00:27:43.690303 systemd[1]: Started sshd@17-10.0.0.159:22-10.0.0.1:42120.service - OpenSSH per-connection server daemon (10.0.0.1:42120).
Jul 11 00:27:43.727163 sshd[6289]: Accepted publickey for core from 10.0.0.1 port 42120 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:27:43.728902 sshd[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:27:43.733204 systemd-logind[1445]: New session 18 of user core.
Jul 11 00:27:43.740744 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 11 00:27:43.883799 sshd[6289]: pam_unix(sshd:session): session closed for user core
Jul 11 00:27:43.892007 systemd[1]: sshd@17-10.0.0.159:22-10.0.0.1:42120.service: Deactivated successfully.
Jul 11 00:27:43.894196 systemd[1]: session-18.scope: Deactivated successfully.
Jul 11 00:27:43.895771 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit.
Jul 11 00:27:43.901884 systemd[1]: Started sshd@18-10.0.0.159:22-10.0.0.1:42134.service - OpenSSH per-connection server daemon (10.0.0.1:42134).
Jul 11 00:27:43.903011 systemd-logind[1445]: Removed session 18.
Jul 11 00:27:43.932007 sshd[6304]: Accepted publickey for core from 10.0.0.1 port 42134 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:27:43.933696 sshd[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:27:43.937897 systemd-logind[1445]: New session 19 of user core.
Jul 11 00:27:43.945747 systemd[1]: Started session-19.scope - Session 19 of User core.
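Each address release in the teardown entries above is bracketed by "About to acquire host-wide IPAM lock" / "Acquired host-wide IPAM lock" / "Released host-wide IPAM lock", serializing all IPAM mutations on the node. The sketch below only illustrates that lock-around-release pattern with an in-process mutex; it is not Calico's implementation, which coordinates across plugin processes:

    package main

    import (
        "fmt"
        "sync"
    )

    var ipamLock sync.Mutex // stand-in for the host-wide IPAM lock

    func releaseAddress(handleID string) {
        fmt.Println("About to acquire host-wide IPAM lock.")
        ipamLock.Lock()
        fmt.Println("Acquired host-wide IPAM lock.")
        defer func() {
            ipamLock.Unlock()
            fmt.Println("Released host-wide IPAM lock.")
        }()
        // Release by handle, then by workload ID, as the plugin logs above.
        fmt.Println("Releasing address using handleID", handleID)
    }

    func main() {
        releaseAddress("k8s-pod-network.7374d5a058c629df0c5cda78a47217a9d319e877dcbd1ae405718f1fec34d257")
    }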
Jul 11 00:27:44.143755 sshd[6304]: pam_unix(sshd:session): session closed for user core
Jul 11 00:27:44.163125 systemd[1]: sshd@18-10.0.0.159:22-10.0.0.1:42134.service: Deactivated successfully.
Jul 11 00:27:44.165741 systemd[1]: session-19.scope: Deactivated successfully.
Jul 11 00:27:44.167553 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit.
Jul 11 00:27:44.176816 systemd[1]: Started sshd@19-10.0.0.159:22-10.0.0.1:42136.service - OpenSSH per-connection server daemon (10.0.0.1:42136).
Jul 11 00:27:44.177970 systemd-logind[1445]: Removed session 19.
Jul 11 00:27:44.217130 sshd[6318]: Accepted publickey for core from 10.0.0.1 port 42136 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:27:44.218957 sshd[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:27:44.223341 systemd-logind[1445]: New session 20 of user core.
Jul 11 00:27:44.230776 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 11 00:27:44.783843 systemd[1]: run-containerd-runc-k8s.io-d924ceeb6ef2ae6c1a3040803b4c14b9d2cfc4afb5a34ff2b32959d5892f9cfc-runc.tamRrl.mount: Deactivated successfully.
Jul 11 00:27:45.058635 sshd[6318]: pam_unix(sshd:session): session closed for user core
Jul 11 00:27:45.070338 systemd[1]: sshd@19-10.0.0.159:22-10.0.0.1:42136.service: Deactivated successfully.
Jul 11 00:27:45.076935 systemd[1]: session-20.scope: Deactivated successfully.
Jul 11 00:27:45.082907 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit.
Jul 11 00:27:45.087950 systemd[1]: Started sshd@20-10.0.0.159:22-10.0.0.1:42148.service - OpenSSH per-connection server daemon (10.0.0.1:42148).
Jul 11 00:27:45.092561 systemd-logind[1445]: Removed session 20.
Jul 11 00:27:45.121958 kubelet[2560]: I0711 00:27:45.121900 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 11 00:27:45.126622 sshd[6361]: Accepted publickey for core from 10.0.0.1 port 42148 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:27:45.128754 sshd[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:27:45.134267 systemd-logind[1445]: New session 21 of user core.
Jul 11 00:27:45.142953 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 11 00:27:45.452705 sshd[6361]: pam_unix(sshd:session): session closed for user core
Jul 11 00:27:45.461759 systemd[1]: sshd@20-10.0.0.159:22-10.0.0.1:42148.service: Deactivated successfully.
Jul 11 00:27:45.464143 systemd[1]: session-21.scope: Deactivated successfully.
Jul 11 00:27:45.464923 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit.
Jul 11 00:27:45.477022 systemd[1]: Started sshd@21-10.0.0.159:22-10.0.0.1:42164.service - OpenSSH per-connection server daemon (10.0.0.1:42164).
Jul 11 00:27:45.478772 systemd-logind[1445]: Removed session 21.
Jul 11 00:27:45.514202 sshd[6375]: Accepted publickey for core from 10.0.0.1 port 42164 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:27:45.516269 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:27:45.520975 systemd-logind[1445]: New session 22 of user core.
Jul 11 00:27:45.528792 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 11 00:27:45.646571 sshd[6375]: pam_unix(sshd:session): session closed for user core
Jul 11 00:27:45.650456 systemd[1]: sshd@21-10.0.0.159:22-10.0.0.1:42164.service: Deactivated successfully.
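Two timestamp formats interleave throughout this log: the journal prefix ("Jul 11 00:27:44.143755", no year, microsecond precision) and containerd's RFC 3339 time="2025-07-11T00:27:39.828452373Z" fields. A small Go sketch for parsing both with the standard time package (the layouts are assumptions matched to the samples above):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Journal-style prefix: month, day, time with microseconds; the year
        // is absent and defaults to year 0, so callers must supply it.
        j, err := time.Parse("Jan _2 15:04:05.000000", "Jul 11 00:27:44.143755")
        if err != nil {
            panic(err)
        }
        // containerd's structured time= fields use RFC 3339 with nanoseconds.
        c, err := time.Parse(time.RFC3339Nano, "2025-07-11T00:27:39.828452373Z")
        if err != nil {
            panic(err)
        }
        fmt.Println(j.Format("15:04:05.000000"), c.Format(time.RFC3339Nano))
    }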
Jul 11 00:27:45.652562 systemd[1]: session-22.scope: Deactivated successfully.
Jul 11 00:27:45.653209 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
Jul 11 00:27:45.653982 systemd-logind[1445]: Removed session 22.
Jul 11 00:27:47.504218 kubelet[2560]: E0711 00:27:47.504158 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:27:49.827843 kubelet[2560]: I0711 00:27:49.827244 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 11 00:27:50.051545 containerd[1465]: time="2025-07-11T00:27:50.051476948Z" level=info msg="StopContainer for \"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981\" with timeout 30 (s)"
Jul 11 00:27:50.053812 containerd[1465]: time="2025-07-11T00:27:50.053767930Z" level=info msg="Stop container \"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981\" with signal terminated"
Jul 11 00:27:50.074172 systemd[1]: cri-containerd-4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981.scope: Deactivated successfully.
Jul 11 00:27:50.074807 systemd[1]: cri-containerd-4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981.scope: Consumed 1.002s CPU time.
Jul 11 00:27:50.112076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981-rootfs.mount: Deactivated successfully.
Jul 11 00:27:50.120828 containerd[1465]: time="2025-07-11T00:27:50.108199170Z" level=info msg="shim disconnected" id=4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981 namespace=k8s.io
Jul 11 00:27:50.129700 containerd[1465]: time="2025-07-11T00:27:50.129551217Z" level=warning msg="cleaning up after shim disconnected" id=4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981 namespace=k8s.io
Jul 11 00:27:50.129700 containerd[1465]: time="2025-07-11T00:27:50.129632491Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:27:50.167313 containerd[1465]: time="2025-07-11T00:27:50.167238043Z" level=info msg="StopContainer for \"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981\" returns successfully"
Jul 11 00:27:50.168108 containerd[1465]: time="2025-07-11T00:27:50.168047995Z" level=info msg="StopPodSandbox for \"1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6\""
Jul 11 00:27:50.168108 containerd[1465]: time="2025-07-11T00:27:50.168104873Z" level=info msg="Container to stop \"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:27:50.171740 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6-shm.mount: Deactivated successfully.
Jul 11 00:27:50.180185 systemd[1]: cri-containerd-1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6.scope: Deactivated successfully.
Jul 11 00:27:50.216447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6-rootfs.mount: Deactivated successfully.
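The kubelet "Nameserver limits exceeded" errors above come from resolv.conf handling: at most three nameservers are honored, so kubelet truncates the host's list and logs the applied line (1.1.1.1 1.0.0.1 8.8.8.8 here). A sketch of that truncation over a host resolv.conf, assuming the standard "nameserver <addr>" line format:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        // Same cap kubelet applies; extra entries are dropped with a warning.
        const maxNameservers = 3
        if len(servers) > maxNameservers {
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }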
Jul 11 00:27:50.219883 containerd[1465]: time="2025-07-11T00:27:50.219638053Z" level=info msg="shim disconnected" id=1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6 namespace=k8s.io
Jul 11 00:27:50.219883 containerd[1465]: time="2025-07-11T00:27:50.219704719Z" level=warning msg="cleaning up after shim disconnected" id=1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6 namespace=k8s.io
Jul 11 00:27:50.219883 containerd[1465]: time="2025-07-11T00:27:50.219714567Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:27:50.329154 systemd-networkd[1391]: cali94d89816dd3: Link DOWN
Jul 11 00:27:50.329175 systemd-networkd[1391]: cali94d89816dd3: Lost carrier
Jul 11 00:27:50.414553 containerd[1465]: 2025-07-11 00:27:50.326 [INFO][6467] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6"
Jul 11 00:27:50.414553 containerd[1465]: 2025-07-11 00:27:50.327 [INFO][6467] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" iface="eth0" netns="/var/run/netns/cni-5ce2d484-9122-f7c7-cbf1-a1833785a339"
Jul 11 00:27:50.414553 containerd[1465]: 2025-07-11 00:27:50.327 [INFO][6467] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" iface="eth0" netns="/var/run/netns/cni-5ce2d484-9122-f7c7-cbf1-a1833785a339"
Jul 11 00:27:50.414553 containerd[1465]: 2025-07-11 00:27:50.337 [INFO][6467] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" after=10.313914ms iface="eth0" netns="/var/run/netns/cni-5ce2d484-9122-f7c7-cbf1-a1833785a339"
Jul 11 00:27:50.414553 containerd[1465]: 2025-07-11 00:27:50.337 [INFO][6467] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6"
Jul 11 00:27:50.414553 containerd[1465]: 2025-07-11 00:27:50.337 [INFO][6467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6"
Jul 11 00:27:50.414553 containerd[1465]: 2025-07-11 00:27:50.364 [INFO][6478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" HandleID="k8s-pod-network.1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0"
Jul 11 00:27:50.414553 containerd[1465]: 2025-07-11 00:27:50.364 [INFO][6478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:27:50.414553 containerd[1465]: 2025-07-11 00:27:50.364 [INFO][6478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:27:50.414553 containerd[1465]: 2025-07-11 00:27:50.404 [INFO][6478] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" HandleID="k8s-pod-network.1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0"
Jul 11 00:27:50.414553 containerd[1465]: 2025-07-11 00:27:50.404 [INFO][6478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" HandleID="k8s-pod-network.1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6" Workload="localhost-k8s-calico--apiserver--6f647b777b--qhnsp-eth0"
Jul 11 00:27:50.414553 containerd[1465]: 2025-07-11 00:27:50.406 [INFO][6478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:27:50.414553 containerd[1465]: 2025-07-11 00:27:50.410 [INFO][6467] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6"
Jul 11 00:27:50.415310 containerd[1465]: time="2025-07-11T00:27:50.414884548Z" level=info msg="TearDown network for sandbox \"1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6\" successfully"
Jul 11 00:27:50.415310 containerd[1465]: time="2025-07-11T00:27:50.414925705Z" level=info msg="StopPodSandbox for \"1a9af3873cb7c7c2d6501ee2a258757429a5eb1c33bdf2fe68189bf10987c4b6\" returns successfully"
Jul 11 00:27:50.419704 systemd[1]: run-netns-cni\x2d5ce2d484\x2d9122\x2df7c7\x2dcbf1\x2da1833785a339.mount: Deactivated successfully.
Jul 11 00:27:50.522287 kubelet[2560]: I0711 00:27:50.522217 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d4787e60-b0e6-42f0-b414-39732f919000-calico-apiserver-certs\") pod \"d4787e60-b0e6-42f0-b414-39732f919000\" (UID: \"d4787e60-b0e6-42f0-b414-39732f919000\") "
Jul 11 00:27:50.522287 kubelet[2560]: I0711 00:27:50.522292 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjsnh\" (UniqueName: \"kubernetes.io/projected/d4787e60-b0e6-42f0-b414-39732f919000-kube-api-access-bjsnh\") pod \"d4787e60-b0e6-42f0-b414-39732f919000\" (UID: \"d4787e60-b0e6-42f0-b414-39732f919000\") "
Jul 11 00:27:50.547520 systemd[1]: var-lib-kubelet-pods-d4787e60\x2db0e6\x2d42f0\x2db414\x2d39732f919000-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbjsnh.mount: Deactivated successfully.
Jul 11 00:27:50.547710 systemd[1]: var-lib-kubelet-pods-d4787e60\x2db0e6\x2d42f0\x2db414\x2d39732f919000-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Jul 11 00:27:50.554636 kubelet[2560]: I0711 00:27:50.553094 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4787e60-b0e6-42f0-b414-39732f919000-kube-api-access-bjsnh" (OuterVolumeSpecName: "kube-api-access-bjsnh") pod "d4787e60-b0e6-42f0-b414-39732f919000" (UID: "d4787e60-b0e6-42f0-b414-39732f919000"). InnerVolumeSpecName "kube-api-access-bjsnh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 11 00:27:50.554753 kubelet[2560]: I0711 00:27:50.553088 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4787e60-b0e6-42f0-b414-39732f919000-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "d4787e60-b0e6-42f0-b414-39732f919000" (UID: "d4787e60-b0e6-42f0-b414-39732f919000"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 11 00:27:50.622673 kubelet[2560]: I0711 00:27:50.622601 2560 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bjsnh\" (UniqueName: \"kubernetes.io/projected/d4787e60-b0e6-42f0-b414-39732f919000-kube-api-access-bjsnh\") on node \"localhost\" DevicePath \"\""
Jul 11 00:27:50.622673 kubelet[2560]: I0711 00:27:50.622653 2560 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d4787e60-b0e6-42f0-b414-39732f919000-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
Jul 11 00:27:50.672951 systemd[1]: Started sshd@22-10.0.0.159:22-10.0.0.1:41174.service - OpenSSH per-connection server daemon (10.0.0.1:41174).
Jul 11 00:27:50.704927 sshd[6494]: Accepted publickey for core from 10.0.0.1 port 41174 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:27:50.706640 sshd[6494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:27:50.710785 systemd-logind[1445]: New session 23 of user core.
Jul 11 00:27:50.715755 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 11 00:27:50.981771 systemd[1]: Removed slice kubepods-besteffort-podd4787e60_b0e6_42f0_b414_39732f919000.slice - libcontainer container kubepods-besteffort-podd4787e60_b0e6_42f0_b414_39732f919000.slice.
Jul 11 00:27:50.981899 systemd[1]: kubepods-besteffort-podd4787e60_b0e6_42f0_b414_39732f919000.slice: Consumed 1.029s CPU time.
Jul 11 00:27:51.004663 kubelet[2560]: I0711 00:27:51.004506 2560 scope.go:117] "RemoveContainer" containerID="4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981"
Jul 11 00:27:51.014122 containerd[1465]: time="2025-07-11T00:27:51.013997536Z" level=info msg="RemoveContainer for \"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981\""
Jul 11 00:27:51.021848 containerd[1465]: time="2025-07-11T00:27:51.021268585Z" level=info msg="RemoveContainer for \"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981\" returns successfully"
Jul 11 00:27:51.037899 kubelet[2560]: I0711 00:27:51.037784 2560 scope.go:117] "RemoveContainer" containerID="4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981"
Jul 11 00:27:51.055304 sshd[6494]: pam_unix(sshd:session): session closed for user core
Jul 11 00:27:51.063593 containerd[1465]: time="2025-07-11T00:27:51.047690003Z" level=error msg="ContainerStatus for \"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981\": not found"
Jul 11 00:27:51.062877 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
Jul 11 00:27:51.067922 systemd[1]: sshd@22-10.0.0.159:22-10.0.0.1:41174.service: Deactivated successfully.
Jul 11 00:27:51.070464 systemd[1]: session-23.scope: Deactivated successfully.
Jul 11 00:27:51.073381 systemd-logind[1445]: Removed session 23.
Jul 11 00:27:51.076149 kubelet[2560]: E0711 00:27:51.075989 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981\": not found" containerID="4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981"
Jul 11 00:27:51.076989 kubelet[2560]: I0711 00:27:51.076952 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981"} err="failed to get container status \"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c413424f24cbc292dc20eb4f6e946b97c1b45cf4a1c0f9c0b23f02055893981\": not found"
Jul 11 00:27:52.501449 kubelet[2560]: I0711 00:27:52.501392 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4787e60-b0e6-42f0-b414-39732f919000" path="/var/lib/kubelet/pods/d4787e60-b0e6-42f0-b414-39732f919000/volumes"
Jul 11 00:27:55.498796 kubelet[2560]: E0711 00:27:55.498742 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:27:56.075945 systemd[1]: Started sshd@23-10.0.0.159:22-10.0.0.1:41182.service - OpenSSH per-connection server daemon (10.0.0.1:41182).
Jul 11 00:27:56.117025 sshd[6523]: Accepted publickey for core from 10.0.0.1 port 41182 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:27:56.120443 sshd[6523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:27:56.127798 systemd-logind[1445]: New session 24 of user core.
Jul 11 00:27:56.135755 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 11 00:27:56.399422 sshd[6523]: pam_unix(sshd:session): session closed for user core
Jul 11 00:27:56.404671 systemd[1]: sshd@23-10.0.0.159:22-10.0.0.1:41182.service: Deactivated successfully.
Jul 11 00:27:56.407442 systemd[1]: session-24.scope: Deactivated successfully.
Jul 11 00:27:56.408180 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit.
Jul 11 00:27:56.409324 systemd-logind[1445]: Removed session 24.
Jul 11 00:27:58.225017 containerd[1465]: time="2025-07-11T00:27:58.224953147Z" level=info msg="StopContainer for \"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f\" with timeout 30 (s)"
Jul 11 00:27:58.226062 containerd[1465]: time="2025-07-11T00:27:58.226035574Z" level=info msg="Stop container \"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f\" with signal terminated"
Jul 11 00:27:58.248707 systemd[1]: cri-containerd-027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f.scope: Deactivated successfully.
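"StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" is the usual graceful-stop contract: SIGTERM first, escalation to SIGKILL only if the process outlives the grace period. A standalone sketch of that pattern (not containerd's code; a sleep process stands in for the workload):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        // Ask politely first, as in "with signal terminated".
        cmd.Process.Signal(syscall.SIGTERM)

        select {
        case <-done:
            fmt.Println("exited within the grace period")
        case <-time.After(30 * time.Second): // the "timeout 30 (s)" in the log
            cmd.Process.Kill() // escalate to SIGKILL
            <-done
            fmt.Println("killed after the grace period")
        }
    }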
Jul 11 00:27:58.273171 containerd[1465]: time="2025-07-11T00:27:58.273093750Z" level=info msg="shim disconnected" id=027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f namespace=k8s.io
Jul 11 00:27:58.273171 containerd[1465]: time="2025-07-11T00:27:58.273162471Z" level=warning msg="cleaning up after shim disconnected" id=027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f namespace=k8s.io
Jul 11 00:27:58.273171 containerd[1465]: time="2025-07-11T00:27:58.273172349Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:27:58.275304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f-rootfs.mount: Deactivated successfully.
Jul 11 00:27:58.479376 containerd[1465]: time="2025-07-11T00:27:58.479188486Z" level=info msg="StopContainer for \"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f\" returns successfully"
Jul 11 00:27:58.479933 containerd[1465]: time="2025-07-11T00:27:58.479901011Z" level=info msg="StopPodSandbox for \"4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54\""
Jul 11 00:27:58.480042 containerd[1465]: time="2025-07-11T00:27:58.479959031Z" level=info msg="Container to stop \"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 00:27:58.484856 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54-shm.mount: Deactivated successfully.
Jul 11 00:27:58.488751 systemd[1]: cri-containerd-4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54.scope: Deactivated successfully.
Jul 11 00:27:58.517404 containerd[1465]: time="2025-07-11T00:27:58.517170444Z" level=info msg="shim disconnected" id=4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54 namespace=k8s.io
Jul 11 00:27:58.517404 containerd[1465]: time="2025-07-11T00:27:58.517234026Z" level=warning msg="cleaning up after shim disconnected" id=4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54 namespace=k8s.io
Jul 11 00:27:58.517404 containerd[1465]: time="2025-07-11T00:27:58.517246289Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 00:27:58.518703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54-rootfs.mount: Deactivated successfully.
Jul 11 00:27:58.671389 systemd-networkd[1391]: cali1e2b514fea0: Link DOWN
Jul 11 00:27:58.671401 systemd-networkd[1391]: cali1e2b514fea0: Lost carrier
Jul 11 00:27:58.765577 containerd[1465]: 2025-07-11 00:27:58.669 [INFO][6633] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54"
Jul 11 00:27:58.765577 containerd[1465]: 2025-07-11 00:27:58.669 [INFO][6633] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" iface="eth0" netns="/var/run/netns/cni-3f4deaca-efd0-2c7d-31ee-b599ecaa9617"
Jul 11 00:27:58.765577 containerd[1465]: 2025-07-11 00:27:58.670 [INFO][6633] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" iface="eth0" netns="/var/run/netns/cni-3f4deaca-efd0-2c7d-31ee-b599ecaa9617"
Jul 11 00:27:58.765577 containerd[1465]: 2025-07-11 00:27:58.682 [INFO][6633] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" after=12.939989ms iface="eth0" netns="/var/run/netns/cni-3f4deaca-efd0-2c7d-31ee-b599ecaa9617"
Jul 11 00:27:58.765577 containerd[1465]: 2025-07-11 00:27:58.682 [INFO][6633] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54"
Jul 11 00:27:58.765577 containerd[1465]: 2025-07-11 00:27:58.682 [INFO][6633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54"
Jul 11 00:27:58.765577 containerd[1465]: 2025-07-11 00:27:58.707 [INFO][6647] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" HandleID="k8s-pod-network.4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0"
Jul 11 00:27:58.765577 containerd[1465]: 2025-07-11 00:27:58.707 [INFO][6647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:27:58.765577 containerd[1465]: 2025-07-11 00:27:58.707 [INFO][6647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:27:58.765577 containerd[1465]: 2025-07-11 00:27:58.757 [INFO][6647] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" HandleID="k8s-pod-network.4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0"
Jul 11 00:27:58.765577 containerd[1465]: 2025-07-11 00:27:58.757 [INFO][6647] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" HandleID="k8s-pod-network.4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54" Workload="localhost-k8s-calico--apiserver--6f647b777b--hj2zv-eth0"
Jul 11 00:27:58.765577 containerd[1465]: 2025-07-11 00:27:58.758 [INFO][6647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:27:58.765577 containerd[1465]: 2025-07-11 00:27:58.761 [INFO][6633] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54"
Jul 11 00:27:58.771293 systemd[1]: run-netns-cni\x2d3f4deaca\x2defd0\x2d2c7d\x2d31ee\x2db599ecaa9617.mount: Deactivated successfully.
Jul 11 00:27:58.776718 containerd[1465]: time="2025-07-11T00:27:58.769079041Z" level=info msg="TearDown network for sandbox \"4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54\" successfully"
Jul 11 00:27:58.776718 containerd[1465]: time="2025-07-11T00:27:58.776693472Z" level=info msg="StopPodSandbox for \"4c3620b231bd77ab885314b9a835ded0db44271822c2226b377c41cab1660b54\" returns successfully"
Jul 11 00:27:58.882412 kubelet[2560]: I0711 00:27:58.882347 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc7sm\" (UniqueName: \"kubernetes.io/projected/ca320139-04b8-474f-b513-d5dae70779c9-kube-api-access-qc7sm\") pod \"ca320139-04b8-474f-b513-d5dae70779c9\" (UID: \"ca320139-04b8-474f-b513-d5dae70779c9\") "
Jul 11 00:27:58.882412 kubelet[2560]: I0711 00:27:58.882416 2560 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ca320139-04b8-474f-b513-d5dae70779c9-calico-apiserver-certs\") pod \"ca320139-04b8-474f-b513-d5dae70779c9\" (UID: \"ca320139-04b8-474f-b513-d5dae70779c9\") "
Jul 11 00:27:58.888977 kubelet[2560]: I0711 00:27:58.888922 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca320139-04b8-474f-b513-d5dae70779c9-kube-api-access-qc7sm" (OuterVolumeSpecName: "kube-api-access-qc7sm") pod "ca320139-04b8-474f-b513-d5dae70779c9" (UID: "ca320139-04b8-474f-b513-d5dae70779c9"). InnerVolumeSpecName "kube-api-access-qc7sm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 11 00:27:58.889101 kubelet[2560]: I0711 00:27:58.889030 2560 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca320139-04b8-474f-b513-d5dae70779c9-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "ca320139-04b8-474f-b513-d5dae70779c9" (UID: "ca320139-04b8-474f-b513-d5dae70779c9"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 11 00:27:58.891660 systemd[1]: var-lib-kubelet-pods-ca320139\x2d04b8\x2d474f\x2db513\x2dd5dae70779c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqc7sm.mount: Deactivated successfully.
Jul 11 00:27:58.891825 systemd[1]: var-lib-kubelet-pods-ca320139\x2d04b8\x2d474f\x2db513\x2dd5dae70779c9-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
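The mount-unit names above encode the volume paths with systemd's unit-name escaping: the leading "/" is dropped, remaining "/" separators become "-", and characters such as "-" and "~" become \x2d and \x7e. A sketch covering just the characters that appear in this log (the real systemd-escape handles more cases):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath applies the subset of systemd path escaping seen above:
    // '-' -> \x2d and '~' -> \x7e within each path segment, '/' -> '-'.
    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        r := strings.NewReplacer("-", `\x2d`, "~", `\x7e`)
        segs := strings.Split(p, "/")
        for i, s := range segs {
            segs[i] = r.Replace(s)
        }
        return strings.Join(segs, "-")
    }

    func main() {
        // Reproduces the secret-volume unit name logged above.
        p := "/var/lib/kubelet/pods/ca320139-04b8-474f-b513-d5dae70779c9/volumes/kubernetes.io~secret/calico-apiserver-certs"
        fmt.Println(escapePath(p) + ".mount")
    }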
Jul 11 00:27:58.980583 kubelet[2560]: I0711 00:27:58.980515 2560 scope.go:117] "RemoveContainer" containerID="027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f"
Jul 11 00:27:58.985094 kubelet[2560]: I0711 00:27:58.985042 2560 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qc7sm\" (UniqueName: \"kubernetes.io/projected/ca320139-04b8-474f-b513-d5dae70779c9-kube-api-access-qc7sm\") on node \"localhost\" DevicePath \"\""
Jul 11 00:27:58.985094 kubelet[2560]: I0711 00:27:58.985068 2560 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ca320139-04b8-474f-b513-d5dae70779c9-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
Jul 11 00:27:58.985371 containerd[1465]: time="2025-07-11T00:27:58.985199209Z" level=info msg="RemoveContainer for \"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f\""
Jul 11 00:27:58.988366 systemd[1]: Removed slice kubepods-besteffort-podca320139_04b8_474f_b513_d5dae70779c9.slice - libcontainer container kubepods-besteffort-podca320139_04b8_474f_b513_d5dae70779c9.slice.
Jul 11 00:27:58.988461 systemd[1]: kubepods-besteffort-podca320139_04b8_474f_b513_d5dae70779c9.slice: Consumed 1.016s CPU time.
Jul 11 00:27:58.990638 containerd[1465]: time="2025-07-11T00:27:58.990559646Z" level=info msg="RemoveContainer for \"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f\" returns successfully"
Jul 11 00:27:58.990897 kubelet[2560]: I0711 00:27:58.990857 2560 scope.go:117] "RemoveContainer" containerID="027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f"
Jul 11 00:27:58.991300 containerd[1465]: time="2025-07-11T00:27:58.991249607Z" level=error msg="ContainerStatus for \"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f\": not found"
Jul 11 00:27:58.991458 kubelet[2560]: E0711 00:27:58.991433 2560 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f\": not found" containerID="027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f"
Jul 11 00:27:58.991515 kubelet[2560]: I0711 00:27:58.991465 2560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f"} err="failed to get container status \"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f\": rpc error: code = NotFound desc = an error occurred when try to find container \"027df56831d1b4d5d76d96a53bc8b89b7f825dec1bc813191ba2cd2e27c5551f\": not found"
Jul 11 00:28:00.514075 kubelet[2560]: I0711 00:28:00.514015 2560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca320139-04b8-474f-b513-d5dae70779c9" path="/var/lib/kubelet/pods/ca320139-04b8-474f-b513-d5dae70779c9/volumes"
Jul 11 00:28:01.414406 systemd[1]: Started sshd@24-10.0.0.159:22-10.0.0.1:56566.service - OpenSSH per-connection server daemon (10.0.0.1:56566).
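The error pair above ("ContainerStatus ... failed" / "DeleteContainer returned error", both rpc error: code = NotFound) is benign: by the time kubelet re-queries status, the container has already been removed, and a NotFound on this path just means there is nothing left to delete. A sketch of that status-code check using the real gRPC status and codes packages (the error value here is a stand-in for the CRI response):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    func main() {
        // Stand-in for the CRI response; the real error arrives over gRPC.
        err := status.Error(codes.NotFound, "an error occurred when try to find container: not found")
        if status.Code(err) == codes.NotFound {
            // Removal already happened; treat the delete as successful.
            fmt.Println("container already gone; nothing to delete")
        }
    }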
Jul 11 00:28:01.475016 sshd[6682]: Accepted publickey for core from 10.0.0.1 port 56566 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:28:01.477332 sshd[6682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:28:01.482884 systemd-logind[1445]: New session 25 of user core.
Jul 11 00:28:01.488757 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 11 00:28:01.641666 sshd[6682]: pam_unix(sshd:session): session closed for user core
Jul 11 00:28:01.646542 systemd[1]: sshd@24-10.0.0.159:22-10.0.0.1:56566.service: Deactivated successfully.
Jul 11 00:28:01.650030 systemd[1]: session-25.scope: Deactivated successfully.
Jul 11 00:28:01.650846 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit.
Jul 11 00:28:01.651998 systemd-logind[1445]: Removed session 25.
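Each SSH connection in this log gets its own socket-activated unit whose instance name encodes a connection counter, the listening address, and the peer, e.g. sshd@24-10.0.0.159:22-10.0.0.1:56566.service. A sketch that splits such a name back into its parts (format inferred from the unit names in this log; it assumes the addresses themselves contain no "-"):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        unit := "sshd@24-10.0.0.159:22-10.0.0.1:56566.service"
        inst := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
        // Layout: counter-listenaddr:port-peeraddr:port
        parts := strings.SplitN(inst, "-", 3)
        if len(parts) == 3 {
            fmt.Printf("connection #%s listen=%s peer=%s\n", parts[0], parts[1], parts[2])
        }
    }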