Jul 11 00:12:38.886398 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 22:46:23 -00 2025
Jul 11 00:12:38.886419 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:12:38.886430 kernel: BIOS-provided physical RAM map:
Jul 11 00:12:38.886437 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 11 00:12:38.886443 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 11 00:12:38.886449 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 11 00:12:38.886456 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 11 00:12:38.886462 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 11 00:12:38.886468 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 11 00:12:38.886477 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 11 00:12:38.886483 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 11 00:12:38.886489 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 11 00:12:38.886495 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 11 00:12:38.886502 kernel: NX (Execute Disable) protection: active
Jul 11 00:12:38.886518 kernel: APIC: Static calls initialized
Jul 11 00:12:38.886528 kernel: SMBIOS 2.8 present.
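The BIOS-e820 table above is the firmware's physical RAM map exactly as the kernel received it. On a running x86 system the same map can be read back from sysfs; a minimal sketch, assuming the /sys/firmware/memmap interface is present and readable (typically root only):

```python
#!/usr/bin/env python3
# Reconstruct BIOS-e820-style lines from /sys/firmware/memmap.
# Each numbered subdirectory holds "start", "end" and "type" files.
import os

MEMMAP = "/sys/firmware/memmap"

def read(path):
    with open(path) as f:
        return f.read().strip()

entries = []
for n in sorted(os.listdir(MEMMAP), key=int):
    d = os.path.join(MEMMAP, n)
    entries.append((
        int(read(os.path.join(d, "start")), 16),
        int(read(os.path.join(d, "end")), 16),
        read(os.path.join(d, "type")),   # e.g. "System RAM", "Reserved"
    ))

for start, end, kind in sorted(entries):
    print(f"BIOS-e820: [mem {start:#018x}-{end:#018x}] {kind}")
```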
Jul 11 00:12:38.886548 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 11 00:12:38.886555 kernel: Hypervisor detected: KVM
Jul 11 00:12:38.886562 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 11 00:12:38.886569 kernel: kvm-clock: using sched offset of 3034754708 cycles
Jul 11 00:12:38.886575 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 11 00:12:38.886583 kernel: tsc: Detected 2794.748 MHz processor
Jul 11 00:12:38.886590 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 11 00:12:38.886597 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 11 00:12:38.886604 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 11 00:12:38.886614 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 11 00:12:38.886621 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 11 00:12:38.886628 kernel: Using GB pages for direct mapping
Jul 11 00:12:38.886635 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:12:38.886641 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 11 00:12:38.886648 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:12:38.886655 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:12:38.886662 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:12:38.886671 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 11 00:12:38.886678 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:12:38.886685 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:12:38.886692 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:12:38.886699 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:12:38.886706 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 11 00:12:38.886713 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 11 00:12:38.886723 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 11 00:12:38.886733 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 11 00:12:38.886740 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 11 00:12:38.886747 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 11 00:12:38.886754 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 11 00:12:38.886761 kernel: No NUMA configuration found
Jul 11 00:12:38.886768 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 11 00:12:38.886775 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 11 00:12:38.886785 kernel: Zone ranges:
Jul 11 00:12:38.886792 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 11 00:12:38.886799 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 11 00:12:38.886806 kernel: Normal empty
Jul 11 00:12:38.886813 kernel: Movable zone start for each node
Jul 11 00:12:38.886820 kernel: Early memory node ranges
Jul 11 00:12:38.886827 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 11 00:12:38.886835 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 11 00:12:38.886842 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 11 00:12:38.886851 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 11 00:12:38.886858 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 11 00:12:38.886866 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 11 00:12:38.886873 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 11 00:12:38.886880 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 11 00:12:38.886887 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 11 00:12:38.886894 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 11 00:12:38.886901 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 11 00:12:38.886908 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 11 00:12:38.886918 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 11 00:12:38.886925 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 11 00:12:38.886932 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 11 00:12:38.886939 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 11 00:12:38.886946 kernel: TSC deadline timer available
Jul 11 00:12:38.886953 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 11 00:12:38.886960 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 11 00:12:38.886967 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 11 00:12:38.886975 kernel: kvm-guest: setup PV sched yield
Jul 11 00:12:38.886984 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 11 00:12:38.886991 kernel: Booting paravirtualized kernel on KVM
Jul 11 00:12:38.886998 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 11 00:12:38.887006 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 11 00:12:38.887013 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 11 00:12:38.887020 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 11 00:12:38.887027 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 11 00:12:38.887034 kernel: kvm-guest: PV spinlocks enabled
Jul 11 00:12:38.887041 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 11 00:12:38.887052 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:12:38.887059 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:12:38.887067 kernel: random: crng init done
Jul 11 00:12:38.887074 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:12:38.887081 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:12:38.887088 kernel: Fallback order for Node 0: 0
Jul 11 00:12:38.887095 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 11 00:12:38.887102 kernel: Policy zone: DMA32
Jul 11 00:12:38.887112 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:12:38.887119 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22744K rodata, 42872K init, 2320K bss, 136900K reserved, 0K cma-reserved)
Jul 11 00:12:38.887126 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:12:38.887134 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 11 00:12:38.887141 kernel: ftrace: allocated 149 pages with 4 groups
Jul 11 00:12:38.887148 kernel: Dynamic Preempt: voluntary
Jul 11 00:12:38.887155 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:12:38.887162 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:12:38.887170 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:12:38.887180 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:12:38.887187 kernel: Rude variant of Tasks RCU enabled.
Jul 11 00:12:38.887194 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:12:38.887201 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:12:38.887208 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:12:38.887215 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 11 00:12:38.887223 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 00:12:38.887230 kernel: Console: colour VGA+ 80x25
Jul 11 00:12:38.887237 kernel: printk: console [ttyS0] enabled
Jul 11 00:12:38.887244 kernel: ACPI: Core revision 20230628
Jul 11 00:12:38.887253 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 11 00:12:38.887260 kernel: APIC: Switch to symmetric I/O mode setup
Jul 11 00:12:38.887268 kernel: x2apic enabled
Jul 11 00:12:38.887275 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 11 00:12:38.887282 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 11 00:12:38.887289 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 11 00:12:38.887297 kernel: kvm-guest: setup PV IPIs
Jul 11 00:12:38.887313 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 11 00:12:38.887321 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 11 00:12:38.887328 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 11 00:12:38.887336 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 11 00:12:38.887347 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 11 00:12:38.887356 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 11 00:12:38.887365 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 11 00:12:38.887373 kernel: Spectre V2 : Mitigation: Retpolines
Jul 11 00:12:38.887381 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 11 00:12:38.887390 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 11 00:12:38.887398 kernel: RETBleed: Mitigation: untrained return thunk
Jul 11 00:12:38.887405 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 11 00:12:38.887413 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 11 00:12:38.887421 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 11 00:12:38.887429 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 11 00:12:38.887436 kernel: x86/bugs: return thunk changed
Jul 11 00:12:38.887444 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 11 00:12:38.887454 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 11 00:12:38.887461 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 11 00:12:38.887469 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 11 00:12:38.887476 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 11 00:12:38.887484 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 11 00:12:38.887491 kernel: Freeing SMP alternatives memory: 32K
Jul 11 00:12:38.887499 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:12:38.887514 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 11 00:12:38.887521 kernel: landlock: Up and running.
Jul 11 00:12:38.887563 kernel: SELinux: Initializing.
Jul 11 00:12:38.887581 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:12:38.887589 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:12:38.887597 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 11 00:12:38.887604 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:12:38.887612 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:12:38.887620 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:12:38.887627 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 11 00:12:38.887635 kernel: ... version: 0
Jul 11 00:12:38.887646 kernel: ... bit width: 48
Jul 11 00:12:38.887654 kernel: ... generic registers: 6
Jul 11 00:12:38.887661 kernel: ... value mask: 0000ffffffffffff
Jul 11 00:12:38.887668 kernel: ... max period: 00007fffffffffff
Jul 11 00:12:38.887676 kernel: ... fixed-purpose events: 0
Jul 11 00:12:38.887683 kernel: ... event mask: 000000000000003f
Jul 11 00:12:38.887691 kernel: signal: max sigframe size: 1776
Jul 11 00:12:38.887698 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:12:38.887706 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 00:12:38.887716 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:12:38.887724 kernel: smpboot: x86: Booting SMP configuration:
Jul 11 00:12:38.887731 kernel: .... node #0, CPUs: #1 #2 #3
Jul 11 00:12:38.887738 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:12:38.887746 kernel: smpboot: Max logical packages: 1
Jul 11 00:12:38.887753 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 11 00:12:38.887761 kernel: devtmpfs: initialized
Jul 11 00:12:38.887768 kernel: x86/mm: Memory block size: 128MB
Jul 11 00:12:38.887776 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:12:38.887783 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:12:38.887793 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:12:38.887801 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:12:38.887808 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:12:38.887816 kernel: audit: type=2000 audit(1752192758.485:1): state=initialized audit_enabled=0 res=1
Jul 11 00:12:38.887823 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:12:38.887830 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 11 00:12:38.887838 kernel: cpuidle: using governor menu
Jul 11 00:12:38.887845 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:12:38.887855 kernel: dca service started, version 1.12.1
Jul 11 00:12:38.887863 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 11 00:12:38.887870 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 11 00:12:38.887878 kernel: PCI: Using configuration type 1 for base access
Jul 11 00:12:38.887885 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 11 00:12:38.887893 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:12:38.887900 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 00:12:38.887908 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:12:38.887915 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 00:12:38.887925 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:12:38.887932 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:12:38.887940 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:12:38.887947 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:12:38.887954 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 11 00:12:38.887962 kernel: ACPI: Interpreter enabled
Jul 11 00:12:38.887969 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 11 00:12:38.887976 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 11 00:12:38.887984 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 11 00:12:38.887994 kernel: PCI: Using E820 reservations for host bridge windows
Jul 11 00:12:38.888001 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 11 00:12:38.888009 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:12:38.888199 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:12:38.888335 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 11 00:12:38.888467 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 11 00:12:38.888477 kernel: PCI host bridge to bus 0000:00
Jul 11 00:12:38.888632 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 11 00:12:38.888759 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 11 00:12:38.888877 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 11 00:12:38.888993 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 11 00:12:38.889111 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 11 00:12:38.889227 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 11 00:12:38.889341 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:12:38.889490 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 11 00:12:38.889662 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 11 00:12:38.889792 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 11 00:12:38.889920 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 11 00:12:38.890045 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 11 00:12:38.890172 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 11 00:12:38.890324 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 11 00:12:38.890460 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 11 00:12:38.890624 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 11 00:12:38.890755 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 11 00:12:38.890890 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 11 00:12:38.891018 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 11 00:12:38.891144 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 11 00:12:38.891269 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 11 00:12:38.891416 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 11 00:12:38.891596 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 11 00:12:38.891731 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 11 00:12:38.891857 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 11 00:12:38.891984 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 11 00:12:38.892126 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 11 00:12:38.892260 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 11 00:12:38.892399 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 11 00:12:38.892557 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 11 00:12:38.892687 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 11 00:12:38.892819 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 11 00:12:38.892944 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 11 00:12:38.892954 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 11 00:12:38.892967 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 11 00:12:38.892974 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 11 00:12:38.892982 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 11 00:12:38.892989 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 11 00:12:38.892997 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 11 00:12:38.893005 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 11 00:12:38.893012 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 11 00:12:38.893020 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 11 00:12:38.893027 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 11 00:12:38.893037 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 11 00:12:38.893045 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 11 00:12:38.893052 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 11 00:12:38.893060 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 11 00:12:38.893067 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 11 00:12:38.893075 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 11 00:12:38.893083 kernel: iommu: Default domain type: Translated
Jul 11 00:12:38.893090 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 11 00:12:38.893098 kernel: PCI: Using ACPI for IRQ routing
Jul 11 00:12:38.893108 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 11 00:12:38.893115 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 11 00:12:38.893123 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 11 00:12:38.893247 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 11 00:12:38.893374 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 11 00:12:38.893498 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 11 00:12:38.893517 kernel: vgaarb: loaded
Jul 11 00:12:38.893525 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 11 00:12:38.893550 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 11 00:12:38.893558 kernel: clocksource: Switched to clocksource kvm-clock
Jul 11 00:12:38.893566 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:12:38.893574 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:12:38.893581 kernel: pnp: PnP ACPI init
Jul 11 00:12:38.893719 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 11 00:12:38.893731 kernel: pnp: PnP ACPI: found 6 devices
Jul 11 00:12:38.893739 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 11 00:12:38.893746 kernel: NET: Registered PF_INET protocol family
Jul 11 00:12:38.893758 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:12:38.893765 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:12:38.893773 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:12:38.893780 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:12:38.893788 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 00:12:38.893796 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:12:38.893803 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:12:38.893811 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:12:38.893821 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:12:38.893828 kernel: NET: Registered PF_XDP protocol family
Jul 11 00:12:38.893951 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 11 00:12:38.894068 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 11 00:12:38.894184 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 11 00:12:38.894299 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 11 00:12:38.894414 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 11 00:12:38.894556 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 11 00:12:38.894567 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:12:38.894580 kernel: Initialise system trusted keyrings
Jul 11 00:12:38.894587 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:12:38.894595 kernel: Key type asymmetric registered
Jul 11 00:12:38.894603 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:12:38.894610 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 11 00:12:38.894618 kernel: io scheduler mq-deadline registered
Jul 11 00:12:38.894625 kernel: io scheduler kyber registered
Jul 11 00:12:38.894633 kernel: io scheduler bfq registered
Jul 11 00:12:38.894640 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 11 00:12:38.894651 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 11 00:12:38.894659 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 11 00:12:38.894667 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 11 00:12:38.894674 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:12:38.894682 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 11 00:12:38.894689 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 11 00:12:38.894697 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 11 00:12:38.894705 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 11 00:12:38.894851 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 11 00:12:38.894866 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 11 00:12:38.894986 kernel: rtc_cmos 00:04: registered as rtc0
Jul 11 00:12:38.895106 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T00:12:38 UTC (1752192758)
Jul 11 00:12:38.895224 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 11 00:12:38.895234 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 11 00:12:38.895242 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:12:38.895250 kernel: Segment Routing with IPv6
Jul 11 00:12:38.895257 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:12:38.895269 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:12:38.895277 kernel: Key type dns_resolver registered
Jul 11 00:12:38.895284 kernel: IPI shorthand broadcast: enabled
Jul 11 00:12:38.895292 kernel: sched_clock: Marking stable (699002051, 104378198)->(815664382, -12284133)
Jul 11 00:12:38.895299 kernel: registered taskstats version 1
Jul 11 00:12:38.895307 kernel: Loading compiled-in X.509 certificates
Jul 11 00:12:38.895314 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 5956f0842928c96096c398e9db55919cd236a39f'
Jul 11 00:12:38.895322 kernel: Key type .fscrypt registered
Jul 11 00:12:38.895329 kernel: Key type fscrypt-provisioning registered
Jul 11 00:12:38.895339 kernel: ima: No TPM chip found, activating TPM-bypass!
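The PCI enumeration earlier in the log ([1234:1111] is QEMU's standard VGA; the [1af4:...] IDs are virtio devices: 1af4:1001 block, 1af4:1000 network, 1af4:1005 entropy) can be reproduced from sysfs on a running system. A minimal sketch:

```python
# Print "address: [vendor:device] class 0x......" for every PCI function,
# mirroring the "pci 0000:00:xx.x: [vvvv:dddd] ... class 0xcccccc" lines.
import glob
import os

def read_hex(path):
    with open(path) as f:
        return int(f.read().strip(), 16)

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    vendor = read_hex(os.path.join(dev, "vendor"))
    device = read_hex(os.path.join(dev, "device"))
    cls = read_hex(os.path.join(dev, "class"))
    print(f"{os.path.basename(dev)}: [{vendor:04x}:{device:04x}] class {cls:#08x}")
```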
Jul 11 00:12:38.895347 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:12:38.895355 kernel: ima: No architecture policies found
Jul 11 00:12:38.895362 kernel: clk: Disabling unused clocks
Jul 11 00:12:38.895370 kernel: Freeing unused kernel image (initmem) memory: 42872K
Jul 11 00:12:38.895377 kernel: Write protecting the kernel read-only data: 36864k
Jul 11 00:12:38.895385 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Jul 11 00:12:38.895392 kernel: Run /init as init process
Jul 11 00:12:38.895402 kernel: with arguments:
Jul 11 00:12:38.895410 kernel: /init
Jul 11 00:12:38.895417 kernel: with environment:
Jul 11 00:12:38.895424 kernel: HOME=/
Jul 11 00:12:38.895432 kernel: TERM=linux
Jul 11 00:12:38.895439 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:12:38.895449 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:12:38.895459 systemd[1]: Detected virtualization kvm.
Jul 11 00:12:38.895469 systemd[1]: Detected architecture x86-64.
Jul 11 00:12:38.895477 systemd[1]: Running in initrd.
Jul 11 00:12:38.895485 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:12:38.895493 systemd[1]: Hostname set to .
Jul 11 00:12:38.895501 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:12:38.895518 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:12:38.895526 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:12:38.895586 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:12:38.895599 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 00:12:38.895607 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:12:38.895627 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 00:12:38.895638 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 00:12:38.895648 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 00:12:38.895660 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 00:12:38.895668 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:12:38.895676 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:12:38.895685 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:12:38.895693 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:12:38.895701 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:12:38.895709 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:12:38.895717 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:12:38.895728 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:12:38.895736 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:12:38.895744 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
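The dev-disk-by\x2dlabel-... unit names above come from systemd's path escaping: "/" becomes "-" and other unsafe bytes become \xXX. A simplified sketch of what systemd-escape --path does (the real rules also special-case leading dots and empty paths):

```python
# Escape a filesystem path into a systemd device-unit name.
import string

SAFE = set(string.ascii_letters + string.digits + ":_.")

def escape_path(path):
    out = []
    for byte in path.strip("/").encode():
        char = chr(byte)
        if char == "/":
            out.append("-")               # path separators turn into dashes
        elif char in SAFE:
            out.append(char)
        else:
            out.append(f"\\x{byte:02x}")  # e.g. "-" itself becomes \x2d
    return "".join(out)

print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```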
Jul 11 00:12:38.895753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:12:38.895761 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:12:38.895769 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:12:38.895777 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:12:38.895786 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 00:12:38.895794 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:12:38.895805 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 00:12:38.895813 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:12:38.895821 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:12:38.895830 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:12:38.895838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:12:38.895846 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 00:12:38.895854 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:12:38.895863 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:12:38.895874 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:12:38.895902 systemd-journald[193]: Collecting audit messages is disabled.
Jul 11 00:12:38.895924 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:12:38.895933 systemd-journald[193]: Journal started
Jul 11 00:12:38.895955 systemd-journald[193]: Runtime Journal (/run/log/journal/f3bbd3f2525b4a3788d08c7ac8d9228c) is 6.0M, max 48.4M, 42.3M free.
Jul 11 00:12:38.883517 systemd-modules-load[194]: Inserted module 'overlay'
Jul 11 00:12:38.926797 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:12:38.926814 kernel: Bridge firewalling registered
Jul 11 00:12:38.926824 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:12:38.910633 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jul 11 00:12:38.927031 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:12:38.928989 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:12:38.937725 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:12:38.939455 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:12:38.940712 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:12:38.944673 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:12:38.956201 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:12:38.956475 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:12:38.958940 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:12:38.961728 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:12:38.965865 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:12:38.967884 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 00:12:38.984225 dracut-cmdline[228]: dracut-dracut-053
Jul 11 00:12:38.987359 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:12:38.998720 systemd-resolved[227]: Positive Trust Anchors:
Jul 11 00:12:38.998741 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:12:38.998773 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:12:39.001349 systemd-resolved[227]: Defaulting to hostname 'linux'.
Jul 11 00:12:39.002524 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:12:39.008097 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:12:39.072572 kernel: SCSI subsystem initialized
Jul 11 00:12:39.082563 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:12:39.094569 kernel: iscsi: registered transport (tcp)
Jul 11 00:12:39.115560 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:12:39.115584 kernel: QLogic iSCSI HBA Driver
Jul 11 00:12:39.169418 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:12:39.186820 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 00:12:39.213650 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:12:39.213706 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:12:39.214797 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 11 00:12:39.258555 kernel: raid6: avx2x4 gen() 29917 MB/s
Jul 11 00:12:39.275562 kernel: raid6: avx2x2 gen() 31356 MB/s
Jul 11 00:12:39.292585 kernel: raid6: avx2x1 gen() 25946 MB/s
Jul 11 00:12:39.292619 kernel: raid6: using algorithm avx2x2 gen() 31356 MB/s
Jul 11 00:12:39.310589 kernel: raid6: .... xor() 19981 MB/s, rmw enabled
Jul 11 00:12:39.310619 kernel: raid6: using avx2x2 recovery algorithm
Jul 11 00:12:39.330563 kernel: xor: automatically using best checksumming function avx
Jul 11 00:12:39.484577 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 00:12:39.498936 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:12:39.512709 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:12:39.525684 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Jul 11 00:12:39.530549 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:12:39.540716 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
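The dracut-cmdline entry above shows rootflags=rw and mount.usrflags=ro twice: the boot stack prepends its own copies ahead of the original BOOT_IMAGE line. That is harmless for simple key=value parameters, where the last occurrence effectively wins. A small illustrative parser (a simplification; some kernel parameters accumulate rather than override):

```python
# Parse a kernel command line into key/value parameters and bare flags.
import shlex

def parse_cmdline(text):
    params, flags = {}, []
    for token in shlex.split(text):
        if "=" in token:
            key, _, value = token.partition("=")
            params[key] = value  # later duplicates override earlier ones
        else:
            flags.append(token)
    return params, flags

with open("/proc/cmdline") as f:
    params, flags = parse_cmdline(f.read())
print(params.get("root"), params.get("rootflags"), flags)
```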
Jul 11 00:12:39.554892 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Jul 11 00:12:39.590181 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:12:39.602703 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:12:39.667518 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:12:39.680165 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 00:12:39.692230 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:12:39.692835 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:12:39.695809 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:12:39.696019 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:12:39.704560 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 11 00:12:39.706113 kernel: cryptd: max_cpu_qlen set to 1000
Jul 11 00:12:39.708882 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 00:12:39.719676 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 00:12:39.731042 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:12:39.737623 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 00:12:39.737650 kernel: GPT:9289727 != 19775487
Jul 11 00:12:39.737660 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 00:12:39.737670 kernel: GPT:9289727 != 19775487
Jul 11 00:12:39.737679 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 00:12:39.737689 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:12:39.738553 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 11 00:12:39.739563 kernel: libata version 3.00 loaded.
Jul 11 00:12:39.741564 kernel: AES CTR mode by8 optimization enabled
Jul 11 00:12:39.744355 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:12:39.744617 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:12:39.747352 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:12:39.749813 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:12:39.752703 kernel: ahci 0000:00:1f.2: version 3.0
Jul 11 00:12:39.752886 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 11 00:12:39.750754 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:12:39.754692 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:12:39.760116 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 11 00:12:39.760368 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 11 00:12:39.761910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:12:39.766579 kernel: scsi host0: ahci
Jul 11 00:12:39.787766 kernel: scsi host1: ahci
Jul 11 00:12:39.788108 kernel: BTRFS: device fsid 54fb9359-b495-4b0c-b313-b0e2955e4a38 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (470)
Jul 11 00:12:39.788121 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474)
Jul 11 00:12:39.782863 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
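The GPT complaints above ("GPT:9289727 != 19775487") mean the backup GPT header is not at the last LBA: the image was evidently grown after partitioning, and disk-uuid.service updates the headers a few entries later. A sketch that performs the same check by reading the primary header at LBA 1 (assumes 512-byte sectors and read access to the device; /dev/vda matches this log but is otherwise an assumption):

```python
# Compare the backup-GPT LBA recorded in the primary header against the
# disk's actual last LBA.
import struct

DISK = "/dev/vda"   # device seen in the log; adjust as needed
SECTOR = 512

with open(DISK, "rb") as f:
    f.seek(1 * SECTOR)              # primary GPT header lives at LBA 1
    header = f.read(92)
    size = f.seek(0, 2)             # seek to end to learn the disk size

assert header[:8] == b"EFI PART", "no GPT signature found"
backup_lba = struct.unpack_from("<Q", header, 32)[0]  # offset 32: backup LBA
last_lba = size // SECTOR - 1
print(f"backup header at LBA {backup_lba}, disk ends at LBA {last_lba}")
if backup_lba != last_lba:
    print("alternate GPT header is not at the end of the disk")
```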
Jul 11 00:12:39.821839 kernel: scsi host2: ahci
Jul 11 00:12:39.822081 kernel: scsi host3: ahci
Jul 11 00:12:39.822280 kernel: scsi host4: ahci
Jul 11 00:12:39.822466 kernel: scsi host5: ahci
Jul 11 00:12:39.822683 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jul 11 00:12:39.822697 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jul 11 00:12:39.822707 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jul 11 00:12:39.822722 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jul 11 00:12:39.822732 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jul 11 00:12:39.822742 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jul 11 00:12:39.824576 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:12:39.832646 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 00:12:39.847306 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 00:12:39.849939 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 00:12:39.858514 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:12:39.872693 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 00:12:39.876134 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:12:39.883944 disk-uuid[558]: Primary Header is updated.
Jul 11 00:12:39.883944 disk-uuid[558]: Secondary Entries is updated.
Jul 11 00:12:39.883944 disk-uuid[558]: Secondary Header is updated.
Jul 11 00:12:39.887553 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:12:39.896359 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:12:39.901167 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:12:40.099661 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 11 00:12:40.099724 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 11 00:12:40.099746 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 11 00:12:40.100559 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 11 00:12:40.101583 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 11 00:12:40.102567 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 11 00:12:40.102581 kernel: ata3.00: applying bridge limits
Jul 11 00:12:40.103552 kernel: ata3.00: configured for UDMA/100
Jul 11 00:12:40.105554 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 11 00:12:40.108575 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 11 00:12:40.150078 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 11 00:12:40.150304 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 11 00:12:40.164560 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 11 00:12:40.895565 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:12:40.895860 disk-uuid[560]: The operation has completed successfully.
Jul 11 00:12:40.927127 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 00:12:40.927258 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 00:12:40.952704 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 00:12:40.956039 sh[591]: Success
Jul 11 00:12:40.968567 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 11 00:12:41.019904 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 00:12:41.034028 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 00:12:41.038474 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 00:12:41.048626 kernel: BTRFS info (device dm-0): first mount of filesystem 54fb9359-b495-4b0c-b313-b0e2955e4a38
Jul 11 00:12:41.048681 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:12:41.048693 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 11 00:12:41.049598 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 11 00:12:41.050879 kernel: BTRFS info (device dm-0): using free space tree
Jul 11 00:12:41.055257 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 00:12:41.057742 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 00:12:41.067738 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 00:12:41.070286 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 00:12:41.077694 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:12:41.077728 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:12:41.077743 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:12:41.080758 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:12:41.089331 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 11 00:12:41.090971 kernel: BTRFS info (device vda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:12:41.099873 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 00:12:41.105708 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 00:12:41.233253 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:12:41.277909 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:12:41.300935 ignition[673]: Ignition 2.19.0
Jul 11 00:12:41.300948 ignition[673]: Stage: fetch-offline
Jul 11 00:12:41.301003 ignition[673]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:12:41.301017 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:12:41.301161 ignition[673]: parsed url from cmdline: ""
Jul 11 00:12:41.301166 ignition[673]: no config URL provided
Jul 11 00:12:41.301171 ignition[673]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 00:12:41.301181 ignition[673]: no config at "/usr/lib/ignition/user.ign"
Jul 11 00:12:41.301215 ignition[673]: op(1): [started] loading QEMU firmware config module
Jul 11 00:12:41.301220 ignition[673]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 00:12:41.312115 ignition[673]: op(1): [finished] loading QEMU firmware config module
Jul 11 00:12:41.312146 ignition[673]: QEMU firmware config was not found. Ignoring...
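The fetch-offline stage above loads the qemu_fw_cfg module (op(1)) to look for a config passed through QEMU's firmware-config device, e.g. -fw_cfg name=opt/com.coreos/config,file=config.ign. Once the module is loaded, the blob appears in sysfs; a sketch of reading it (the by_name path follows the qemu_fw_cfg kernel module's layout, an assumption for this particular VM):

```python
# Read an Ignition config exposed through QEMU fw_cfg, if present.
PATH = "/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw"

try:
    with open(PATH, "rb") as f:
        print(f.read().decode())
except FileNotFoundError:
    print("QEMU firmware config was not found. Ignoring...")
```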
Jul 11 00:12:41.314116 systemd-networkd[776]: lo: Link UP
Jul 11 00:12:41.314120 systemd-networkd[776]: lo: Gained carrier
Jul 11 00:12:41.315838 systemd-networkd[776]: Enumeration completed
Jul 11 00:12:41.316245 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:12:41.316249 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:12:41.317410 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:12:41.317882 systemd[1]: Reached target network.target - Network.
Jul 11 00:12:41.318171 systemd-networkd[776]: eth0: Link UP
Jul 11 00:12:41.318175 systemd-networkd[776]: eth0: Gained carrier
Jul 11 00:12:41.318182 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:12:41.342672 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:12:41.363754 ignition[673]: parsing config with SHA512: aed6fe83e13c59981f86d7aba932a638374de48f5fd0087d5678fbcdaa14c01a63416cf5879825d1260fc1169273aae051ad46fddcd89ff545ea47cd9dc50c09
Jul 11 00:12:41.371477 unknown[673]: fetched base config from "system"
Jul 11 00:12:41.372044 unknown[673]: fetched user config from "qemu"
Jul 11 00:12:41.372484 ignition[673]: fetch-offline: fetch-offline passed
Jul 11 00:12:41.372600 ignition[673]: Ignition finished successfully
Jul 11 00:12:41.377144 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:12:41.379499 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 00:12:41.395677 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 00:12:41.417009 ignition[783]: Ignition 2.19.0
Jul 11 00:12:41.417021 ignition[783]: Stage: kargs
Jul 11 00:12:41.417213 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:12:41.417226 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:12:41.418058 ignition[783]: kargs: kargs passed
Jul 11 00:12:41.418113 ignition[783]: Ignition finished successfully
Jul 11 00:12:41.424667 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 00:12:41.436658 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 00:12:41.450823 ignition[791]: Ignition 2.19.0
Jul 11 00:12:41.450842 ignition[791]: Stage: disks
Jul 11 00:12:41.451011 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:12:41.451022 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:12:41.454777 ignition[791]: disks: disks passed
Jul 11 00:12:41.454831 ignition[791]: Ignition finished successfully
Jul 11 00:12:41.458129 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 00:12:41.458408 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 00:12:41.460031 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:12:41.462083 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:12:41.462392 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:12:41.462717 systemd[1]: Reached target basic.target - Basic System.
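The "parsing config with SHA512" entry above is Ignition fingerprinting the fetched config before applying it. A minimal sketch that writes a user.ign of the kind read earlier and prints the same style of digest (the SSH key is a placeholder; the key layout follows the Ignition 3.x config spec):

```python
# Emit a minimal Ignition config and its SHA512 fingerprint.
import hashlib
import json

config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]},
        ]
    },
}

blob = json.dumps(config, indent=2).encode()
with open("user.ign", "wb") as f:
    f.write(blob)
print("SHA512:", hashlib.sha512(blob).hexdigest())
```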
Jul 11 00:12:41.477687 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 00:12:41.495570 systemd-resolved[227]: Detected conflict on linux IN A 10.0.0.36
Jul 11 00:12:41.495609 systemd-resolved[227]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Jul 11 00:12:41.498986 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 11 00:12:41.505183 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 00:12:41.513735 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 00:12:41.607556 kernel: EXT4-fs (vda9): mounted filesystem 66ba5133-8c5a-461b-b2c1-a823c72af79b r/w with ordered data mode. Quota mode: none.
Jul 11 00:12:41.607844 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 00:12:41.610108 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:12:41.628629 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:12:41.631063 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 00:12:41.633487 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 00:12:41.633543 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 00:12:41.633568 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:12:41.641137 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Jul 11 00:12:41.641174 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:12:41.641189 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:12:41.641991 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:12:41.643072 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 00:12:41.645837 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:12:41.654686 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 00:12:41.657823 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:12:41.688234 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 00:12:41.692285 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Jul 11 00:12:41.697197 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 00:12:41.701985 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 00:12:41.788646 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 00:12:41.794779 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 00:12:41.797545 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 00:12:41.805556 kernel: BTRFS info (device vda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:12:41.823701 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 00:12:41.850044 ignition[923]: INFO : Ignition 2.19.0
Jul 11 00:12:41.850044 ignition[923]: INFO : Stage: mount
Jul 11 00:12:41.851904 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:12:41.851904 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:12:41.854354 ignition[923]: INFO : mount: mount passed
Jul 11 00:12:41.855262 ignition[923]: INFO : Ignition finished successfully
Jul 11 00:12:41.857245 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 00:12:41.868644 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 00:12:42.048157 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 00:12:42.075714 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:12:42.083572 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Jul 11 00:12:42.083598 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:12:42.084911 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:12:42.084935 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:12:42.088558 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:12:42.089395 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:12:42.119244 ignition[955]: INFO : Ignition 2.19.0
Jul 11 00:12:42.119244 ignition[955]: INFO : Stage: files
Jul 11 00:12:42.120869 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:12:42.120869 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:12:42.120869 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 00:12:42.124601 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 00:12:42.124601 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 00:12:42.129137 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 00:12:42.130525 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 00:12:42.132162 unknown[955]: wrote ssh authorized keys file for user: core
Jul 11 00:12:42.133354 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 00:12:42.134742 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 11 00:12:42.136703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 11 00:12:42.469904 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 11 00:12:42.880060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 11 00:12:42.880060 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 00:12:42.883884 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 00:12:42.883884 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:12:42.883884 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:12:42.883884 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:12:42.883884 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:12:42.883884 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:12:42.883884 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:12:42.883884 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:12:42.883884 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:12:42.883884 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 11 00:12:42.883884 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 11 00:12:42.883884 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 11 00:12:42.883884 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 11 00:12:42.956797 systemd-networkd[776]: eth0: Gained IPv6LL
Jul 11 00:12:43.506661 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 11 00:12:44.201402 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 11 00:12:44.201402 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 11 00:12:44.205004 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:12:44.207393 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:12:44.207393 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 11 00:12:44.207393 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 11 00:12:44.211485 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:12:44.213320 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:12:44.213320 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 11 00:12:44.216298 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:12:44.246380 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:12:44.255720 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:12:44.257461 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:12:44.257461 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 11 00:12:44.257461 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 00:12:44.257461 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:12:44.257461 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:12:44.257461 ignition[955]: INFO : files: files passed
Jul 11 00:12:44.257461 ignition[955]: INFO : Ignition finished successfully
Jul 11 00:12:44.259593 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 11 00:12:44.270704 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 00:12:44.272561 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 00:12:44.276290 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 00:12:44.276413 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 00:12:44.285052 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 00:12:44.288446 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:12:44.288446 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:12:44.291643 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:12:44.290871 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:12:44.293360 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 00:12:44.304686 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 00:12:44.333084 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 00:12:44.334138 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 00:12:44.336717 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 00:12:44.338890 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 00:12:44.340862 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 00:12:44.343150 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 00:12:44.363677 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:12:44.372797 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 00:12:44.385808 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:12:44.387087 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:12:44.389319 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 00:12:44.391329 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 00:12:44.391454 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:12:44.394251 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 00:12:44.396298 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 00:12:44.398116 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 00:12:44.399986 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:12:44.402100 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 00:12:44.404240 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 00:12:44.406274 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:12:44.407301 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 00:12:44.407844 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 00:12:44.408137 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 00:12:44.408439 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 00:12:44.408586 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:12:44.416174 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:12:44.417172 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:12:44.419088 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 11 00:12:44.419201 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:12:44.421221 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 11 00:12:44.421331 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:12:44.423828 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 11 00:12:44.423940 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:12:44.424236 systemd[1]: Stopped target paths.target - Path Units.
Jul 11 00:12:44.424549 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 11 00:12:44.428668 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:12:44.431205 systemd[1]: Stopped target slices.target - Slice Units.
Jul 11 00:12:44.433056 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 11 00:12:44.434970 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 11 00:12:44.435150 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:12:44.437290 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 11 00:12:44.437395 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:12:44.439010 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 11 00:12:44.439154 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:12:44.440865 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 11 00:12:44.441005 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 11 00:12:44.453757 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 11 00:12:44.454842 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 11 00:12:44.455148 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:12:44.458937 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 11 00:12:44.462058 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 11 00:12:44.462295 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:12:44.464131 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 11 00:12:44.464300 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:12:44.474527 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 11 00:12:44.474664 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 11 00:12:44.485321 ignition[1009]: INFO : Ignition 2.19.0
Jul 11 00:12:44.485321 ignition[1009]: INFO : Stage: umount
Jul 11 00:12:44.487034 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:12:44.487034 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:12:44.487034 ignition[1009]: INFO : umount: umount passed
Jul 11 00:12:44.487034 ignition[1009]: INFO : Ignition finished successfully
Jul 11 00:12:44.488530 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 11 00:12:44.488670 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 11 00:12:44.489309 systemd[1]: Stopped target network.target - Network.
Jul 11 00:12:44.493019 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 11 00:12:44.493088 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 11 00:12:44.493985 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 11 00:12:44.494037 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 11 00:12:44.494284 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 11 00:12:44.494328 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 11 00:12:44.494949 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 11 00:12:44.494999 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 11 00:12:44.495400 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 11 00:12:44.500833 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 11 00:12:44.508090 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 11 00:12:44.508227 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 11 00:12:44.509425 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 11 00:12:44.509487 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:12:44.511664 systemd-networkd[776]: eth0: DHCPv6 lease lost
Jul 11 00:12:44.513905 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 11 00:12:44.514035 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 11 00:12:44.514466 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 11 00:12:44.514508 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:12:44.527644 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 11 00:12:44.527713 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 11 00:12:44.527769 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:12:44.530810 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 00:12:44.530862 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:12:44.532918 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 11 00:12:44.532967 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:12:44.535272 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:12:44.546719 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 11 00:12:44.549236 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 11 00:12:44.549386 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 11 00:12:44.551751 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 11 00:12:44.552001 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:12:44.554451 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 11 00:12:44.554504 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:12:44.555458 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 11 00:12:44.555503 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:12:44.555900 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 11 00:12:44.555950 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:12:44.556575 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 11 00:12:44.556624 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:12:44.557381 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:12:44.557430 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:12:44.559137 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 11 00:12:44.567145 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 11 00:12:44.567207 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:12:44.567509 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:12:44.567572 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:12:44.578079 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 11 00:12:44.579209 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 11 00:12:44.700610 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 11 00:12:44.701604 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 11 00:12:44.703584 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 11 00:12:44.705577 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 11 00:12:44.705634 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 11 00:12:44.721685 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 11 00:12:44.728524 systemd[1]: Switching root.
Jul 11 00:12:44.763350 systemd-journald[193]: Journal stopped
Jul 11 00:12:45.790446 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
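The pivot logged above ("Switching root."), after which journald receives SIGTERM and is restarted in the new root, is the standard initrd-to-real-root handoff. A sketch of the underlying step and how one might locate it after boot (assuming a systemd-based initrd like this one):

    systemctl switch-root /sysroot                       # what initrd-switch-root.service performs
    journalctl -b -o short-precise | grep -i "Switching root"   # find the handoff in the persisted journal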
Jul 11 00:12:45.790519 kernel: SELinux: policy capability network_peer_controls=1
Jul 11 00:12:45.790630 kernel: SELinux: policy capability open_perms=1
Jul 11 00:12:45.790644 kernel: SELinux: policy capability extended_socket_class=1
Jul 11 00:12:45.790658 kernel: SELinux: policy capability always_check_network=0
Jul 11 00:12:45.790670 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 11 00:12:45.790683 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 11 00:12:45.790696 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 11 00:12:45.790709 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 11 00:12:45.790722 kernel: audit: type=1403 audit(1752192765.075:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 11 00:12:45.790742 systemd[1]: Successfully loaded SELinux policy in 39.565ms.
Jul 11 00:12:45.790778 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.798ms.
Jul 11 00:12:45.790793 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:12:45.790807 systemd[1]: Detected virtualization kvm.
Jul 11 00:12:45.790821 systemd[1]: Detected architecture x86-64.
Jul 11 00:12:45.790836 systemd[1]: Detected first boot.
Jul 11 00:12:45.790849 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:12:45.790874 zram_generator::config[1054]: No configuration found.
Jul 11 00:12:45.790889 systemd[1]: Populated /etc with preset unit settings.
Jul 11 00:12:45.790906 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 11 00:12:45.790921 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 11 00:12:45.790934 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 11 00:12:45.790949 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 11 00:12:45.790963 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 11 00:12:45.790977 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 11 00:12:45.790991 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 11 00:12:45.791005 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 11 00:12:45.791022 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 11 00:12:45.791036 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 11 00:12:45.791049 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 11 00:12:45.791064 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:12:45.791078 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:12:45.791092 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 11 00:12:45.791106 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 11 00:12:45.791121 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
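Both the loaded SELinux policy state and the systemd feature string logged above can be inspected from a shell on the running system. A sketch (sestatus assumes the policycoreutils tools are present on the image):

    sestatus             # reports whether a policy is loaded and the enforcing/permissive mode
    systemctl --version  # prints the same "+PAM +AUDIT +SELINUX ..." feature flags systemd logged at boot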
Jul 11 00:12:45.791135 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:12:45.791151 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 11 00:12:45.791170 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:12:45.791191 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 11 00:12:45.791206 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 11 00:12:45.791220 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:12:45.791234 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 11 00:12:45.791248 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:12:45.791261 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:12:45.791279 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:12:45.791292 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:12:45.791307 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 11 00:12:45.791321 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 11 00:12:45.791343 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:12:45.791358 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:12:45.791372 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:12:45.791386 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 11 00:12:45.791399 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 11 00:12:45.791416 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 11 00:12:45.791430 systemd[1]: Mounting media.mount - External Media Directory...
Jul 11 00:12:45.791444 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:12:45.791458 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 11 00:12:45.791472 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 11 00:12:45.791486 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 11 00:12:45.791508 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 11 00:12:45.791527 systemd[1]: Reached target machines.target - Containers.
Jul 11 00:12:45.791682 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 11 00:12:45.791700 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:12:45.791714 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:12:45.791728 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 11 00:12:45.791742 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:12:45.791756 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:12:45.791770 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:12:45.791784 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 11 00:12:45.791797 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:12:45.791814 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 11 00:12:45.791828 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 11 00:12:45.791842 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 11 00:12:45.791856 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 11 00:12:45.791870 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 11 00:12:45.791883 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:12:45.791897 kernel: fuse: init (API version 7.39)
Jul 11 00:12:45.791910 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:12:45.791923 kernel: loop: module loaded
Jul 11 00:12:45.791940 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 00:12:45.791954 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 11 00:12:45.791969 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:12:45.792007 systemd-journald[1117]: Collecting audit messages is disabled.
Jul 11 00:12:45.792032 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 11 00:12:45.792046 systemd[1]: Stopped verity-setup.service.
Jul 11 00:12:45.792060 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:12:45.792076 systemd-journald[1117]: Journal started
Jul 11 00:12:45.792101 systemd-journald[1117]: Runtime Journal (/run/log/journal/f3bbd3f2525b4a3788d08c7ac8d9228c) is 6.0M, max 48.4M, 42.3M free.
Jul 11 00:12:45.586038 systemd[1]: Queued start job for default target multi-user.target.
Jul 11 00:12:45.606565 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 11 00:12:45.607008 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 11 00:12:45.796941 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:12:45.798205 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 11 00:12:45.799417 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 11 00:12:45.801487 systemd[1]: Mounted media.mount - External Media Directory.
Jul 11 00:12:45.802635 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 11 00:12:45.803824 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 11 00:12:45.805051 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 11 00:12:45.806279 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:12:45.808098 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 11 00:12:45.808282 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 11 00:12:45.809556 kernel: ACPI: bus type drm_connector registered
Jul 11 00:12:45.810289 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:12:45.810564 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:12:45.812076 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:12:45.812254 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:12:45.813796 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:12:45.813975 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:12:45.815527 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 11 00:12:45.815726 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 11 00:12:45.817257 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:12:45.817449 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:12:45.818874 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:12:45.820328 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 00:12:45.821986 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 11 00:12:45.837232 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 00:12:45.847695 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 11 00:12:45.850760 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 11 00:12:45.851882 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 11 00:12:45.851920 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:12:45.853946 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 11 00:12:45.856304 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 11 00:12:45.859056 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 11 00:12:45.860177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:12:45.863654 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 11 00:12:45.866234 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 11 00:12:45.867501 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:12:45.868820 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 11 00:12:45.871759 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:12:45.874744 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:12:45.877048 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 11 00:12:45.885708 systemd-journald[1117]: Time spent on flushing to /var/log/journal/f3bbd3f2525b4a3788d08c7ac8d9228c is 23.979ms for 950 entries.
Jul 11 00:12:45.885708 systemd-journald[1117]: System Journal (/var/log/journal/f3bbd3f2525b4a3788d08c7ac8d9228c) is 8.0M, max 195.6M, 187.6M free.
Jul 11 00:12:45.923575 systemd-journald[1117]: Received client request to flush runtime journal.
Jul 11 00:12:45.923622 kernel: loop0: detected capacity change from 0 to 140768
Jul 11 00:12:45.881392 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 11 00:12:45.882820 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 11 00:12:45.884269 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 11 00:12:45.887528 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 11 00:12:45.892851 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 11 00:12:45.903808 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 11 00:12:45.912701 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 11 00:12:45.918655 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 11 00:12:45.919997 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:12:45.925828 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 11 00:12:45.929067 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 11 00:12:45.935621 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 11 00:12:45.944849 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:12:45.948804 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 11 00:12:45.955921 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 11 00:12:45.956566 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 11 00:12:45.965396 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 11 00:12:45.968775 kernel: loop1: detected capacity change from 0 to 142488
Jul 11 00:12:45.974832 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:12:45.996884 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jul 11 00:12:45.997356 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jul 11 00:12:46.002667 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:12:46.003570 kernel: loop2: detected capacity change from 0 to 224512
Jul 11 00:12:46.053858 kernel: loop3: detected capacity change from 0 to 140768
Jul 11 00:12:46.071581 kernel: loop4: detected capacity change from 0 to 142488
Jul 11 00:12:46.081560 kernel: loop5: detected capacity change from 0 to 224512
Jul 11 00:12:46.088584 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 11 00:12:46.089199 (sd-merge)[1192]: Merged extensions into '/usr'.
Jul 11 00:12:46.093746 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 11 00:12:46.093763 systemd[1]: Reloading...
Jul 11 00:12:46.184563 zram_generator::config[1218]: No configuration found.
Jul 11 00:12:46.267857 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 11 00:12:46.320552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:12:46.370005 systemd[1]: Reloading finished in 275 ms.
Jul 11 00:12:46.401792 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 11 00:12:46.403458 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 11 00:12:46.418731 systemd[1]: Starting ensure-sysext.service...
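The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, followed by a daemon reload. The merge state can be inspected and redone with the systemd-sysext tool; a sketch:

    systemd-sysext status    # lists each hierarchy and the extension images currently merged into it
    systemd-sysext refresh   # re-merges after images under /etc/extensions or /var/lib/extensions change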
Jul 11 00:12:46.421329 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:12:46.429813 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)...
Jul 11 00:12:46.429831 systemd[1]: Reloading...
Jul 11 00:12:46.484674 zram_generator::config[1279]: No configuration found.
Jul 11 00:12:46.507245 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 11 00:12:46.507649 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 11 00:12:46.508708 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 11 00:12:46.509017 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Jul 11 00:12:46.509095 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Jul 11 00:12:46.512408 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:12:46.512420 systemd-tmpfiles[1256]: Skipping /boot
Jul 11 00:12:46.525135 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:12:46.525147 systemd-tmpfiles[1256]: Skipping /boot
Jul 11 00:12:46.609681 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:12:46.659893 systemd[1]: Reloading finished in 229 ms.
Jul 11 00:12:46.679090 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 00:12:46.691002 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:12:46.699499 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 11 00:12:46.702229 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 11 00:12:46.704573 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 11 00:12:46.708471 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:12:46.714604 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:12:46.717214 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 11 00:12:46.721754 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:12:46.722105 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:12:46.726776 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:12:46.729824 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:12:46.733759 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:12:46.735044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:12:46.739488 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 00:12:46.740556 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:12:46.742349 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:12:46.742647 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:12:46.744253 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:12:46.744434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:12:46.753294 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Jul 11 00:12:46.753842 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:12:46.754047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:12:46.755648 augenrules[1348]: No rules
Jul 11 00:12:46.756147 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:12:46.759642 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:12:46.760853 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:12:46.761417 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:12:46.762327 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 11 00:12:46.764001 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 00:12:46.766187 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 00:12:46.768941 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:12:46.769219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:12:46.776910 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:12:46.777165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:12:46.780919 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:12:46.781166 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:12:46.782741 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:12:46.796079 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:12:46.796761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:12:46.806751 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:12:46.809839 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:12:46.813335 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:12:46.817182 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:12:46.818766 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:12:46.824735 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:12:46.835775 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 00:12:46.837171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:12:46.839738 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 00:12:46.850507 systemd[1]: Finished ensure-sysext.service.
Jul 11 00:12:46.856737 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 00:12:46.859325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:12:46.859506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:12:46.861681 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:12:46.861863 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:12:46.863583 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:12:46.863938 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:12:46.865885 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:12:46.866228 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:12:46.891503 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:12:46.891634 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:12:46.899724 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 00:12:46.901592 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 00:12:46.915355 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 11 00:12:46.926471 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 00:12:46.942471 systemd-resolved[1325]: Positive Trust Anchors:
Jul 11 00:12:46.942500 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:12:46.942552 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:12:46.949396 systemd-resolved[1325]: Defaulting to hostname 'linux'.
Jul 11 00:12:46.955480 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:12:46.957563 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1381)
Jul 11 00:12:46.957993 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:12:46.986545 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 11 00:12:46.988143 systemd-networkd[1388]: lo: Link UP
Jul 11 00:12:46.988165 systemd-networkd[1388]: lo: Gained carrier
Jul 11 00:12:46.990464 systemd-networkd[1388]: Enumeration completed
Jul 11 00:12:46.990582 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:12:46.991290 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:12:46.991295 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:12:46.991933 systemd[1]: Reached target network.target - Network.
Jul 11 00:12:46.992872 systemd-networkd[1388]: eth0: Link UP
Jul 11 00:12:46.992883 systemd-networkd[1388]: eth0: Gained carrier
Jul 11 00:12:46.992905 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:12:47.002614 kernel: ACPI: button: Power Button [PWRF]
Jul 11 00:12:47.003858 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 11 00:12:47.010600 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:12:47.016743 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 11 00:12:47.678900 systemd-resolved[1325]: Clock change detected. Flushing caches.
Jul 11 00:12:47.678973 systemd-timesyncd[1399]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 11 00:12:47.679055 systemd-timesyncd[1399]: Initial clock synchronization to Fri 2025-07-11 00:12:47.678841 UTC.
Jul 11 00:12:47.679838 systemd[1]: Reached target time-set.target - System Time Set.
Jul 11 00:12:47.702803 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 11 00:12:47.708247 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 11 00:12:47.709320 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 11 00:12:47.709523 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 11 00:12:47.709794 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:12:47.721943 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 11 00:12:47.752107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:12:47.763793 kernel: mousedev: PS/2 mouse device common for all mice
Jul 11 00:12:47.774948 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 11 00:12:47.834086 kernel: kvm_amd: TSC scaling supported
Jul 11 00:12:47.834171 kernel: kvm_amd: Nested Virtualization enabled
Jul 11 00:12:47.834189 kernel: kvm_amd: Nested Paging enabled
Jul 11 00:12:47.835204 kernel: kvm_amd: LBR virtualization supported
Jul 11 00:12:47.835227 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 11 00:12:47.835866 kernel: kvm_amd: Virtual GIF supported
Jul 11 00:12:47.859795 kernel: EDAC MC: Ver: 3.0.0
Jul 11 00:12:47.901103 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 11 00:12:47.902796 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:12:47.915943 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 11 00:12:47.924679 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 11 00:12:47.959772 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 11 00:12:47.961230 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
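The lease, DNS, and time-synchronization state logged above can be queried afterwards with the standard systemd client tools; a sketch:

    networkctl status eth0        # shows the DHCPv4 address (10.0.0.36/16 via 10.0.0.1 in this boot)
    resolvectl status             # shows the DNSSEC trust anchors and per-link DNS configuration
    timedatectl timesync-status   # shows the NTP server systemd-timesyncd contacted (10.0.0.1:123 here)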
Jul 11 00:12:47.962300 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:12:47.963435 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 11 00:12:47.964654 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 11 00:12:47.966061 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 11 00:12:47.967194 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 11 00:12:47.968536 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 11 00:12:47.969714 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 11 00:12:47.969743 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:12:47.970629 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:12:47.972528 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 11 00:12:47.975448 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 11 00:12:47.985289 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 11 00:12:47.987644 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 11 00:12:47.989236 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 11 00:12:47.990423 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:12:47.991390 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:12:47.992374 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:12:47.992401 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:12:47.993462 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 11 00:12:47.995591 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 11 00:12:47.997834 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 11 00:12:47.999869 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 11 00:12:48.006124 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 11 00:12:48.007259 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 11 00:12:48.009739 jq[1431]: false
Jul 11 00:12:48.009924 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 11 00:12:48.013862 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 11 00:12:48.016124 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 11 00:12:48.019920 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 11 00:12:48.025064 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 11 00:12:48.026111 dbus-daemon[1430]: [system] SELinux support is enabled
Jul 11 00:12:48.026556 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 11 00:12:48.027073 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 11 00:12:48.027994 systemd[1]: Starting update-engine.service - Update Engine...
Jul 11 00:12:48.030965 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 11 00:12:48.033392 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 11 00:12:48.038227 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 11 00:12:48.039432 extend-filesystems[1432]: Found loop3
Jul 11 00:12:48.039432 extend-filesystems[1432]: Found loop4
Jul 11 00:12:48.039432 extend-filesystems[1432]: Found loop5
Jul 11 00:12:48.039432 extend-filesystems[1432]: Found sr0
Jul 11 00:12:48.039432 extend-filesystems[1432]: Found vda
Jul 11 00:12:48.039432 extend-filesystems[1432]: Found vda1
Jul 11 00:12:48.039432 extend-filesystems[1432]: Found vda2
Jul 11 00:12:48.039432 extend-filesystems[1432]: Found vda3
Jul 11 00:12:48.039432 extend-filesystems[1432]: Found usr
Jul 11 00:12:48.039432 extend-filesystems[1432]: Found vda4
Jul 11 00:12:48.039432 extend-filesystems[1432]: Found vda6
Jul 11 00:12:48.039432 extend-filesystems[1432]: Found vda7
Jul 11 00:12:48.039432 extend-filesystems[1432]: Found vda9
Jul 11 00:12:48.039432 extend-filesystems[1432]: Checking size of /dev/vda9
Jul 11 00:12:48.061932 jq[1443]: true
Jul 11 00:12:48.062073 update_engine[1441]: I20250711 00:12:48.053797 1441 main.cc:92] Flatcar Update Engine starting
Jul 11 00:12:48.062073 update_engine[1441]: I20250711 00:12:48.055456 1441 update_check_scheduler.cc:74] Next update check in 6m46s
Jul 11 00:12:48.043393 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 11 00:12:48.043656 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 11 00:12:48.051973 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 11 00:12:48.052210 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 11 00:12:48.064991 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 11 00:12:48.066298 systemd[1]: motdgen.service: Deactivated successfully.
Jul 11 00:12:48.067847 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 11 00:12:48.077454 jq[1449]: true
Jul 11 00:12:48.078368 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 11 00:12:48.078445 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 11 00:12:48.081014 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 11 00:12:48.081034 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
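Once update_engine is up (it starts above and schedules its first check in 6m46s), its state can be polled with the Flatcar client tool; a sketch, assuming update_engine_client is present on the image as usual:

    update_engine_client -status   # reports CURRENT_OP, the tracked channel, and current/new versions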
Jul 11 00:12:48.082804 extend-filesystems[1432]: Resized partition /dev/vda9
Jul 11 00:12:48.086627 extend-filesystems[1466]: resize2fs 1.47.1 (20-May-2024)
Jul 11 00:12:48.093785 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1359)
Jul 11 00:12:48.099293 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 11 00:12:48.099082 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 11 00:12:48.099570 tar[1448]: linux-amd64/LICENSE
Jul 11 00:12:48.099101 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 11 00:12:48.099275 systemd[1]: Started update-engine.service - Update Engine.
Jul 11 00:12:48.100837 tar[1448]: linux-amd64/helm
Jul 11 00:12:48.101065 systemd-logind[1439]: New seat seat0.
Jul 11 00:12:48.114968 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 11 00:12:48.116210 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 11 00:12:48.139851 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 11 00:12:48.172086 extend-filesystems[1466]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 11 00:12:48.172086 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 11 00:12:48.172086 extend-filesystems[1466]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 11 00:12:48.176337 extend-filesystems[1432]: Resized filesystem in /dev/vda9
Jul 11 00:12:48.188920 bash[1484]: Updated "/home/core/.ssh/authorized_keys"
Jul 11 00:12:48.189385 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 11 00:12:48.189751 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 11 00:12:48.192504 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 11 00:12:48.195439 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 11 00:12:48.204779 locksmithd[1472]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 11 00:12:48.234384 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 11 00:12:48.267342 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 11 00:12:48.274017 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 11 00:12:48.283816 systemd[1]: issuegen.service: Deactivated successfully.
Jul 11 00:12:48.284131 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 11 00:12:48.292964 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 11 00:12:48.403881 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 11 00:12:48.418212 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 11 00:12:48.421192 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 11 00:12:48.423074 systemd[1]: Reached target getty.target - Login Prompts.
Jul 11 00:12:48.464213 containerd[1453]: time="2025-07-11T00:12:48.464086722Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 11 00:12:48.515314 containerd[1453]: time="2025-07-11T00:12:48.515177415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:12:48.517537 containerd[1453]: time="2025-07-11T00:12:48.517367763Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:12:48.517537 containerd[1453]: time="2025-07-11T00:12:48.517418979Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 11 00:12:48.517537 containerd[1453]: time="2025-07-11T00:12:48.517439678Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 11 00:12:48.517897 containerd[1453]: time="2025-07-11T00:12:48.517855297Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 11 00:12:48.517897 containerd[1453]: time="2025-07-11T00:12:48.517883109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 11 00:12:48.518046 containerd[1453]: time="2025-07-11T00:12:48.517967588Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:12:48.518046 containerd[1453]: time="2025-07-11T00:12:48.517981965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:12:48.518278 containerd[1453]: time="2025-07-11T00:12:48.518244337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:12:48.518278 containerd[1453]: time="2025-07-11T00:12:48.518266298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 11 00:12:48.518324 containerd[1453]: time="2025-07-11T00:12:48.518280234Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:12:48.518324 containerd[1453]: time="2025-07-11T00:12:48.518291495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 11 00:12:48.518426 containerd[1453]: time="2025-07-11T00:12:48.518407363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:12:48.518737 containerd[1453]: time="2025-07-11T00:12:48.518708036Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:12:48.518892 containerd[1453]: time="2025-07-11T00:12:48.518864480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:12:48.518892 containerd[1453]: time="2025-07-11T00:12:48.518882794Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 11 00:12:48.519150 containerd[1453]: time="2025-07-11T00:12:48.519123856Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 11 00:12:48.519212 containerd[1453]: time="2025-07-11T00:12:48.519195440Z" level=info msg="metadata content store policy set" policy=shared
Jul 11 00:12:48.525286 containerd[1453]: time="2025-07-11T00:12:48.525258120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 11 00:12:48.525338 containerd[1453]: time="2025-07-11T00:12:48.525312432Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 11 00:12:48.525338 containerd[1453]: time="2025-07-11T00:12:48.525329704Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 11 00:12:48.525377 containerd[1453]: time="2025-07-11T00:12:48.525358398Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 11 00:12:48.525377 containerd[1453]: time="2025-07-11T00:12:48.525373667Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 11 00:12:48.525575 containerd[1453]: time="2025-07-11T00:12:48.525553955Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 11 00:12:48.525915 containerd[1453]: time="2025-07-11T00:12:48.525894363Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 11 00:12:48.526069 containerd[1453]: time="2025-07-11T00:12:48.526050947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 11 00:12:48.526091 containerd[1453]: time="2025-07-11T00:12:48.526070393Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 11 00:12:48.526091 containerd[1453]: time="2025-07-11T00:12:48.526083097Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 11 00:12:48.526128 containerd[1453]: time="2025-07-11T00:12:48.526110408Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 11 00:12:48.526128 containerd[1453]: time="2025-07-11T00:12:48.526125086Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 11 00:12:48.526177 containerd[1453]: time="2025-07-11T00:12:48.526136858Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 11 00:12:48.526177 containerd[1453]: time="2025-07-11T00:12:48.526153419Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 11 00:12:48.526177 containerd[1453]: time="2025-07-11T00:12:48.526167936Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 11 00:12:48.526231 containerd[1453]: time="2025-07-11T00:12:48.526184227Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 11 00:12:48.526231 containerd[1453]: time="2025-07-11T00:12:48.526197522Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 11 00:12:48.526231 containerd[1453]: time="2025-07-11T00:12:48.526210556Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"...
type=io.containerd.service.v1 Jul 11 00:12:48.526289 containerd[1453]: time="2025-07-11T00:12:48.526238308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526289 containerd[1453]: time="2025-07-11T00:12:48.526267353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526289 containerd[1453]: time="2025-07-11T00:12:48.526280968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526349 containerd[1453]: time="2025-07-11T00:12:48.526293372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526349 containerd[1453]: time="2025-07-11T00:12:48.526305945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526349 containerd[1453]: time="2025-07-11T00:12:48.526318539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526349 containerd[1453]: time="2025-07-11T00:12:48.526334258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526349 containerd[1453]: time="2025-07-11T00:12:48.526346642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526440 containerd[1453]: time="2025-07-11T00:12:48.526361129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526440 containerd[1453]: time="2025-07-11T00:12:48.526378842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526440 containerd[1453]: time="2025-07-11T00:12:48.526390514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526440 containerd[1453]: time="2025-07-11T00:12:48.526402186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526440 containerd[1453]: time="2025-07-11T00:12:48.526417234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526440 containerd[1453]: time="2025-07-11T00:12:48.526436941Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 11 00:12:48.526567 containerd[1453]: time="2025-07-11T00:12:48.526460385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526567 containerd[1453]: time="2025-07-11T00:12:48.526474491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526567 containerd[1453]: time="2025-07-11T00:12:48.526487626Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 11 00:12:48.526626 containerd[1453]: time="2025-07-11T00:12:48.526567115Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:12:48.526626 containerd[1453]: time="2025-07-11T00:12:48.526587674Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 11 00:12:48.526626 containerd[1453]: time="2025-07-11T00:12:48.526598895Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:12:48.526626 containerd[1453]: time="2025-07-11T00:12:48.526615295Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 11 00:12:48.526626 containerd[1453]: time="2025-07-11T00:12:48.526625124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.526738 containerd[1453]: time="2025-07-11T00:12:48.526649630Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 11 00:12:48.526738 containerd[1453]: time="2025-07-11T00:12:48.526663746Z" level=info msg="NRI interface is disabled by configuration." Jul 11 00:12:48.526738 containerd[1453]: time="2025-07-11T00:12:48.526676660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 11 00:12:48.527087 containerd[1453]: time="2025-07-11T00:12:48.527017560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:12:48.527087 containerd[1453]: time="2025-07-11T00:12:48.527078163Z" level=info msg="Connect containerd service" Jul 11 00:12:48.527328 containerd[1453]: time="2025-07-11T00:12:48.527117738Z" level=info msg="using legacy CRI server" Jul 11 00:12:48.527328 containerd[1453]: time="2025-07-11T00:12:48.527125091Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:12:48.527328 containerd[1453]: time="2025-07-11T00:12:48.527241329Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:12:48.529398 containerd[1453]: time="2025-07-11T00:12:48.529364391Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:12:48.529655 containerd[1453]: time="2025-07-11T00:12:48.529573433Z" level=info msg="Start subscribing containerd event" Jul 11 00:12:48.529718 containerd[1453]: time="2025-07-11T00:12:48.529665976Z" level=info msg="Start recovering state" Jul 11 00:12:48.529837 containerd[1453]: time="2025-07-11T00:12:48.529805608Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:12:48.529897 containerd[1453]: time="2025-07-11T00:12:48.529871402Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:12:48.530347 containerd[1453]: time="2025-07-11T00:12:48.529812141Z" level=info msg="Start event monitor" Jul 11 00:12:48.530347 containerd[1453]: time="2025-07-11T00:12:48.530051810Z" level=info msg="Start snapshots syncer" Jul 11 00:12:48.530347 containerd[1453]: time="2025-07-11T00:12:48.530065796Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:12:48.530347 containerd[1453]: time="2025-07-11T00:12:48.530074052Z" level=info msg="Start streaming server" Jul 11 00:12:48.530256 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:12:48.530795 containerd[1453]: time="2025-07-11T00:12:48.530750781Z" level=info msg="containerd successfully booted in 0.068073s" Jul 11 00:12:48.655485 tar[1448]: linux-amd64/README.md Jul 11 00:12:48.672803 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:12:48.994023 systemd-networkd[1388]: eth0: Gained IPv6LL Jul 11 00:12:48.997914 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:12:48.999720 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:12:49.012168 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:12:49.014990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:12:49.017219 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:12:49.043318 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:12:49.045336 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:12:49.045612 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:12:49.048153 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
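The CNI load error in the containerd startup above is expected on a fresh node: the CRI config dumped in full there points NetworkPluginConfDir at /etc/cni/net.d (with NetworkPluginMaxConfNum:1), and that directory is still empty, so pod networking stays down until a CNI conflist lands there. As a hedged illustration only (the bridge plugin and 10.88.0.0/16 subnet below are placeholders, not what this cluster actually installs), a minimal conflist could be generated like so:

```python
import json

# Minimal CNI conflist; containerd's CRI plugin loads at most one config
# file from /etc/cni/net.d per the dump above. The "bridge" plugin and
# subnet are illustrative placeholders.
conflist = {
    "cniVersion": "0.4.0",
    "name": "demo-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

print(json.dumps(conflist, indent=2))  # e.g. save as /etc/cni/net.d/10-demo.conflist
```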
Jul 11 00:12:50.148727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:12:50.150363 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 00:12:50.151591 systemd[1]: Startup finished in 830ms (kernel) + 6.383s (initrd) + 4.452s (userspace) = 11.666s. Jul 11 00:12:50.172114 (kubelet)[1543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:12:50.762495 kubelet[1543]: E0711 00:12:50.762352 1543 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:12:50.766582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:12:50.766818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:12:50.767215 systemd[1]: kubelet.service: Consumed 1.595s CPU time. Jul 11 00:12:52.493547 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 00:12:52.495121 systemd[1]: Started sshd@0-10.0.0.36:22-10.0.0.1:39378.service - OpenSSH per-connection server daemon (10.0.0.1:39378). Jul 11 00:12:52.543720 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 39378 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:12:52.545898 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:52.555396 systemd-logind[1439]: New session 1 of user core. Jul 11 00:12:52.556982 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:12:52.571022 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:12:52.583613 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 00:12:52.586664 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 00:12:52.595747 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:12:52.699887 systemd[1560]: Queued start job for default target default.target. Jul 11 00:12:52.711156 systemd[1560]: Created slice app.slice - User Application Slice. Jul 11 00:12:52.711185 systemd[1560]: Reached target paths.target - Paths. Jul 11 00:12:52.711199 systemd[1560]: Reached target timers.target - Timers. Jul 11 00:12:52.712831 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:12:52.725035 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 00:12:52.725205 systemd[1560]: Reached target sockets.target - Sockets. Jul 11 00:12:52.725229 systemd[1560]: Reached target basic.target - Basic System. Jul 11 00:12:52.725276 systemd[1560]: Reached target default.target - Main User Target. Jul 11 00:12:52.725318 systemd[1560]: Startup finished in 122ms. Jul 11 00:12:52.725743 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:12:52.727493 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:12:52.788808 systemd[1]: Started sshd@1-10.0.0.36:22-10.0.0.1:39384.service - OpenSSH per-connection server daemon (10.0.0.1:39384). 
Jul 11 00:12:52.831474 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 39384 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:12:52.833058 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:52.837375 systemd-logind[1439]: New session 2 of user core. Jul 11 00:12:52.852904 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 00:12:52.907018 sshd[1571]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:52.918517 systemd[1]: sshd@1-10.0.0.36:22-10.0.0.1:39384.service: Deactivated successfully. Jul 11 00:12:52.920333 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:12:52.921725 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:12:52.932003 systemd[1]: Started sshd@2-10.0.0.36:22-10.0.0.1:39386.service - OpenSSH per-connection server daemon (10.0.0.1:39386). Jul 11 00:12:52.932834 systemd-logind[1439]: Removed session 2. Jul 11 00:12:52.963234 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 39386 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:12:52.964826 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:52.968678 systemd-logind[1439]: New session 3 of user core. Jul 11 00:12:52.979885 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:12:53.029633 sshd[1578]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:53.036412 systemd[1]: sshd@2-10.0.0.36:22-10.0.0.1:39386.service: Deactivated successfully. Jul 11 00:12:53.038294 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:12:53.039741 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:12:53.051072 systemd[1]: Started sshd@3-10.0.0.36:22-10.0.0.1:39392.service - OpenSSH per-connection server daemon (10.0.0.1:39392). Jul 11 00:12:53.052001 systemd-logind[1439]: Removed session 3. Jul 11 00:12:53.083778 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 39392 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:12:53.085708 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:53.090450 systemd-logind[1439]: New session 4 of user core. Jul 11 00:12:53.100957 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:12:53.156986 sshd[1585]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:53.169621 systemd[1]: sshd@3-10.0.0.36:22-10.0.0.1:39392.service: Deactivated successfully. Jul 11 00:12:53.171375 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:12:53.172990 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:12:53.186011 systemd[1]: Started sshd@4-10.0.0.36:22-10.0.0.1:39404.service - OpenSSH per-connection server daemon (10.0.0.1:39404). Jul 11 00:12:53.186945 systemd-logind[1439]: Removed session 4. Jul 11 00:12:53.219974 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 39404 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:12:53.221931 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:53.226751 systemd-logind[1439]: New session 5 of user core. Jul 11 00:12:53.236898 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 11 00:12:53.297643 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:12:53.298022 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:12:53.330600 sudo[1595]: pam_unix(sudo:session): session closed for user root Jul 11 00:12:53.332886 sshd[1592]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:53.344665 systemd[1]: sshd@4-10.0.0.36:22-10.0.0.1:39404.service: Deactivated successfully. Jul 11 00:12:53.346503 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:12:53.348041 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:12:53.349674 systemd[1]: Started sshd@5-10.0.0.36:22-10.0.0.1:39412.service - OpenSSH per-connection server daemon (10.0.0.1:39412). Jul 11 00:12:53.350514 systemd-logind[1439]: Removed session 5. Jul 11 00:12:53.389027 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 39412 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:12:53.390748 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:53.394670 systemd-logind[1439]: New session 6 of user core. Jul 11 00:12:53.406898 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 00:12:53.460734 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:12:53.461103 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:12:53.465425 sudo[1604]: pam_unix(sudo:session): session closed for user root Jul 11 00:12:53.471402 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 11 00:12:53.471739 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:12:53.490004 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 11 00:12:53.491857 auditctl[1607]: No rules Jul 11 00:12:53.493080 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:12:53.493325 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 11 00:12:53.495617 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:12:53.528990 augenrules[1625]: No rules Jul 11 00:12:53.531229 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:12:53.532634 sudo[1603]: pam_unix(sudo:session): session closed for user root Jul 11 00:12:53.534622 sshd[1600]: pam_unix(sshd:session): session closed for user core Jul 11 00:12:53.538982 systemd[1]: sshd@5-10.0.0.36:22-10.0.0.1:39412.service: Deactivated successfully. Jul 11 00:12:53.540734 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:12:53.541302 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:12:53.542337 systemd-logind[1439]: Removed session 6. Jul 11 00:12:53.580943 systemd[1]: Started sshd@6-10.0.0.36:22-10.0.0.1:39428.service - OpenSSH per-connection server daemon (10.0.0.1:39428). Jul 11 00:12:53.617275 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 39428 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:12:53.618871 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:12:53.622689 systemd-logind[1439]: New session 7 of user core. Jul 11 00:12:53.632899 systemd[1]: Started session-7.scope - Session 7 of User core. 
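The audit-rules restart above ends with both auditctl and augenrules reporting "No rules", consistent with the two rules files having just been removed via sudo. A small verification sketch in the same spirit (assumptions: auditctl is in PATH and the host keeps its rules in /etc/audit/rules.d, as the sudo commands above do; listing loaded rules needs root):

```python
import pathlib
import subprocess

# Compare on-disk audit rule files with the kernel's loaded ruleset.
rules_dir = pathlib.Path("/etc/audit/rules.d")
on_disk = sorted(p.name for p in rules_dir.glob("*.rules"))
print("rule files on disk:", on_disk or "none")

try:
    # `auditctl -l` prints the active rules ("No rules" when empty, as above).
    out = subprocess.run(["auditctl", "-l"], capture_output=True, text=True)
    print("loaded rules:", out.stdout.strip() or out.stderr.strip())
except FileNotFoundError:
    print("auditctl not installed on this host")
```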
Jul 11 00:12:53.686335 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:12:53.686692 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:12:54.177985 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 00:12:54.178162 (dockerd)[1656]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:12:54.771738 dockerd[1656]: time="2025-07-11T00:12:54.771653481Z" level=info msg="Starting up" Jul 11 00:12:55.119001 systemd[1]: var-lib-docker-metacopy\x2dcheck248394175-merged.mount: Deactivated successfully. Jul 11 00:12:55.143163 dockerd[1656]: time="2025-07-11T00:12:55.143109103Z" level=info msg="Loading containers: start." Jul 11 00:12:55.264795 kernel: Initializing XFRM netlink socket Jul 11 00:12:55.344219 systemd-networkd[1388]: docker0: Link UP Jul 11 00:12:55.367314 dockerd[1656]: time="2025-07-11T00:12:55.367263432Z" level=info msg="Loading containers: done." Jul 11 00:12:55.384725 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1513540710-merged.mount: Deactivated successfully. Jul 11 00:12:55.388202 dockerd[1656]: time="2025-07-11T00:12:55.388160201Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:12:55.388333 dockerd[1656]: time="2025-07-11T00:12:55.388309902Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 11 00:12:55.388467 dockerd[1656]: time="2025-07-11T00:12:55.388446749Z" level=info msg="Daemon has completed initialization" Jul 11 00:12:55.427175 dockerd[1656]: time="2025-07-11T00:12:55.427094838Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:12:55.427350 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:12:56.420336 containerd[1453]: time="2025-07-11T00:12:56.420214772Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 11 00:12:57.156099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2329499391.mount: Deactivated successfully. 
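Once the daemon logs "API listen on /run/docker.sock" above, the Docker Engine API is reachable over that Unix socket. A stdlib-only probe sketch; GET /version is a standard Engine API endpoint, and root or docker-group membership is assumed for socket access:

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a Unix socket instead of TCP."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")            # Engine API version probe
print(conn.getresponse().read().decode())  # JSON with Version, ApiVersion, ...
```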
Jul 11 00:12:58.198706 containerd[1453]: time="2025-07-11T00:12:58.198622601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:58.199315 containerd[1453]: time="2025-07-11T00:12:58.199252863Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 11 00:12:58.200798 containerd[1453]: time="2025-07-11T00:12:58.200739221Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:58.203457 containerd[1453]: time="2025-07-11T00:12:58.203429436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:58.204748 containerd[1453]: time="2025-07-11T00:12:58.204697404Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.784339223s" Jul 11 00:12:58.204815 containerd[1453]: time="2025-07-11T00:12:58.204751545Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 11 00:12:58.205592 containerd[1453]: time="2025-07-11T00:12:58.205559661Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 11 00:12:59.431052 containerd[1453]: time="2025-07-11T00:12:59.430961928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:59.431784 containerd[1453]: time="2025-07-11T00:12:59.431686006Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 11 00:12:59.432849 containerd[1453]: time="2025-07-11T00:12:59.432821024Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:59.435664 containerd[1453]: time="2025-07-11T00:12:59.435610235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:12:59.436594 containerd[1453]: time="2025-07-11T00:12:59.436559365Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.23096517s" Jul 11 00:12:59.436638 containerd[1453]: time="2025-07-11T00:12:59.436594661Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 11 00:12:59.437406 
containerd[1453]: time="2025-07-11T00:12:59.437383871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 11 00:13:00.774920 containerd[1453]: time="2025-07-11T00:13:00.774846872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:00.775653 containerd[1453]: time="2025-07-11T00:13:00.775602238Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 11 00:13:00.777068 containerd[1453]: time="2025-07-11T00:13:00.777016851Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:00.786719 containerd[1453]: time="2025-07-11T00:13:00.786665997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:00.787699 containerd[1453]: time="2025-07-11T00:13:00.787667264Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.350252526s" Jul 11 00:13:00.787744 containerd[1453]: time="2025-07-11T00:13:00.787698974Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 11 00:13:00.788197 containerd[1453]: time="2025-07-11T00:13:00.788171019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 11 00:13:01.017263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:13:01.032003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:13:01.259164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:13:01.265289 (kubelet)[1875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:13:01.333522 kubelet[1875]: E0711 00:13:01.332817 1875 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:13:01.340060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:13:01.340303 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:13:02.167810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653734202.mount: Deactivated successfully. 
Jul 11 00:13:02.796539 containerd[1453]: time="2025-07-11T00:13:02.796460208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:02.797714 containerd[1453]: time="2025-07-11T00:13:02.797664707Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 11 00:13:02.799078 containerd[1453]: time="2025-07-11T00:13:02.799038694Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:02.801972 containerd[1453]: time="2025-07-11T00:13:02.801910650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:02.802750 containerd[1453]: time="2025-07-11T00:13:02.802715840Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.014513502s" Jul 11 00:13:02.802799 containerd[1453]: time="2025-07-11T00:13:02.802774169Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 11 00:13:02.803440 containerd[1453]: time="2025-07-11T00:13:02.803379935Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 00:13:03.324604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2067527991.mount: Deactivated successfully. 
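The pull records above quote both an image size and a wall-clock duration, so effective throughput falls straight out; the kube-proxy pull works out to roughly 14.6 MiB/s (note this conflates download and unpack time). A check using the figures quoted in the messages above:

```python
# (size in bytes, duration in seconds), copied from the pull messages above.
pulls = {
    "kube-apiserver:v1.32.6": (28795845, 1.784339223),
    "kube-controller-manager:v1.32.6": (26385746, 1.23096517),
    "kube-proxy:v1.32.6": (30894382, 2.014513502),
}

for image, (size_bytes, seconds) in pulls.items():
    mib_per_s = size_bytes / seconds / (1024 * 1024)
    print(f"{image}: {mib_per_s:.1f} MiB/s")
```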
Jul 11 00:13:04.207173 containerd[1453]: time="2025-07-11T00:13:04.207100152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:04.207905 containerd[1453]: time="2025-07-11T00:13:04.207833046Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 11 00:13:04.209167 containerd[1453]: time="2025-07-11T00:13:04.209110973Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:04.212378 containerd[1453]: time="2025-07-11T00:13:04.212351049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:04.213481 containerd[1453]: time="2025-07-11T00:13:04.213432417Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.410011024s" Jul 11 00:13:04.213481 containerd[1453]: time="2025-07-11T00:13:04.213465549Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 11 00:13:04.213988 containerd[1453]: time="2025-07-11T00:13:04.213933998Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:13:04.719192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9273810.mount: Deactivated successfully. 
Jul 11 00:13:04.724415 containerd[1453]: time="2025-07-11T00:13:04.724380647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:04.725116 containerd[1453]: time="2025-07-11T00:13:04.725019826Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 11 00:13:04.726125 containerd[1453]: time="2025-07-11T00:13:04.726058263Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:04.728282 containerd[1453]: time="2025-07-11T00:13:04.728244032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:04.729034 containerd[1453]: time="2025-07-11T00:13:04.728994129Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 513.66705ms" Jul 11 00:13:04.729034 containerd[1453]: time="2025-07-11T00:13:04.729024366Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 11 00:13:04.729604 containerd[1453]: time="2025-07-11T00:13:04.729579266Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 11 00:13:05.265369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062131971.mount: Deactivated successfully. Jul 11 00:13:07.741740 containerd[1453]: time="2025-07-11T00:13:07.741661576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:07.746519 containerd[1453]: time="2025-07-11T00:13:07.746469903Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 11 00:13:07.748280 containerd[1453]: time="2025-07-11T00:13:07.748233340Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:07.759479 containerd[1453]: time="2025-07-11T00:13:07.759429076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:07.761432 containerd[1453]: time="2025-07-11T00:13:07.761359016Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.031748782s" Jul 11 00:13:07.761555 containerd[1453]: time="2025-07-11T00:13:07.761429728Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 11 00:13:10.122255 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 11 00:13:10.134015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:13:10.162747 systemd[1]: Reloading requested from client PID 2032 ('systemctl') (unit session-7.scope)... Jul 11 00:13:10.162786 systemd[1]: Reloading... Jul 11 00:13:10.252145 zram_generator::config[2071]: No configuration found. Jul 11 00:13:10.484888 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:13:10.564183 systemd[1]: Reloading finished in 400 ms. Jul 11 00:13:10.614577 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 00:13:10.614700 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 00:13:10.615043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:13:10.616952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:13:10.790672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:13:10.796163 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:13:10.913025 kubelet[2120]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:13:10.913025 kubelet[2120]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:13:10.913025 kubelet[2120]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
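The deprecation warnings above all point at the same migration: --container-runtime-endpoint and --volume-plugin-dir move into the file named by --config, while --pod-infra-container-image is dropped outright in 1.35 (the warning says the image garbage collector will get the sandbox image from CRI instead). A hedged sketch of the config-file equivalents; containerRuntimeEndpoint and volumePluginDir are real KubeletConfiguration (v1beta1) fields, and the values are taken from elsewhere in this log rather than invented:

```python
import json

# Config-file equivalents of the deprecated kubelet flags warned about above.
migrated = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # replaces --container-runtime-endpoint (containerd serves this socket above)
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    # replaces --volume-plugin-dir (the flexvolume path probed further down)
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
}
print(json.dumps(migrated, indent=2))
```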
Jul 11 00:13:10.913553 kubelet[2120]: I0711 00:13:10.913198 2120 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:13:11.265902 kubelet[2120]: I0711 00:13:11.265738 2120 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 00:13:11.265902 kubelet[2120]: I0711 00:13:11.265803 2120 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:13:11.266158 kubelet[2120]: I0711 00:13:11.266127 2120 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 00:13:11.310784 kubelet[2120]: E0711 00:13:11.310712 2120 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:11.315632 kubelet[2120]: I0711 00:13:11.315601 2120 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:13:11.325747 kubelet[2120]: E0711 00:13:11.325695 2120 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:13:11.325747 kubelet[2120]: I0711 00:13:11.325740 2120 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:13:11.331117 kubelet[2120]: I0711 00:13:11.331078 2120 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:13:11.332988 kubelet[2120]: I0711 00:13:11.332926 2120 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:13:11.333174 kubelet[2120]: I0711 00:13:11.332972 2120 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:13:11.333368 kubelet[2120]: I0711 00:13:11.333179 2120 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:13:11.333368 kubelet[2120]: I0711 00:13:11.333188 2120 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 00:13:11.333368 kubelet[2120]: I0711 00:13:11.333347 2120 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:13:11.338415 kubelet[2120]: I0711 00:13:11.338373 2120 kubelet.go:446] "Attempting to sync node with API server" Jul 11 00:13:11.338415 kubelet[2120]: I0711 00:13:11.338407 2120 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:13:11.338508 kubelet[2120]: I0711 00:13:11.338433 2120 kubelet.go:352] "Adding apiserver pod source" Jul 11 00:13:11.338508 kubelet[2120]: I0711 00:13:11.338446 2120 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:13:11.341648 kubelet[2120]: W0711 00:13:11.341582 2120 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 11 00:13:11.341720 kubelet[2120]: E0711 00:13:11.341663 2120 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:11.342722 kubelet[2120]: I0711 00:13:11.342650 2120 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:13:11.342938 kubelet[2120]: W0711 00:13:11.342900 2120 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 11 00:13:11.342991 kubelet[2120]: E0711 00:13:11.342959 2120 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:11.343305 kubelet[2120]: I0711 00:13:11.343274 2120 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:13:11.344165 kubelet[2120]: W0711 00:13:11.344135 2120 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 00:13:11.346839 kubelet[2120]: I0711 00:13:11.346805 2120 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:13:11.346895 kubelet[2120]: I0711 00:13:11.346860 2120 server.go:1287] "Started kubelet" Jul 11 00:13:11.347122 kubelet[2120]: I0711 00:13:11.347094 2120 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:13:11.347254 kubelet[2120]: I0711 00:13:11.347183 2120 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:13:11.348289 kubelet[2120]: I0711 00:13:11.347591 2120 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:13:11.348289 kubelet[2120]: I0711 00:13:11.348097 2120 server.go:479] "Adding debug handlers to kubelet server" Jul 11 00:13:11.349271 kubelet[2120]: I0711 00:13:11.349241 2120 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:13:11.350735 kubelet[2120]: I0711 00:13:11.349608 2120 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:13:11.350735 kubelet[2120]: E0711 00:13:11.349959 2120 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:13:11.350735 kubelet[2120]: I0711 00:13:11.349990 2120 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:13:11.350735 kubelet[2120]: I0711 00:13:11.350155 2120 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:13:11.350735 kubelet[2120]: I0711 00:13:11.350207 2120 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:13:11.350735 kubelet[2120]: W0711 00:13:11.350579 2120 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 11 00:13:11.350735 kubelet[2120]: E0711 00:13:11.350629 2120 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection 
refused" logger="UnhandledError" Jul 11 00:13:11.351094 kubelet[2120]: I0711 00:13:11.351048 2120 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:13:11.351385 kubelet[2120]: I0711 00:13:11.351126 2120 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:13:11.352266 kubelet[2120]: E0711 00:13:11.352234 2120 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:13:11.352881 kubelet[2120]: E0711 00:13:11.352851 2120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="200ms" Jul 11 00:13:11.358529 kubelet[2120]: I0711 00:13:11.358490 2120 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:13:11.359463 kubelet[2120]: E0711 00:13:11.357041 2120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a0f6a3a5cd1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:13:11.346830545 +0000 UTC m=+0.485905558,LastTimestamp:2025-07-11 00:13:11.346830545 +0000 UTC m=+0.485905558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:13:11.374561 kubelet[2120]: I0711 00:13:11.374476 2120 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:13:11.375844 kubelet[2120]: I0711 00:13:11.375175 2120 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:13:11.375844 kubelet[2120]: I0711 00:13:11.375213 2120 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:13:11.375844 kubelet[2120]: I0711 00:13:11.375229 2120 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:13:11.376374 kubelet[2120]: I0711 00:13:11.376355 2120 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 00:13:11.376422 kubelet[2120]: I0711 00:13:11.376387 2120 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 00:13:11.376422 kubelet[2120]: I0711 00:13:11.376414 2120 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 11 00:13:11.376422 kubelet[2120]: I0711 00:13:11.376421 2120 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 00:13:11.376510 kubelet[2120]: E0711 00:13:11.376465 2120 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:13:11.376983 kubelet[2120]: W0711 00:13:11.376934 2120 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 11 00:13:11.377035 kubelet[2120]: E0711 00:13:11.376995 2120 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:11.380378 kubelet[2120]: I0711 00:13:11.380357 2120 policy_none.go:49] "None policy: Start" Jul 11 00:13:11.380378 kubelet[2120]: I0711 00:13:11.380381 2120 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:13:11.380452 kubelet[2120]: I0711 00:13:11.380393 2120 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:13:11.391558 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 11 00:13:11.407386 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 00:13:11.410628 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 11 00:13:11.421725 kubelet[2120]: I0711 00:13:11.421680 2120 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:13:11.421955 kubelet[2120]: I0711 00:13:11.421930 2120 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:13:11.422031 kubelet[2120]: I0711 00:13:11.421956 2120 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:13:11.422493 kubelet[2120]: I0711 00:13:11.422230 2120 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:13:11.422929 kubelet[2120]: E0711 00:13:11.422902 2120 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 11 00:13:11.422986 kubelet[2120]: E0711 00:13:11.422955 2120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:13:11.485425 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 11 00:13:11.509375 kubelet[2120]: E0711 00:13:11.509339 2120 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:13:11.512665 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
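The three slices created above are the systemd cgroup driver's QoS hierarchy: kubepods.slice is the parent for all pods, with burstable and besteffort children (Guaranteed pods sit directly under the parent), and per-pod slices such as the kubepods-burstable-pod<uid>.slice units that follow are nested beneath them. A small sketch, assuming cgroup v2 mounted at /sys/fs/cgroup as on this image:

```go
// qos_slices.go - lists the QoS parent slices the kubelet just asked
// systemd to create. Paths assume cgroup v2 at /sys/fs/cgroup.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	for _, slice := range []string{
		"kubepods.slice",                           // all pods; Guaranteed pods live here directly
		"kubepods.slice/kubepods-burstable.slice",  // Burstable QoS pods
		"kubepods.slice/kubepods-besteffort.slice", // BestEffort QoS pods
	} {
		dir := filepath.Join("/sys/fs/cgroup", slice)
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println(dir, "->", err)
			continue
		}
		fmt.Printf("%s: %d child cgroups\n", dir, len(entries))
	}
}
```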
Jul 11 00:13:11.522223 kubelet[2120]: E0711 00:13:11.522109 2120 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:13:11.523664 kubelet[2120]: I0711 00:13:11.523645 2120 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:13:11.524103 kubelet[2120]: E0711 00:13:11.524074 2120 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jul 11 00:13:11.526476 systemd[1]: Created slice kubepods-burstable-podc385a3a5bf871a5a13247fc8a81d52e5.slice - libcontainer container kubepods-burstable-podc385a3a5bf871a5a13247fc8a81d52e5.slice. Jul 11 00:13:11.528316 kubelet[2120]: E0711 00:13:11.528282 2120 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:13:11.551460 kubelet[2120]: I0711 00:13:11.551416 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:11.551533 kubelet[2120]: I0711 00:13:11.551479 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:11.551561 kubelet[2120]: I0711 00:13:11.551543 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c385a3a5bf871a5a13247fc8a81d52e5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c385a3a5bf871a5a13247fc8a81d52e5\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:11.551934 kubelet[2120]: I0711 00:13:11.551581 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c385a3a5bf871a5a13247fc8a81d52e5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c385a3a5bf871a5a13247fc8a81d52e5\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:11.551934 kubelet[2120]: I0711 00:13:11.551937 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:11.552033 kubelet[2120]: I0711 00:13:11.551973 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:11.552033 kubelet[2120]: I0711 00:13:11.551997 2120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:11.552033 kubelet[2120]: I0711 00:13:11.552015 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:11.552145 kubelet[2120]: I0711 00:13:11.552053 2120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c385a3a5bf871a5a13247fc8a81d52e5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c385a3a5bf871a5a13247fc8a81d52e5\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:11.554024 kubelet[2120]: E0711 00:13:11.553992 2120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="400ms" Jul 11 00:13:11.726365 kubelet[2120]: I0711 00:13:11.726326 2120 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:13:11.726732 kubelet[2120]: E0711 00:13:11.726700 2120 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jul 11 00:13:11.810728 kubelet[2120]: E0711 00:13:11.810588 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:11.811564 containerd[1453]: time="2025-07-11T00:13:11.811505296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 11 00:13:11.823545 kubelet[2120]: E0711 00:13:11.823509 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:11.824079 containerd[1453]: time="2025-07-11T00:13:11.824041886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 11 00:13:11.829229 kubelet[2120]: E0711 00:13:11.829209 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:11.829535 containerd[1453]: time="2025-07-11T00:13:11.829508929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c385a3a5bf871a5a13247fc8a81d52e5,Namespace:kube-system,Attempt:0,}" Jul 11 00:13:11.954608 kubelet[2120]: E0711 00:13:11.954546 2120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="800ms" Jul 11 
00:13:12.129024 kubelet[2120]: I0711 00:13:12.128881 2120 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:13:12.129345 kubelet[2120]: E0711 00:13:12.129314 2120 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jul 11 00:13:12.192467 kubelet[2120]: W0711 00:13:12.192384 2120 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 11 00:13:12.192467 kubelet[2120]: E0711 00:13:12.192462 2120 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:12.348064 kubelet[2120]: W0711 00:13:12.347993 2120 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 11 00:13:12.348189 kubelet[2120]: E0711 00:13:12.348075 2120 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:12.386345 kubelet[2120]: W0711 00:13:12.386211 2120 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 11 00:13:12.386345 kubelet[2120]: E0711 00:13:12.386286 2120 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:12.411284 kubelet[2120]: W0711 00:13:12.411233 2120 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 11 00:13:12.411284 kubelet[2120]: E0711 00:13:12.411277 2120 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:12.567108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3875869794.mount: Deactivated successfully. 
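Note the interval on the "Failed to ensure lease exists, will retry" records: 200ms, then 400ms, then 800ms (1.6s follows below). The node-lease controller backs off exponentially while the API server stays unreachable. An illustrative sketch of the doubling, with an assumed ceiling:

```go
// backoff.go - illustrative only: the doubling retry interval visible in
// the lease-controller records (200ms, 400ms, 800ms, 1.6s, ...).
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // assumed ceiling, purely for the sketch
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: next retry in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```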
Jul 11 00:13:12.576984 containerd[1453]: time="2025-07-11T00:13:12.576934759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:13:12.580545 containerd[1453]: time="2025-07-11T00:13:12.580506107Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:13:12.581635 containerd[1453]: time="2025-07-11T00:13:12.581588677Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:13:12.582416 containerd[1453]: time="2025-07-11T00:13:12.582391002Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:13:12.583730 containerd[1453]: time="2025-07-11T00:13:12.583669159Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:13:12.586300 containerd[1453]: time="2025-07-11T00:13:12.586267521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:13:12.587302 containerd[1453]: time="2025-07-11T00:13:12.587225177Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 11 00:13:12.589314 containerd[1453]: time="2025-07-11T00:13:12.589288867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:13:12.590091 containerd[1453]: time="2025-07-11T00:13:12.590062258Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 778.458448ms" Jul 11 00:13:12.593740 containerd[1453]: time="2025-07-11T00:13:12.593702384Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 764.145376ms" Jul 11 00:13:12.596115 containerd[1453]: time="2025-07-11T00:13:12.596064915Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 771.946024ms" Jul 11 00:13:12.756359 kubelet[2120]: E0711 00:13:12.756217 2120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="1.6s" Jul 11 
00:13:12.859231 containerd[1453]: time="2025-07-11T00:13:12.858830514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:12.859231 containerd[1453]: time="2025-07-11T00:13:12.858955909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:12.859231 containerd[1453]: time="2025-07-11T00:13:12.858994662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:12.859231 containerd[1453]: time="2025-07-11T00:13:12.859135777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:12.860071 containerd[1453]: time="2025-07-11T00:13:12.859987434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:12.860595 containerd[1453]: time="2025-07-11T00:13:12.860270675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:12.860595 containerd[1453]: time="2025-07-11T00:13:12.860353150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:12.860595 containerd[1453]: time="2025-07-11T00:13:12.860541373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:12.866942 containerd[1453]: time="2025-07-11T00:13:12.866721673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:12.866942 containerd[1453]: time="2025-07-11T00:13:12.866800831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:12.866942 containerd[1453]: time="2025-07-11T00:13:12.866816140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:12.867186 containerd[1453]: time="2025-07-11T00:13:12.867084123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:12.912090 systemd[1]: Started cri-containerd-e1ae2b6dceea2e9b5f411c3ca9f42aaaf23bb4412ed14b8ae3f5c734841c165b.scope - libcontainer container e1ae2b6dceea2e9b5f411c3ca9f42aaaf23bb4412ed14b8ae3f5c734841c165b. Jul 11 00:13:12.924793 systemd[1]: Started cri-containerd-61b07aa383d6a10b54520e5e2d6cde279e5bd832d0b37baa9f2024dbb6c63eb2.scope - libcontainer container 61b07aa383d6a10b54520e5e2d6cde279e5bd832d0b37baa9f2024dbb6c63eb2. Jul 11 00:13:12.928920 systemd[1]: Started cri-containerd-559d9ecdc4eadc2e50cc5ffb0f1e95bbb1a5a73a262a014920c53327cbbc1c1a.scope - libcontainer container 559d9ecdc4eadc2e50cc5ffb0f1e95bbb1a5a73a262a014920c53327cbbc1c1a. 
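The three pause:3.8 pulls above and the three cri-containerd-*.scope units started here are the pod sandboxes: one pause container per control-plane pod, whose only job is to hold the pod's namespaces open while the real containers come and go. A rough Go analog of that behavior (the real pause binary is a tiny C program, not this):

```go
// pause_analog.go - not the actual pause image, just an analog: the
// sandbox container only has to sleep until it is terminated.
package main

import (
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
	<-sig // block doing nothing, like pause:3.8
}
```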
Jul 11 00:13:12.932891 kubelet[2120]: I0711 00:13:12.932706 2120 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:13:12.933417 kubelet[2120]: E0711 00:13:12.933386 2120 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jul 11 00:13:12.984682 containerd[1453]: time="2025-07-11T00:13:12.983243984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1ae2b6dceea2e9b5f411c3ca9f42aaaf23bb4412ed14b8ae3f5c734841c165b\"" Jul 11 00:13:12.984961 kubelet[2120]: E0711 00:13:12.984742 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:12.988200 containerd[1453]: time="2025-07-11T00:13:12.988152850Z" level=info msg="CreateContainer within sandbox \"e1ae2b6dceea2e9b5f411c3ca9f42aaaf23bb4412ed14b8ae3f5c734841c165b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:13:12.988887 containerd[1453]: time="2025-07-11T00:13:12.988845709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"61b07aa383d6a10b54520e5e2d6cde279e5bd832d0b37baa9f2024dbb6c63eb2\"" Jul 11 00:13:12.990778 containerd[1453]: time="2025-07-11T00:13:12.990726406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c385a3a5bf871a5a13247fc8a81d52e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"559d9ecdc4eadc2e50cc5ffb0f1e95bbb1a5a73a262a014920c53327cbbc1c1a\"" Jul 11 00:13:12.991085 kubelet[2120]: E0711 00:13:12.991052 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:12.991784 kubelet[2120]: E0711 00:13:12.991660 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:12.993118 containerd[1453]: time="2025-07-11T00:13:12.993087724Z" level=info msg="CreateContainer within sandbox \"61b07aa383d6a10b54520e5e2d6cde279e5bd832d0b37baa9f2024dbb6c63eb2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:13:12.993735 containerd[1453]: time="2025-07-11T00:13:12.993702938Z" level=info msg="CreateContainer within sandbox \"559d9ecdc4eadc2e50cc5ffb0f1e95bbb1a5a73a262a014920c53327cbbc1c1a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:13:13.337963 kubelet[2120]: E0711 00:13:13.337881 2120 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:13:13.561268 containerd[1453]: time="2025-07-11T00:13:13.561213062Z" level=info msg="CreateContainer within sandbox \"e1ae2b6dceea2e9b5f411c3ca9f42aaaf23bb4412ed14b8ae3f5c734841c165b\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d770b4cc9e0682b21245543fe0729dedf8a67776cd73e98463b6bf74d5176c7d\"" Jul 11 00:13:13.561949 containerd[1453]: time="2025-07-11T00:13:13.561919627Z" level=info msg="StartContainer for \"d770b4cc9e0682b21245543fe0729dedf8a67776cd73e98463b6bf74d5176c7d\"" Jul 11 00:13:13.579676 containerd[1453]: time="2025-07-11T00:13:13.579631313Z" level=info msg="CreateContainer within sandbox \"61b07aa383d6a10b54520e5e2d6cde279e5bd832d0b37baa9f2024dbb6c63eb2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"13eb7e9a18964cbde8a51a5ffa3b0038d7f835a61f0c0685ba5f45803d39cffb\"" Jul 11 00:13:13.580364 containerd[1453]: time="2025-07-11T00:13:13.580342908Z" level=info msg="StartContainer for \"13eb7e9a18964cbde8a51a5ffa3b0038d7f835a61f0c0685ba5f45803d39cffb\"" Jul 11 00:13:13.586719 containerd[1453]: time="2025-07-11T00:13:13.586678639Z" level=info msg="CreateContainer within sandbox \"559d9ecdc4eadc2e50cc5ffb0f1e95bbb1a5a73a262a014920c53327cbbc1c1a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fdf1c76cfc13038dfd756146cc112a74d820536ebac44255a7177ce784b12b2e\"" Jul 11 00:13:13.588196 containerd[1453]: time="2025-07-11T00:13:13.588090327Z" level=info msg="StartContainer for \"fdf1c76cfc13038dfd756146cc112a74d820536ebac44255a7177ce784b12b2e\"" Jul 11 00:13:13.595998 systemd[1]: Started cri-containerd-d770b4cc9e0682b21245543fe0729dedf8a67776cd73e98463b6bf74d5176c7d.scope - libcontainer container d770b4cc9e0682b21245543fe0729dedf8a67776cd73e98463b6bf74d5176c7d. Jul 11 00:13:13.619097 systemd[1]: Started cri-containerd-13eb7e9a18964cbde8a51a5ffa3b0038d7f835a61f0c0685ba5f45803d39cffb.scope - libcontainer container 13eb7e9a18964cbde8a51a5ffa3b0038d7f835a61f0c0685ba5f45803d39cffb. Jul 11 00:13:13.622381 systemd[1]: Started cri-containerd-fdf1c76cfc13038dfd756146cc112a74d820536ebac44255a7177ce784b12b2e.scope - libcontainer container fdf1c76cfc13038dfd756146cc112a74d820536ebac44255a7177ce784b12b2e. 
Jul 11 00:13:13.648819 containerd[1453]: time="2025-07-11T00:13:13.648729387Z" level=info msg="StartContainer for \"d770b4cc9e0682b21245543fe0729dedf8a67776cd73e98463b6bf74d5176c7d\" returns successfully" Jul 11 00:13:13.677243 containerd[1453]: time="2025-07-11T00:13:13.677188127Z" level=info msg="StartContainer for \"13eb7e9a18964cbde8a51a5ffa3b0038d7f835a61f0c0685ba5f45803d39cffb\" returns successfully" Jul 11 00:13:13.677393 containerd[1453]: time="2025-07-11T00:13:13.677285019Z" level=info msg="StartContainer for \"fdf1c76cfc13038dfd756146cc112a74d820536ebac44255a7177ce784b12b2e\" returns successfully" Jul 11 00:13:14.402268 kubelet[2120]: E0711 00:13:14.402193 2120 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:13:14.402691 kubelet[2120]: E0711 00:13:14.402392 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:14.403251 kubelet[2120]: E0711 00:13:14.402709 2120 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:13:14.403251 kubelet[2120]: E0711 00:13:14.402915 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:14.403629 kubelet[2120]: E0711 00:13:14.403609 2120 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:13:14.403718 kubelet[2120]: E0711 00:13:14.403703 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:14.534723 kubelet[2120]: I0711 00:13:14.534662 2120 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:13:15.175525 kubelet[2120]: E0711 00:13:15.175490 2120 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:13:15.270993 kubelet[2120]: I0711 00:13:15.270903 2120 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:13:15.271146 kubelet[2120]: E0711 00:13:15.271022 2120 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 00:13:15.341977 kubelet[2120]: I0711 00:13:15.341880 2120 apiserver.go:52] "Watching apiserver" Jul 11 00:13:15.350671 kubelet[2120]: I0711 00:13:15.350586 2120 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:13:15.353156 kubelet[2120]: I0711 00:13:15.353120 2120 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:15.358390 kubelet[2120]: E0711 00:13:15.358361 2120 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:15.358390 kubelet[2120]: I0711 00:13:15.358387 2120 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:15.359772 
kubelet[2120]: E0711 00:13:15.359719 2120 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:15.359816 kubelet[2120]: I0711 00:13:15.359787 2120 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:15.361120 kubelet[2120]: E0711 00:13:15.361093 2120 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:15.405200 kubelet[2120]: I0711 00:13:15.405162 2120 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:15.405741 kubelet[2120]: I0711 00:13:15.405313 2120 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:15.405741 kubelet[2120]: I0711 00:13:15.405641 2120 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:15.407913 kubelet[2120]: E0711 00:13:15.407788 2120 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:15.408007 kubelet[2120]: E0711 00:13:15.407956 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:15.410348 kubelet[2120]: E0711 00:13:15.410292 2120 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:15.410488 kubelet[2120]: E0711 00:13:15.410417 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:15.410488 kubelet[2120]: E0711 00:13:15.410481 2120 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:15.410592 kubelet[2120]: E0711 00:13:15.410570 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:16.407208 kubelet[2120]: I0711 00:13:16.407165 2120 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:16.407786 kubelet[2120]: I0711 00:13:16.407361 2120 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:16.414279 kubelet[2120]: E0711 00:13:16.414226 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:16.414460 kubelet[2120]: E0711 00:13:16.414226 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:17.409083 
kubelet[2120]: E0711 00:13:17.409030 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:17.409754 kubelet[2120]: E0711 00:13:17.409137 2120 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:17.754753 systemd[1]: Reloading requested from client PID 2398 ('systemctl') (unit session-7.scope)... Jul 11 00:13:17.754803 systemd[1]: Reloading... Jul 11 00:13:17.840895 zram_generator::config[2437]: No configuration found. Jul 11 00:13:17.956963 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:13:18.051412 systemd[1]: Reloading finished in 296 ms. Jul 11 00:13:18.102999 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:13:18.132436 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:13:18.132790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:13:18.132855 systemd[1]: kubelet.service: Consumed 1.061s CPU time, 136.2M memory peak, 0B memory swap peak. Jul 11 00:13:18.145201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:13:18.320446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:13:18.326141 (kubelet)[2482]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:13:18.398073 kubelet[2482]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:13:18.398073 kubelet[2482]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:13:18.398073 kubelet[2482]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:13:18.398073 kubelet[2482]: I0711 00:13:18.398011 2482 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:13:18.406742 kubelet[2482]: I0711 00:13:18.406683 2482 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 00:13:18.406742 kubelet[2482]: I0711 00:13:18.406721 2482 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:13:18.407047 kubelet[2482]: I0711 00:13:18.407021 2482 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 00:13:18.408325 kubelet[2482]: I0711 00:13:18.408299 2482 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 11 00:13:18.412654 kubelet[2482]: I0711 00:13:18.412596 2482 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:13:18.421658 kubelet[2482]: E0711 00:13:18.421609 2482 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:13:18.421658 kubelet[2482]: I0711 00:13:18.421647 2482 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:13:18.428122 kubelet[2482]: I0711 00:13:18.428075 2482 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 11 00:13:18.428405 kubelet[2482]: I0711 00:13:18.428354 2482 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:13:18.428596 kubelet[2482]: I0711 00:13:18.428389 2482 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:13:18.428685 kubelet[2482]: I0711 00:13:18.428598 2482 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:13:18.428685 kubelet[2482]: I0711 00:13:18.428608 2482 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 00:13:18.428685 kubelet[2482]: I0711 00:13:18.428660 2482 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:13:18.428935 kubelet[2482]: I0711 00:13:18.428908 2482 kubelet.go:446] "Attempting to sync node with API server" Jul 11 00:13:18.428980 kubelet[2482]: I0711 00:13:18.428945 2482 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:13:18.428980 kubelet[2482]: I0711 00:13:18.428968 2482 kubelet.go:352] "Adding apiserver pod source" Jul 11 00:13:18.429052 kubelet[2482]: I0711 00:13:18.428981 2482 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Jul 11 00:13:18.432784 kubelet[2482]: I0711 00:13:18.430281 2482 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:13:18.432784 kubelet[2482]: I0711 00:13:18.430750 2482 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:13:18.432784 kubelet[2482]: I0711 00:13:18.431335 2482 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:13:18.432784 kubelet[2482]: I0711 00:13:18.431367 2482 server.go:1287] "Started kubelet" Jul 11 00:13:18.433604 kubelet[2482]: I0711 00:13:18.433039 2482 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:13:18.433604 kubelet[2482]: I0711 00:13:18.433404 2482 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:13:18.433604 kubelet[2482]: I0711 00:13:18.433427 2482 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:13:18.436365 kubelet[2482]: I0711 00:13:18.436335 2482 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:13:18.436442 kubelet[2482]: I0711 00:13:18.436393 2482 server.go:479] "Adding debug handlers to kubelet server" Jul 11 00:13:18.439646 kubelet[2482]: I0711 00:13:18.439624 2482 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:13:18.443343 kubelet[2482]: I0711 00:13:18.443316 2482 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:13:18.443502 kubelet[2482]: E0711 00:13:18.443477 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:13:18.444894 kubelet[2482]: I0711 00:13:18.443962 2482 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:13:18.444894 kubelet[2482]: I0711 00:13:18.444130 2482 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:13:18.446294 kubelet[2482]: I0711 00:13:18.446269 2482 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:13:18.446451 kubelet[2482]: I0711 00:13:18.446428 2482 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:13:18.451788 kubelet[2482]: I0711 00:13:18.449719 2482 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:13:18.451788 kubelet[2482]: E0711 00:13:18.451130 2482 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:13:18.455028 kubelet[2482]: I0711 00:13:18.454984 2482 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:13:18.456251 kubelet[2482]: I0711 00:13:18.456224 2482 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 00:13:18.456290 kubelet[2482]: I0711 00:13:18.456257 2482 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 00:13:18.456290 kubelet[2482]: I0711 00:13:18.456279 2482 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
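The restarted kubelet (PID 2482) replays the same startup: it re-reads the static pod path registered above, /etc/kubernetes/manifests, and will shortly hit "already exists" when re-creating the mirror pods for them. A stdlib sketch of watching that directory (the real kubelet re-lists it on a configurable file-check interval; the cadence here is illustrative):

```go
// static_pods.go - poll the static pod path for new manifests. Any *.yaml
// dropped into /etc/kubernetes/manifests becomes a static pod, for which
// the kubelet later creates a read-only mirror pod in the API.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func main() {
	dir := "/etc/kubernetes/manifests"
	seen := map[string]bool{}
	for {
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, e := range entries {
			if filepath.Ext(e.Name()) == ".yaml" && !seen[e.Name()] {
				seen[e.Name()] = true
				fmt.Println("new static pod manifest:", e.Name())
			}
		}
		time.Sleep(2 * time.Second) // illustrative polling cadence
	}
}
```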
Jul 11 00:13:18.456290 kubelet[2482]: I0711 00:13:18.456287 2482 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 00:13:18.456362 kubelet[2482]: E0711 00:13:18.456339 2482 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:13:18.491527 kubelet[2482]: I0711 00:13:18.491493 2482 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:13:18.491751 kubelet[2482]: I0711 00:13:18.491706 2482 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:13:18.491751 kubelet[2482]: I0711 00:13:18.491734 2482 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:13:18.491973 kubelet[2482]: I0711 00:13:18.491909 2482 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:13:18.491973 kubelet[2482]: I0711 00:13:18.491920 2482 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:13:18.491973 kubelet[2482]: I0711 00:13:18.491937 2482 policy_none.go:49] "None policy: Start" Jul 11 00:13:18.491973 kubelet[2482]: I0711 00:13:18.491954 2482 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:13:18.491973 kubelet[2482]: I0711 00:13:18.491967 2482 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:13:18.492144 kubelet[2482]: I0711 00:13:18.492060 2482 state_mem.go:75] "Updated machine memory state" Jul 11 00:13:18.499491 kubelet[2482]: I0711 00:13:18.499455 2482 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:13:18.499677 kubelet[2482]: I0711 00:13:18.499653 2482 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:13:18.499800 kubelet[2482]: I0711 00:13:18.499667 2482 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:13:18.499957 kubelet[2482]: I0711 00:13:18.499930 2482 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:13:18.500987 kubelet[2482]: E0711 00:13:18.500877 2482 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 11 00:13:18.557221 kubelet[2482]: I0711 00:13:18.557161 2482 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:18.557385 kubelet[2482]: I0711 00:13:18.557161 2482 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:18.557744 kubelet[2482]: I0711 00:13:18.557701 2482 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:18.605159 kubelet[2482]: I0711 00:13:18.605055 2482 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:13:18.645260 kubelet[2482]: I0711 00:13:18.645204 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c385a3a5bf871a5a13247fc8a81d52e5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c385a3a5bf871a5a13247fc8a81d52e5\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:18.645260 kubelet[2482]: I0711 00:13:18.645252 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c385a3a5bf871a5a13247fc8a81d52e5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c385a3a5bf871a5a13247fc8a81d52e5\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:18.645548 kubelet[2482]: I0711 00:13:18.645279 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:18.645548 kubelet[2482]: I0711 00:13:18.645301 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:18.645548 kubelet[2482]: I0711 00:13:18.645326 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c385a3a5bf871a5a13247fc8a81d52e5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c385a3a5bf871a5a13247fc8a81d52e5\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:18.645548 kubelet[2482]: I0711 00:13:18.645348 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:18.645548 kubelet[2482]: I0711 00:13:18.645368 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:18.645709 kubelet[2482]: I0711 00:13:18.645391 2482 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:18.645709 kubelet[2482]: I0711 00:13:18.645442 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:13:18.657218 kubelet[2482]: E0711 00:13:18.657191 2482 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:18.707548 kubelet[2482]: E0711 00:13:18.707434 2482 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:18.711008 kubelet[2482]: I0711 00:13:18.710973 2482 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 11 00:13:18.711091 kubelet[2482]: I0711 00:13:18.711084 2482 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:13:18.942889 kubelet[2482]: E0711 00:13:18.942726 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:18.958156 kubelet[2482]: E0711 00:13:18.958099 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:19.008864 kubelet[2482]: E0711 00:13:19.008805 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:19.430553 kubelet[2482]: I0711 00:13:19.430419 2482 apiserver.go:52] "Watching apiserver" Jul 11 00:13:19.444953 kubelet[2482]: I0711 00:13:19.444913 2482 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:13:19.474214 kubelet[2482]: E0711 00:13:19.474176 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:19.475415 kubelet[2482]: I0711 00:13:19.474810 2482 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:19.475415 kubelet[2482]: I0711 00:13:19.475101 2482 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:19.485499 kubelet[2482]: E0711 00:13:19.484234 2482 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:13:19.485499 kubelet[2482]: E0711 00:13:19.484414 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:19.486088 kubelet[2482]: E0711 00:13:19.485825 2482 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:13:19.486088 kubelet[2482]: E0711 00:13:19.486014 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:19.502780 kubelet[2482]: I0711 00:13:19.502685 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.502555851 podStartE2EDuration="3.502555851s" podCreationTimestamp="2025-07-11 00:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:13:19.502380512 +0000 UTC m=+1.171754419" watchObservedRunningTime="2025-07-11 00:13:19.502555851 +0000 UTC m=+1.171929758" Jul 11 00:13:19.529494 kubelet[2482]: I0711 00:13:19.529433 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.529412877 podStartE2EDuration="3.529412877s" podCreationTimestamp="2025-07-11 00:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:13:19.529159202 +0000 UTC m=+1.198533109" watchObservedRunningTime="2025-07-11 00:13:19.529412877 +0000 UTC m=+1.198786784" Jul 11 00:13:20.476361 kubelet[2482]: E0711 00:13:20.476324 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:20.476860 kubelet[2482]: E0711 00:13:20.476478 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:21.477958 kubelet[2482]: E0711 00:13:21.477914 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:23.554139 kubelet[2482]: E0711 00:13:23.554090 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:23.573093 kubelet[2482]: I0711 00:13:23.573009 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.572991298 podStartE2EDuration="5.572991298s" podCreationTimestamp="2025-07-11 00:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:13:19.543362077 +0000 UTC m=+1.212735974" watchObservedRunningTime="2025-07-11 00:13:23.572991298 +0000 UTC m=+5.242365206" Jul 11 00:13:24.067899 kubelet[2482]: I0711 00:13:24.067871 2482 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:13:24.068416 containerd[1453]: time="2025-07-11T00:13:24.068363348Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 11 00:13:24.068810 kubelet[2482]: I0711 00:13:24.068660 2482 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:13:24.482822 kubelet[2482]: E0711 00:13:24.482633 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:25.051026 systemd[1]: Created slice kubepods-besteffort-podf35bc2fa_0a16_4745_9681_2af76e67905a.slice - libcontainer container kubepods-besteffort-podf35bc2fa_0a16_4745_9681_2af76e67905a.slice. Jul 11 00:13:25.087286 kubelet[2482]: I0711 00:13:25.087213 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f35bc2fa-0a16-4745-9681-2af76e67905a-xtables-lock\") pod \"kube-proxy-285xp\" (UID: \"f35bc2fa-0a16-4745-9681-2af76e67905a\") " pod="kube-system/kube-proxy-285xp" Jul 11 00:13:25.087286 kubelet[2482]: I0711 00:13:25.087267 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f35bc2fa-0a16-4745-9681-2af76e67905a-lib-modules\") pod \"kube-proxy-285xp\" (UID: \"f35bc2fa-0a16-4745-9681-2af76e67905a\") " pod="kube-system/kube-proxy-285xp" Jul 11 00:13:25.087286 kubelet[2482]: I0711 00:13:25.087287 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f35bc2fa-0a16-4745-9681-2af76e67905a-kube-proxy\") pod \"kube-proxy-285xp\" (UID: \"f35bc2fa-0a16-4745-9681-2af76e67905a\") " pod="kube-system/kube-proxy-285xp" Jul 11 00:13:25.087809 kubelet[2482]: I0711 00:13:25.087304 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lh7g\" (UniqueName: \"kubernetes.io/projected/f35bc2fa-0a16-4745-9681-2af76e67905a-kube-api-access-7lh7g\") pod \"kube-proxy-285xp\" (UID: \"f35bc2fa-0a16-4745-9681-2af76e67905a\") " pod="kube-system/kube-proxy-285xp" Jul 11 00:13:25.174945 systemd[1]: Created slice kubepods-besteffort-pod4fab85c9_85ea_4647_9691_703e857ba2e3.slice - libcontainer container kubepods-besteffort-pod4fab85c9_85ea_4647_9691_703e857ba2e3.slice. 
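Among the kube-proxy volumes attached above, kube-api-access-7lh7g is the projected service-account volume; inside the pod it surfaces as the standard token, CA bundle, and namespace files. A sketch of what a process in that pod would read from the conventional mount path:

```go
// sa_token.go - read the projected service-account files from inside a pod.
// The mount path is the Kubernetes convention; sizes are printed instead of
// contents since the token is a credential.
package main

import (
	"fmt"
	"os"
)

func main() {
	base := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(base + "/" + f)
		if err != nil {
			fmt.Println(f, "->", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(b))
	}
}
```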
Jul 11 00:13:25.188038 kubelet[2482]: I0711 00:13:25.187974 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmhq4\" (UniqueName: \"kubernetes.io/projected/4fab85c9-85ea-4647-9691-703e857ba2e3-kube-api-access-pmhq4\") pod \"tigera-operator-747864d56d-6pm9f\" (UID: \"4fab85c9-85ea-4647-9691-703e857ba2e3\") " pod="tigera-operator/tigera-operator-747864d56d-6pm9f" Jul 11 00:13:25.188038 kubelet[2482]: I0711 00:13:25.188034 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4fab85c9-85ea-4647-9691-703e857ba2e3-var-lib-calico\") pod \"tigera-operator-747864d56d-6pm9f\" (UID: \"4fab85c9-85ea-4647-9691-703e857ba2e3\") " pod="tigera-operator/tigera-operator-747864d56d-6pm9f" Jul 11 00:13:25.363125 kubelet[2482]: E0711 00:13:25.362983 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:25.363600 containerd[1453]: time="2025-07-11T00:13:25.363539710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-285xp,Uid:f35bc2fa-0a16-4745-9681-2af76e67905a,Namespace:kube-system,Attempt:0,}" Jul 11 00:13:25.402706 containerd[1453]: time="2025-07-11T00:13:25.402543953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:25.402706 containerd[1453]: time="2025-07-11T00:13:25.402653581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:25.402706 containerd[1453]: time="2025-07-11T00:13:25.402667317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:25.402972 containerd[1453]: time="2025-07-11T00:13:25.402825879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:25.436913 systemd[1]: Started cri-containerd-2b842dbf10a434fa5452eb4c3a7f6b06b95e238d350f79da50e6cfe4e6322e24.scope - libcontainer container 2b842dbf10a434fa5452eb4c3a7f6b06b95e238d350f79da50e6cfe4e6322e24. 
Jul 11 00:13:25.471577 containerd[1453]: time="2025-07-11T00:13:25.471516977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-285xp,Uid:f35bc2fa-0a16-4745-9681-2af76e67905a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b842dbf10a434fa5452eb4c3a7f6b06b95e238d350f79da50e6cfe4e6322e24\"" Jul 11 00:13:25.474839 kubelet[2482]: E0711 00:13:25.474726 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:25.477313 containerd[1453]: time="2025-07-11T00:13:25.477259881Z" level=info msg="CreateContainer within sandbox \"2b842dbf10a434fa5452eb4c3a7f6b06b95e238d350f79da50e6cfe4e6322e24\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:13:25.479649 containerd[1453]: time="2025-07-11T00:13:25.479611517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-6pm9f,Uid:4fab85c9-85ea-4647-9691-703e857ba2e3,Namespace:tigera-operator,Attempt:0,}" Jul 11 00:13:25.523096 containerd[1453]: time="2025-07-11T00:13:25.523030925Z" level=info msg="CreateContainer within sandbox \"2b842dbf10a434fa5452eb4c3a7f6b06b95e238d350f79da50e6cfe4e6322e24\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c22b373e7b6f178b05ac26e287f1f61a67f1938fa84913d10ea5fbad0bd78226\"" Jul 11 00:13:25.523875 containerd[1453]: time="2025-07-11T00:13:25.523565533Z" level=info msg="StartContainer for \"c22b373e7b6f178b05ac26e287f1f61a67f1938fa84913d10ea5fbad0bd78226\"" Jul 11 00:13:25.541146 containerd[1453]: time="2025-07-11T00:13:25.540673341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:25.541146 containerd[1453]: time="2025-07-11T00:13:25.540898881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:25.541146 containerd[1453]: time="2025-07-11T00:13:25.540913729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:25.541146 containerd[1453]: time="2025-07-11T00:13:25.540995976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:25.557995 systemd[1]: Started cri-containerd-c22b373e7b6f178b05ac26e287f1f61a67f1938fa84913d10ea5fbad0bd78226.scope - libcontainer container c22b373e7b6f178b05ac26e287f1f61a67f1938fa84913d10ea5fbad0bd78226. Jul 11 00:13:25.561623 systemd[1]: Started cri-containerd-d7ba4865535ae20605029dc5f5813df12bcf91c670ae384286deb6382029ad73.scope - libcontainer container d7ba4865535ae20605029dc5f5813df12bcf91c670ae384286deb6382029ad73. 
Jul 11 00:13:25.604332 containerd[1453]: time="2025-07-11T00:13:25.604181684Z" level=info msg="StartContainer for \"c22b373e7b6f178b05ac26e287f1f61a67f1938fa84913d10ea5fbad0bd78226\" returns successfully" Jul 11 00:13:25.604332 containerd[1453]: time="2025-07-11T00:13:25.604275692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-6pm9f,Uid:4fab85c9-85ea-4647-9691-703e857ba2e3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d7ba4865535ae20605029dc5f5813df12bcf91c670ae384286deb6382029ad73\"" Jul 11 00:13:25.607244 containerd[1453]: time="2025-07-11T00:13:25.607213585Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 11 00:13:25.732079 kubelet[2482]: E0711 00:13:25.730704 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:26.488158 kubelet[2482]: E0711 00:13:26.488108 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:26.488746 kubelet[2482]: E0711 00:13:26.488707 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:26.498693 kubelet[2482]: I0711 00:13:26.498605 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-285xp" podStartSLOduration=1.498587186 podStartE2EDuration="1.498587186s" podCreationTimestamp="2025-07-11 00:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:13:26.497931178 +0000 UTC m=+8.167305085" watchObservedRunningTime="2025-07-11 00:13:26.498587186 +0000 UTC m=+8.167961093" Jul 11 00:13:26.941155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1546220403.mount: Deactivated successfully. 
Jul 11 00:13:27.490659 kubelet[2482]: E0711 00:13:27.490626 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:27.491270 kubelet[2482]: E0711 00:13:27.490986 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:27.874133 kubelet[2482]: E0711 00:13:27.873994 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:28.491626 kubelet[2482]: E0711 00:13:28.491580 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:29.414928 containerd[1453]: time="2025-07-11T00:13:29.414856632Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:29.415899 containerd[1453]: time="2025-07-11T00:13:29.415855057Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 11 00:13:29.417013 containerd[1453]: time="2025-07-11T00:13:29.416980112Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:29.425441 containerd[1453]: time="2025-07-11T00:13:29.425404465Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:29.426116 containerd[1453]: time="2025-07-11T00:13:29.426071891Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.818821035s" Jul 11 00:13:29.426116 containerd[1453]: time="2025-07-11T00:13:29.426110534Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 11 00:13:29.428510 containerd[1453]: time="2025-07-11T00:13:29.428466945Z" level=info msg="CreateContainer within sandbox \"d7ba4865535ae20605029dc5f5813df12bcf91c670ae384286deb6382029ad73\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 11 00:13:29.444585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556555576.mount: Deactivated successfully. 
Jul 11 00:13:29.450028 containerd[1453]: time="2025-07-11T00:13:29.449965434Z" level=info msg="CreateContainer within sandbox \"d7ba4865535ae20605029dc5f5813df12bcf91c670ae384286deb6382029ad73\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"074368fdb399218442d951e0cb8e6fbbfe178d00dbf1c044b78e536a17a10e6c\"" Jul 11 00:13:29.450615 containerd[1453]: time="2025-07-11T00:13:29.450577865Z" level=info msg="StartContainer for \"074368fdb399218442d951e0cb8e6fbbfe178d00dbf1c044b78e536a17a10e6c\"" Jul 11 00:13:29.487267 systemd[1]: run-containerd-runc-k8s.io-074368fdb399218442d951e0cb8e6fbbfe178d00dbf1c044b78e536a17a10e6c-runc.scSo0d.mount: Deactivated successfully. Jul 11 00:13:29.496196 systemd[1]: Started cri-containerd-074368fdb399218442d951e0cb8e6fbbfe178d00dbf1c044b78e536a17a10e6c.scope - libcontainer container 074368fdb399218442d951e0cb8e6fbbfe178d00dbf1c044b78e536a17a10e6c. Jul 11 00:13:29.533735 containerd[1453]: time="2025-07-11T00:13:29.533652410Z" level=info msg="StartContainer for \"074368fdb399218442d951e0cb8e6fbbfe178d00dbf1c044b78e536a17a10e6c\" returns successfully" Jul 11 00:13:33.489065 update_engine[1441]: I20250711 00:13:33.488948 1441 update_attempter.cc:509] Updating boot flags... Jul 11 00:13:33.561802 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2875) Jul 11 00:13:33.604383 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2878) Jul 11 00:13:33.632800 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2878) Jul 11 00:13:34.954943 sudo[1636]: pam_unix(sudo:session): session closed for user root Jul 11 00:13:34.963929 sshd[1633]: pam_unix(sshd:session): session closed for user core Jul 11 00:13:34.968677 systemd[1]: sshd@6-10.0.0.36:22-10.0.0.1:39428.service: Deactivated successfully. Jul 11 00:13:34.972332 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:13:34.972736 systemd[1]: session-7.scope: Consumed 4.913s CPU time, 158.4M memory peak, 0B memory swap peak. Jul 11 00:13:34.973598 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:13:34.975314 systemd-logind[1439]: Removed session 7. Jul 11 00:13:37.442557 kubelet[2482]: I0711 00:13:37.442466 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-6pm9f" podStartSLOduration=8.622111974 podStartE2EDuration="12.442414703s" podCreationTimestamp="2025-07-11 00:13:25 +0000 UTC" firstStartedPulling="2025-07-11 00:13:25.606718693 +0000 UTC m=+7.276092600" lastFinishedPulling="2025-07-11 00:13:29.427021422 +0000 UTC m=+11.096395329" observedRunningTime="2025-07-11 00:13:30.519114034 +0000 UTC m=+12.188487941" watchObservedRunningTime="2025-07-11 00:13:37.442414703 +0000 UTC m=+19.111788610" Jul 11 00:13:37.455281 systemd[1]: Created slice kubepods-besteffort-pod86e623d2_4ff8_499f_b666_bb3a5dc07432.slice - libcontainer container kubepods-besteffort-pod86e623d2_4ff8_499f_b666_bb3a5dc07432.slice. 
Jul 11 00:13:37.461617 kubelet[2482]: I0711 00:13:37.461558 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86e623d2-4ff8-499f-b666-bb3a5dc07432-tigera-ca-bundle\") pod \"calico-typha-6c6b7f966c-4n8qw\" (UID: \"86e623d2-4ff8-499f-b666-bb3a5dc07432\") " pod="calico-system/calico-typha-6c6b7f966c-4n8qw" Jul 11 00:13:37.461617 kubelet[2482]: I0711 00:13:37.461607 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/86e623d2-4ff8-499f-b666-bb3a5dc07432-typha-certs\") pod \"calico-typha-6c6b7f966c-4n8qw\" (UID: \"86e623d2-4ff8-499f-b666-bb3a5dc07432\") " pod="calico-system/calico-typha-6c6b7f966c-4n8qw" Jul 11 00:13:37.461854 kubelet[2482]: I0711 00:13:37.461630 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nffrw\" (UniqueName: \"kubernetes.io/projected/86e623d2-4ff8-499f-b666-bb3a5dc07432-kube-api-access-nffrw\") pod \"calico-typha-6c6b7f966c-4n8qw\" (UID: \"86e623d2-4ff8-499f-b666-bb3a5dc07432\") " pod="calico-system/calico-typha-6c6b7f966c-4n8qw" Jul 11 00:13:37.738855 systemd[1]: Created slice kubepods-besteffort-pod432a2c78_42f5_4734_bd6f_acf5cf281484.slice - libcontainer container kubepods-besteffort-pod432a2c78_42f5_4734_bd6f_acf5cf281484.slice. Jul 11 00:13:37.759165 kubelet[2482]: E0711 00:13:37.759009 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:37.760592 containerd[1453]: time="2025-07-11T00:13:37.760456562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c6b7f966c-4n8qw,Uid:86e623d2-4ff8-499f-b666-bb3a5dc07432,Namespace:calico-system,Attempt:0,}" Jul 11 00:13:37.764820 kubelet[2482]: I0711 00:13:37.764785 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/432a2c78-42f5-4734-bd6f-acf5cf281484-cni-log-dir\") pod \"calico-node-tx8kz\" (UID: \"432a2c78-42f5-4734-bd6f-acf5cf281484\") " pod="calico-system/calico-node-tx8kz" Jul 11 00:13:37.764820 kubelet[2482]: I0711 00:13:37.764818 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/432a2c78-42f5-4734-bd6f-acf5cf281484-node-certs\") pod \"calico-node-tx8kz\" (UID: \"432a2c78-42f5-4734-bd6f-acf5cf281484\") " pod="calico-system/calico-node-tx8kz" Jul 11 00:13:37.764910 kubelet[2482]: I0711 00:13:37.764839 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/432a2c78-42f5-4734-bd6f-acf5cf281484-flexvol-driver-host\") pod \"calico-node-tx8kz\" (UID: \"432a2c78-42f5-4734-bd6f-acf5cf281484\") " pod="calico-system/calico-node-tx8kz" Jul 11 00:13:37.764910 kubelet[2482]: I0711 00:13:37.764859 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/432a2c78-42f5-4734-bd6f-acf5cf281484-policysync\") pod \"calico-node-tx8kz\" (UID: \"432a2c78-42f5-4734-bd6f-acf5cf281484\") " pod="calico-system/calico-node-tx8kz" Jul 11 00:13:37.764910 kubelet[2482]: I0711 00:13:37.764873 2482 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/432a2c78-42f5-4734-bd6f-acf5cf281484-xtables-lock\") pod \"calico-node-tx8kz\" (UID: \"432a2c78-42f5-4734-bd6f-acf5cf281484\") " pod="calico-system/calico-node-tx8kz" Jul 11 00:13:37.764910 kubelet[2482]: I0711 00:13:37.764890 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/432a2c78-42f5-4734-bd6f-acf5cf281484-cni-net-dir\") pod \"calico-node-tx8kz\" (UID: \"432a2c78-42f5-4734-bd6f-acf5cf281484\") " pod="calico-system/calico-node-tx8kz" Jul 11 00:13:37.765004 kubelet[2482]: I0711 00:13:37.764965 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/432a2c78-42f5-4734-bd6f-acf5cf281484-var-lib-calico\") pod \"calico-node-tx8kz\" (UID: \"432a2c78-42f5-4734-bd6f-acf5cf281484\") " pod="calico-system/calico-node-tx8kz" Jul 11 00:13:37.765092 kubelet[2482]: I0711 00:13:37.765054 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/432a2c78-42f5-4734-bd6f-acf5cf281484-lib-modules\") pod \"calico-node-tx8kz\" (UID: \"432a2c78-42f5-4734-bd6f-acf5cf281484\") " pod="calico-system/calico-node-tx8kz" Jul 11 00:13:37.765131 kubelet[2482]: I0711 00:13:37.765092 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/432a2c78-42f5-4734-bd6f-acf5cf281484-tigera-ca-bundle\") pod \"calico-node-tx8kz\" (UID: \"432a2c78-42f5-4734-bd6f-acf5cf281484\") " pod="calico-system/calico-node-tx8kz" Jul 11 00:13:37.765131 kubelet[2482]: I0711 00:13:37.765109 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/432a2c78-42f5-4734-bd6f-acf5cf281484-var-run-calico\") pod \"calico-node-tx8kz\" (UID: \"432a2c78-42f5-4734-bd6f-acf5cf281484\") " pod="calico-system/calico-node-tx8kz" Jul 11 00:13:37.765131 kubelet[2482]: I0711 00:13:37.765124 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8g7x\" (UniqueName: \"kubernetes.io/projected/432a2c78-42f5-4734-bd6f-acf5cf281484-kube-api-access-d8g7x\") pod \"calico-node-tx8kz\" (UID: \"432a2c78-42f5-4734-bd6f-acf5cf281484\") " pod="calico-system/calico-node-tx8kz" Jul 11 00:13:37.765218 kubelet[2482]: I0711 00:13:37.765148 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/432a2c78-42f5-4734-bd6f-acf5cf281484-cni-bin-dir\") pod \"calico-node-tx8kz\" (UID: \"432a2c78-42f5-4734-bd6f-acf5cf281484\") " pod="calico-system/calico-node-tx8kz" Jul 11 00:13:37.839158 containerd[1453]: time="2025-07-11T00:13:37.839037046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:37.839158 containerd[1453]: time="2025-07-11T00:13:37.839101507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:37.839158 containerd[1453]: time="2025-07-11T00:13:37.839116486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:37.839496 containerd[1453]: time="2025-07-11T00:13:37.839300734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:37.866916 systemd[1]: Started cri-containerd-8c926bb6f2cc2358baa038f7b57b4669445e68fac126a0f023dec43c9b612c26.scope - libcontainer container 8c926bb6f2cc2358baa038f7b57b4669445e68fac126a0f023dec43c9b612c26.
Jul 11 00:13:37.871011 kubelet[2482]: E0711 00:13:37.870948 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:37.871011 kubelet[2482]: W0711 00:13:37.870972 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:37.871011 kubelet[2482]: E0711 00:13:37.871017 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:13:37.909164 containerd[1453]: time="2025-07-11T00:13:37.908997892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c6b7f966c-4n8qw,Uid:86e623d2-4ff8-499f-b666-bb3a5dc07432,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c926bb6f2cc2358baa038f7b57b4669445e68fac126a0f023dec43c9b612c26\"" Jul 11 00:13:37.909787 kubelet[2482]: E0711 00:13:37.909746 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:37.910837 containerd[1453]: time="2025-07-11T00:13:37.910707981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 00:13:37.975018 kubelet[2482]: E0711 00:13:37.974937 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cx6rf" podUID="1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c" Jul 11 00:13:38.043597 containerd[1453]: time="2025-07-11T00:13:38.043461783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tx8kz,Uid:432a2c78-42f5-4734-bd6f-acf5cf281484,Namespace:calico-system,Attempt:0,}"
Jul 11 00:13:38.067932 kubelet[2482]: I0711 00:13:38.067807 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c-kubelet-dir\") pod \"csi-node-driver-cx6rf\" (UID: \"1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c\") " pod="calico-system/csi-node-driver-cx6rf"
Jul 11 00:13:38.068145 kubelet[2482]: I0711 00:13:38.068112 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c-varrun\") pod \"csi-node-driver-cx6rf\" (UID: \"1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c\") " pod="calico-system/csi-node-driver-cx6rf"
Jul 11 00:13:38.069395 kubelet[2482]: I0711 00:13:38.069157 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c-socket-dir\") pod \"csi-node-driver-cx6rf\" (UID: \"1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c\") " pod="calico-system/csi-node-driver-cx6rf"
Jul 11 00:13:38.070701 kubelet[2482]: I0711 00:13:38.070639 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ts7p\" (UniqueName: \"kubernetes.io/projected/1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c-kube-api-access-6ts7p\") pod \"csi-node-driver-cx6rf\" (UID: \"1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c\") " pod="calico-system/csi-node-driver-cx6rf"
Jul 11 00:13:38.071049 kubelet[2482]: I0711 00:13:38.070956 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c-registration-dir\") pod \"csi-node-driver-cx6rf\" (UID: \"1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c\") " pod="calico-system/csi-node-driver-cx6rf"
Jul 11 00:13:38.072298 containerd[1453]: time="2025-07-11T00:13:38.072109960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:38.072964 containerd[1453]: time="2025-07-11T00:13:38.072885904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:38.072964 containerd[1453]: time="2025-07-11T00:13:38.072907035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:38.073092 containerd[1453]: time="2025-07-11T00:13:38.073004979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:38.093908 systemd[1]: Started cri-containerd-181c5bc91ad07fe016d36e1c1e9dbe5139bccda41f487c855ba4673e98721433.scope - libcontainer container 181c5bc91ad07fe016d36e1c1e9dbe5139bccda41f487c855ba4673e98721433. Jul 11 00:13:38.133155 containerd[1453]: time="2025-07-11T00:13:38.132983996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tx8kz,Uid:432a2c78-42f5-4734-bd6f-acf5cf281484,Namespace:calico-system,Attempt:0,} returns sandbox id \"181c5bc91ad07fe016d36e1c1e9dbe5139bccda41f487c855ba4673e98721433\""
Error: unexpected end of JSON input" Jul 11 00:13:38.178895 kubelet[2482]: E0711 00:13:38.178874 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:38.178895 kubelet[2482]: W0711 00:13:38.178889 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:38.178978 kubelet[2482]: E0711 00:13:38.178907 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:38.179214 kubelet[2482]: E0711 00:13:38.179183 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:38.179214 kubelet[2482]: W0711 00:13:38.179199 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:38.179266 kubelet[2482]: E0711 00:13:38.179219 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:38.179579 kubelet[2482]: E0711 00:13:38.179558 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:38.179579 kubelet[2482]: W0711 00:13:38.179575 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:38.179655 kubelet[2482]: E0711 00:13:38.179586 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:38.188368 kubelet[2482]: E0711 00:13:38.188334 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:38.188368 kubelet[2482]: W0711 00:13:38.188356 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:38.188471 kubelet[2482]: E0711 00:13:38.188375 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:13:39.292158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount979086899.mount: Deactivated successfully. 
Jul 11 00:13:39.457738 kubelet[2482]: E0711 00:13:39.457648 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cx6rf" podUID="1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c" Jul 11 00:13:39.655215 containerd[1453]: time="2025-07-11T00:13:39.655025961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:39.656100 containerd[1453]: time="2025-07-11T00:13:39.656047918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 11 00:13:39.657384 containerd[1453]: time="2025-07-11T00:13:39.657334416Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:39.661221 containerd[1453]: time="2025-07-11T00:13:39.661183578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:39.662349 containerd[1453]: time="2025-07-11T00:13:39.662285358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.751528515s" Jul 11 00:13:39.662417 containerd[1453]: time="2025-07-11T00:13:39.662339009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 11 00:13:39.664035 containerd[1453]: time="2025-07-11T00:13:39.663987630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 11 00:13:39.673837 containerd[1453]: time="2025-07-11T00:13:39.673801327Z" level=info msg="CreateContainer within sandbox \"8c926bb6f2cc2358baa038f7b57b4669445e68fac126a0f023dec43c9b612c26\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 11 00:13:39.690405 containerd[1453]: time="2025-07-11T00:13:39.690334873Z" level=info msg="CreateContainer within sandbox \"8c926bb6f2cc2358baa038f7b57b4669445e68fac126a0f023dec43c9b612c26\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2bb413bd2641bc213fc75e6ce06d38ac58bb9c224fc86b7a4ac776ecf6d5f3ed\"" Jul 11 00:13:39.691143 containerd[1453]: time="2025-07-11T00:13:39.691057456Z" level=info msg="StartContainer for \"2bb413bd2641bc213fc75e6ce06d38ac58bb9c224fc86b7a4ac776ecf6d5f3ed\"" Jul 11 00:13:39.725118 systemd[1]: Started cri-containerd-2bb413bd2641bc213fc75e6ce06d38ac58bb9c224fc86b7a4ac776ecf6d5f3ed.scope - libcontainer container 2bb413bd2641bc213fc75e6ce06d38ac58bb9c224fc86b7a4ac776ecf6d5f3ed. 
Jul 11 00:13:39.848459 containerd[1453]: time="2025-07-11T00:13:39.848390088Z" level=info msg="StartContainer for \"2bb413bd2641bc213fc75e6ce06d38ac58bb9c224fc86b7a4ac776ecf6d5f3ed\" returns successfully" Jul 11 00:13:40.525934 kubelet[2482]: E0711 00:13:40.525896 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:40.535812 kubelet[2482]: I0711 00:13:40.535719 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c6b7f966c-4n8qw" podStartSLOduration=1.782707116 podStartE2EDuration="3.535680536s" podCreationTimestamp="2025-07-11 00:13:37 +0000 UTC" firstStartedPulling="2025-07-11 00:13:37.910460665 +0000 UTC m=+19.579834572" lastFinishedPulling="2025-07-11 00:13:39.663434085 +0000 UTC m=+21.332807992" observedRunningTime="2025-07-11 00:13:40.534996696 +0000 UTC m=+22.204370603" watchObservedRunningTime="2025-07-11 00:13:40.535680536 +0000 UTC m=+22.205054453" Jul 11 00:13:40.579017 kubelet[2482]: E0711 00:13:40.578941 2482 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:13:40.579017 kubelet[2482]: W0711 00:13:40.578981 2482 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:13:40.579017 kubelet[2482]: E0711 00:13:40.579013 2482 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three kubelet messages repeat verbatim a further 32 times between 00:13:40.578 and 00:13:40.597]
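The pod_startup_latency_tracker entry above is internally consistent, on the usual reading (an interpretation, not stated in the log) that podStartSLOduration is the end-to-end startup time minus the image-pull window:

    podStartE2EDuration = watchObservedRunningTime − podCreationTimestamp = 00:13:40.535680536 − 00:13:37 = 3.535680536 s
    pull window = lastFinishedPulling − firstStartedPulling = (m=+21.332807992) − (m=+19.579834572) = 1.752973420 s
    podStartSLOduration = 3.535680536 − 1.752973420 = 1.782707116 s

which matches the logged podStartSLOduration exactly, and agrees to within a millisecond or two with the 1.751528515 s pull time containerd reported for the typha image.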
Jul 11 00:13:41.205159 containerd[1453]: time="2025-07-11T00:13:41.205084009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:41.205908 containerd[1453]: time="2025-07-11T00:13:41.205852308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 11 00:13:41.207575 containerd[1453]: time="2025-07-11T00:13:41.207534460Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:41.210017 containerd[1453]: time="2025-07-11T00:13:41.209957077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:41.210934 containerd[1453]: time="2025-07-11T00:13:41.210880699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.546842935s" Jul 11 00:13:41.211016 containerd[1453]: time="2025-07-11T00:13:41.210933188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 11 00:13:41.213834 containerd[1453]: time="2025-07-11T00:13:41.213795094Z" level=info msg="CreateContainer within sandbox \"181c5bc91ad07fe016d36e1c1e9dbe5139bccda41f487c855ba4673e98721433\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 00:13:41.231091 containerd[1453]: time="2025-07-11T00:13:41.231034688Z" level=info msg="CreateContainer within sandbox \"181c5bc91ad07fe016d36e1c1e9dbe5139bccda41f487c855ba4673e98721433\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a8a93dcd881caa6c5c4f14fdd9188ef9f7cc31f6df9da3f2cc14d05f6e21de77\"" Jul 11 00:13:41.231599 containerd[1453]: time="2025-07-11T00:13:41.231549449Z" level=info msg="StartContainer for \"a8a93dcd881caa6c5c4f14fdd9188ef9f7cc31f6df9da3f2cc14d05f6e21de77\"" Jul 11 00:13:41.267929 systemd[1]: Started cri-containerd-a8a93dcd881caa6c5c4f14fdd9188ef9f7cc31f6df9da3f2cc14d05f6e21de77.scope - libcontainer container a8a93dcd881caa6c5c4f14fdd9188ef9f7cc31f6df9da3f2cc14d05f6e21de77. Jul 11 00:13:41.302049 containerd[1453]: time="2025-07-11T00:13:41.301984120Z" level=info msg="StartContainer for \"a8a93dcd881caa6c5c4f14fdd9188ef9f7cc31f6df9da3f2cc14d05f6e21de77\" returns successfully" Jul 11 00:13:41.311991 systemd[1]: cri-containerd-a8a93dcd881caa6c5c4f14fdd9188ef9f7cc31f6df9da3f2cc14d05f6e21de77.scope: Deactivated successfully.
Jul 11 00:13:41.457315 kubelet[2482]: E0711 00:13:41.457118 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cx6rf" podUID="1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c" Jul 11 00:13:41.527234 containerd[1453]: time="2025-07-11T00:13:41.524372493Z" level=info msg="shim disconnected" id=a8a93dcd881caa6c5c4f14fdd9188ef9f7cc31f6df9da3f2cc14d05f6e21de77 namespace=k8s.io Jul 11 00:13:41.527234 containerd[1453]: time="2025-07-11T00:13:41.527218409Z" level=warning msg="cleaning up after shim disconnected" id=a8a93dcd881caa6c5c4f14fdd9188ef9f7cc31f6df9da3f2cc14d05f6e21de77 namespace=k8s.io Jul 11 00:13:41.527234 containerd[1453]: time="2025-07-11T00:13:41.527230502Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:13:41.529346 kubelet[2482]: I0711 00:13:41.528705 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:13:41.529346 kubelet[2482]: E0711 00:13:41.529192 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:41.670779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8a93dcd881caa6c5c4f14fdd9188ef9f7cc31f6df9da3f2cc14d05f6e21de77-rootfs.mount: Deactivated successfully. Jul 11 00:13:42.533220 containerd[1453]: time="2025-07-11T00:13:42.533123784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 00:13:43.456943 kubelet[2482]: E0711 00:13:43.456876 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cx6rf" podUID="1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c" Jul 11 00:13:45.457117 kubelet[2482]: E0711 00:13:45.457053 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cx6rf" podUID="1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c" Jul 11 00:13:45.656075 containerd[1453]: time="2025-07-11T00:13:45.656027296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:45.656964 containerd[1453]: time="2025-07-11T00:13:45.656931139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 11 00:13:45.658248 containerd[1453]: time="2025-07-11T00:13:45.658212291Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:45.660414 containerd[1453]: time="2025-07-11T00:13:45.660373280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:45.661055 containerd[1453]: time="2025-07-11T00:13:45.661026271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id 
\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.127862241s" Jul 11 00:13:45.661055 containerd[1453]: time="2025-07-11T00:13:45.661051779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 11 00:13:45.663289 containerd[1453]: time="2025-07-11T00:13:45.663243366Z" level=info msg="CreateContainer within sandbox \"181c5bc91ad07fe016d36e1c1e9dbe5139bccda41f487c855ba4673e98721433\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 00:13:45.678553 containerd[1453]: time="2025-07-11T00:13:45.678503112Z" level=info msg="CreateContainer within sandbox \"181c5bc91ad07fe016d36e1c1e9dbe5139bccda41f487c855ba4673e98721433\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"75ab8a6206ab6e18fd7eec6544f6a0e36e483383c43c091dad2adb562b897d57\"" Jul 11 00:13:45.678950 containerd[1453]: time="2025-07-11T00:13:45.678924345Z" level=info msg="StartContainer for \"75ab8a6206ab6e18fd7eec6544f6a0e36e483383c43c091dad2adb562b897d57\"" Jul 11 00:13:45.718905 systemd[1]: Started cri-containerd-75ab8a6206ab6e18fd7eec6544f6a0e36e483383c43c091dad2adb562b897d57.scope - libcontainer container 75ab8a6206ab6e18fd7eec6544f6a0e36e483383c43c091dad2adb562b897d57. Jul 11 00:13:45.749164 containerd[1453]: time="2025-07-11T00:13:45.748412833Z" level=info msg="StartContainer for \"75ab8a6206ab6e18fd7eec6544f6a0e36e483383c43c091dad2adb562b897d57\" returns successfully" Jul 11 00:13:46.983801 kubelet[2482]: I0711 00:13:46.983741 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:13:46.984360 kubelet[2482]: E0711 00:13:46.984147 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:47.372296 systemd[1]: cri-containerd-75ab8a6206ab6e18fd7eec6544f6a0e36e483383c43c091dad2adb562b897d57.scope: Deactivated successfully. Jul 11 00:13:47.402078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75ab8a6206ab6e18fd7eec6544f6a0e36e483383c43c091dad2adb562b897d57-rootfs.mount: Deactivated successfully. Jul 11 00:13:47.436841 kubelet[2482]: I0711 00:13:47.436136 2482 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 11 00:13:47.465394 systemd[1]: Created slice kubepods-besteffort-pod1ca1aa6c_2238_4c4c_869b_a6dd1b43a48c.slice - libcontainer container kubepods-besteffort-pod1ca1aa6c_2238_4c4c_869b_a6dd1b43a48c.slice. 
Jul 11 00:13:47.468270 containerd[1453]: time="2025-07-11T00:13:47.468224301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cx6rf,Uid:1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c,Namespace:calico-system,Attempt:0,}" Jul 11 00:13:47.544952 kubelet[2482]: E0711 00:13:47.544919 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:48.041210 kubelet[2482]: I0711 00:13:48.041148 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b110a319-ec46-48e9-afde-f14e51fe5798-calico-apiserver-certs\") pod \"calico-apiserver-5b5f5d84fb-jgmkp\" (UID: \"b110a319-ec46-48e9-afde-f14e51fe5798\") " pod="calico-apiserver/calico-apiserver-5b5f5d84fb-jgmkp" Jul 11 00:13:48.041210 kubelet[2482]: I0711 00:13:48.041185 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jdfb\" (UniqueName: \"kubernetes.io/projected/dc370272-6e6b-40f4-bdc3-9aff809787e2-kube-api-access-9jdfb\") pod \"coredns-668d6bf9bc-csf5s\" (UID: \"dc370272-6e6b-40f4-bdc3-9aff809787e2\") " pod="kube-system/coredns-668d6bf9bc-csf5s" Jul 11 00:13:48.041210 kubelet[2482]: I0711 00:13:48.041200 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b528b1a-2869-4a1f-9208-de9d96c1a0aa-tigera-ca-bundle\") pod \"calico-kube-controllers-67b7cfcdf9-djn6p\" (UID: \"2b528b1a-2869-4a1f-9208-de9d96c1a0aa\") " pod="calico-system/calico-kube-controllers-67b7cfcdf9-djn6p" Jul 11 00:13:48.041702 kubelet[2482]: I0711 00:13:48.041218 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc370272-6e6b-40f4-bdc3-9aff809787e2-config-volume\") pod \"coredns-668d6bf9bc-csf5s\" (UID: \"dc370272-6e6b-40f4-bdc3-9aff809787e2\") " pod="kube-system/coredns-668d6bf9bc-csf5s" Jul 11 00:13:48.041702 kubelet[2482]: I0711 00:13:48.041234 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/09f6ed65-1087-4a6c-9f3f-47a108556bd1-goldmane-key-pair\") pod \"goldmane-768f4c5c69-kqs4z\" (UID: \"09f6ed65-1087-4a6c-9f3f-47a108556bd1\") " pod="calico-system/goldmane-768f4c5c69-kqs4z" Jul 11 00:13:48.041702 kubelet[2482]: I0711 00:13:48.041248 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mc7p\" (UniqueName: \"kubernetes.io/projected/2b528b1a-2869-4a1f-9208-de9d96c1a0aa-kube-api-access-4mc7p\") pod \"calico-kube-controllers-67b7cfcdf9-djn6p\" (UID: \"2b528b1a-2869-4a1f-9208-de9d96c1a0aa\") " pod="calico-system/calico-kube-controllers-67b7cfcdf9-djn6p" Jul 11 00:13:48.041702 kubelet[2482]: I0711 00:13:48.041269 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f72dd124-c970-4b6b-a074-81167ad3af44-config-volume\") pod \"coredns-668d6bf9bc-26c2d\" (UID: \"f72dd124-c970-4b6b-a074-81167ad3af44\") " pod="kube-system/coredns-668d6bf9bc-26c2d" Jul 11 00:13:48.041702 kubelet[2482]: I0711 00:13:48.041286 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0823a92-0fad-4d6c-8138-b092cb1e97af-whisker-ca-bundle\") pod \"whisker-6bf88d8cb8-nt5bv\" (UID: \"f0823a92-0fad-4d6c-8138-b092cb1e97af\") " pod="calico-system/whisker-6bf88d8cb8-nt5bv" Jul 11 00:13:48.041850 kubelet[2482]: I0711 00:13:48.041307 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgdhc\" (UniqueName: \"kubernetes.io/projected/b110a319-ec46-48e9-afde-f14e51fe5798-kube-api-access-wgdhc\") pod \"calico-apiserver-5b5f5d84fb-jgmkp\" (UID: \"b110a319-ec46-48e9-afde-f14e51fe5798\") " pod="calico-apiserver/calico-apiserver-5b5f5d84fb-jgmkp" Jul 11 00:13:48.041850 kubelet[2482]: I0711 00:13:48.041324 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jgzs\" (UniqueName: \"kubernetes.io/projected/09f6ed65-1087-4a6c-9f3f-47a108556bd1-kube-api-access-2jgzs\") pod \"goldmane-768f4c5c69-kqs4z\" (UID: \"09f6ed65-1087-4a6c-9f3f-47a108556bd1\") " pod="calico-system/goldmane-768f4c5c69-kqs4z" Jul 11 00:13:48.041850 kubelet[2482]: I0711 00:13:48.041341 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rgvl\" (UniqueName: \"kubernetes.io/projected/170759a5-3b9a-4f76-b425-9d4e9c482064-kube-api-access-7rgvl\") pod \"calico-apiserver-5b5f5d84fb-77rqg\" (UID: \"170759a5-3b9a-4f76-b425-9d4e9c482064\") " pod="calico-apiserver/calico-apiserver-5b5f5d84fb-77rqg" Jul 11 00:13:48.041850 kubelet[2482]: I0711 00:13:48.041363 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f6ed65-1087-4a6c-9f3f-47a108556bd1-config\") pod \"goldmane-768f4c5c69-kqs4z\" (UID: \"09f6ed65-1087-4a6c-9f3f-47a108556bd1\") " pod="calico-system/goldmane-768f4c5c69-kqs4z" Jul 11 00:13:48.041850 kubelet[2482]: I0711 00:13:48.041381 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thnmx\" (UniqueName: \"kubernetes.io/projected/f72dd124-c970-4b6b-a074-81167ad3af44-kube-api-access-thnmx\") pod \"coredns-668d6bf9bc-26c2d\" (UID: \"f72dd124-c970-4b6b-a074-81167ad3af44\") " pod="kube-system/coredns-668d6bf9bc-26c2d" Jul 11 00:13:48.041973 kubelet[2482]: I0711 00:13:48.041399 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f0823a92-0fad-4d6c-8138-b092cb1e97af-whisker-backend-key-pair\") pod \"whisker-6bf88d8cb8-nt5bv\" (UID: \"f0823a92-0fad-4d6c-8138-b092cb1e97af\") " pod="calico-system/whisker-6bf88d8cb8-nt5bv" Jul 11 00:13:48.041973 kubelet[2482]: I0711 00:13:48.041419 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/170759a5-3b9a-4f76-b425-9d4e9c482064-calico-apiserver-certs\") pod \"calico-apiserver-5b5f5d84fb-77rqg\" (UID: \"170759a5-3b9a-4f76-b425-9d4e9c482064\") " pod="calico-apiserver/calico-apiserver-5b5f5d84fb-77rqg" Jul 11 00:13:48.041973 kubelet[2482]: I0711 00:13:48.041437 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thfr8\" (UniqueName: \"kubernetes.io/projected/f0823a92-0fad-4d6c-8138-b092cb1e97af-kube-api-access-thfr8\") pod \"whisker-6bf88d8cb8-nt5bv\" (UID: 
\"f0823a92-0fad-4d6c-8138-b092cb1e97af\") " pod="calico-system/whisker-6bf88d8cb8-nt5bv" Jul 11 00:13:48.041973 kubelet[2482]: I0711 00:13:48.041461 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09f6ed65-1087-4a6c-9f3f-47a108556bd1-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-kqs4z\" (UID: \"09f6ed65-1087-4a6c-9f3f-47a108556bd1\") " pod="calico-system/goldmane-768f4c5c69-kqs4z" Jul 11 00:13:48.042668 systemd[1]: Created slice kubepods-burstable-podf72dd124_c970_4b6b_a074_81167ad3af44.slice - libcontainer container kubepods-burstable-podf72dd124_c970_4b6b_a074_81167ad3af44.slice. Jul 11 00:13:48.047611 systemd[1]: Created slice kubepods-besteffort-pod09f6ed65_1087_4a6c_9f3f_47a108556bd1.slice - libcontainer container kubepods-besteffort-pod09f6ed65_1087_4a6c_9f3f_47a108556bd1.slice. Jul 11 00:13:48.052881 systemd[1]: Created slice kubepods-besteffort-podb110a319_ec46_48e9_afde_f14e51fe5798.slice - libcontainer container kubepods-besteffort-podb110a319_ec46_48e9_afde_f14e51fe5798.slice. Jul 11 00:13:48.058041 systemd[1]: Created slice kubepods-burstable-poddc370272_6e6b_40f4_bdc3_9aff809787e2.slice - libcontainer container kubepods-burstable-poddc370272_6e6b_40f4_bdc3_9aff809787e2.slice. Jul 11 00:13:48.063333 systemd[1]: Created slice kubepods-besteffort-pod170759a5_3b9a_4f76_b425_9d4e9c482064.slice - libcontainer container kubepods-besteffort-pod170759a5_3b9a_4f76_b425_9d4e9c482064.slice. Jul 11 00:13:48.068654 systemd[1]: Created slice kubepods-besteffort-pod2b528b1a_2869_4a1f_9208_de9d96c1a0aa.slice - libcontainer container kubepods-besteffort-pod2b528b1a_2869_4a1f_9208_de9d96c1a0aa.slice. Jul 11 00:13:48.073516 systemd[1]: Created slice kubepods-besteffort-podf0823a92_0fad_4d6c_8138_b092cb1e97af.slice - libcontainer container kubepods-besteffort-podf0823a92_0fad_4d6c_8138_b092cb1e97af.slice. 
Jul 11 00:13:48.095268 containerd[1453]: time="2025-07-11T00:13:48.095197126Z" level=info msg="shim disconnected" id=75ab8a6206ab6e18fd7eec6544f6a0e36e483383c43c091dad2adb562b897d57 namespace=k8s.io Jul 11 00:13:48.095268 containerd[1453]: time="2025-07-11T00:13:48.095256778Z" level=warning msg="cleaning up after shim disconnected" id=75ab8a6206ab6e18fd7eec6544f6a0e36e483383c43c091dad2adb562b897d57 namespace=k8s.io Jul 11 00:13:48.095268 containerd[1453]: time="2025-07-11T00:13:48.095265775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:13:48.345359 kubelet[2482]: E0711 00:13:48.345216 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:48.346109 containerd[1453]: time="2025-07-11T00:13:48.346077058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-26c2d,Uid:f72dd124-c970-4b6b-a074-81167ad3af44,Namespace:kube-system,Attempt:0,}" Jul 11 00:13:48.351077 containerd[1453]: time="2025-07-11T00:13:48.350813161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-kqs4z,Uid:09f6ed65-1087-4a6c-9f3f-47a108556bd1,Namespace:calico-system,Attempt:0,}" Jul 11 00:13:48.351165 containerd[1453]: time="2025-07-11T00:13:48.351107955Z" level=error msg="Failed to destroy network for sandbox \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.361987 kubelet[2482]: E0711 00:13:48.361614 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:13:48.383587 containerd[1453]: time="2025-07-11T00:13:48.383528734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5f5d84fb-jgmkp,Uid:b110a319-ec46-48e9-afde-f14e51fe5798,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:13:48.385387 containerd[1453]: time="2025-07-11T00:13:48.383618673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bf88d8cb8-nt5bv,Uid:f0823a92-0fad-4d6c-8138-b092cb1e97af,Namespace:calico-system,Attempt:0,}" Jul 11 00:13:48.385459 containerd[1453]: time="2025-07-11T00:13:48.383675329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-csf5s,Uid:dc370272-6e6b-40f4-bdc3-9aff809787e2,Namespace:kube-system,Attempt:0,}" Jul 11 00:13:48.385602 containerd[1453]: time="2025-07-11T00:13:48.383716979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5f5d84fb-77rqg,Uid:170759a5-3b9a-4f76-b425-9d4e9c482064,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:13:48.385716 containerd[1453]: time="2025-07-11T00:13:48.383785577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67b7cfcdf9-djn6p,Uid:2b528b1a-2869-4a1f-9208-de9d96c1a0aa,Namespace:calico-system,Attempt:0,}" Jul 11 00:13:48.423888 containerd[1453]: time="2025-07-11T00:13:48.423722921Z" level=error msg="encountered an error cleaning up failed sandbox \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jul 11 00:13:48.424033 containerd[1453]: time="2025-07-11T00:13:48.423945740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cx6rf,Uid:1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.426283 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58-shm.mount: Deactivated successfully. Jul 11 00:13:48.448551 kubelet[2482]: E0711 00:13:48.448106 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.448551 kubelet[2482]: E0711 00:13:48.448187 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cx6rf" Jul 11 00:13:48.448551 kubelet[2482]: E0711 00:13:48.448210 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cx6rf" Jul 11 00:13:48.448953 kubelet[2482]: E0711 00:13:48.448254 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cx6rf_calico-system(1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cx6rf_calico-system(1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cx6rf" podUID="1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c" Jul 11 00:13:48.453478 containerd[1453]: time="2025-07-11T00:13:48.453431005Z" level=error msg="Failed to destroy network for sandbox \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.456839 containerd[1453]: time="2025-07-11T00:13:48.456099968Z" level=error msg="encountered an error cleaning up failed sandbox 
\"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.456839 containerd[1453]: time="2025-07-11T00:13:48.456159770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-26c2d,Uid:f72dd124-c970-4b6b-a074-81167ad3af44,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.456995 kubelet[2482]: E0711 00:13:48.456393 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.456995 kubelet[2482]: E0711 00:13:48.456463 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-26c2d" Jul 11 00:13:48.456995 kubelet[2482]: E0711 00:13:48.456484 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-26c2d" Jul 11 00:13:48.456277 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16-shm.mount: Deactivated successfully. 
Jul 11 00:13:48.457198 kubelet[2482]: E0711 00:13:48.456525 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-26c2d_kube-system(f72dd124-c970-4b6b-a074-81167ad3af44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-26c2d_kube-system(f72dd124-c970-4b6b-a074-81167ad3af44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-26c2d" podUID="f72dd124-c970-4b6b-a074-81167ad3af44" Jul 11 00:13:48.548327 containerd[1453]: time="2025-07-11T00:13:48.548245896Z" level=error msg="Failed to destroy network for sandbox \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.549159 containerd[1453]: time="2025-07-11T00:13:48.548894216Z" level=error msg="encountered an error cleaning up failed sandbox \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.549159 containerd[1453]: time="2025-07-11T00:13:48.548967605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-kqs4z,Uid:09f6ed65-1087-4a6c-9f3f-47a108556bd1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.549326 kubelet[2482]: E0711 00:13:48.549278 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.549533 kubelet[2482]: E0711 00:13:48.549353 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-kqs4z" Jul 11 00:13:48.549533 kubelet[2482]: E0711 00:13:48.549376 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-kqs4z" Jul 11 00:13:48.549533 kubelet[2482]: E0711 00:13:48.549429 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-kqs4z_calico-system(09f6ed65-1087-4a6c-9f3f-47a108556bd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-kqs4z_calico-system(09f6ed65-1087-4a6c-9f3f-47a108556bd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-kqs4z" podUID="09f6ed65-1087-4a6c-9f3f-47a108556bd1" Jul 11 00:13:48.564174 containerd[1453]: time="2025-07-11T00:13:48.563980185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 00:13:48.571587 kubelet[2482]: I0711 00:13:48.571550 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Jul 11 00:13:48.584803 kubelet[2482]: I0711 00:13:48.584531 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Jul 11 00:13:48.616035 containerd[1453]: time="2025-07-11T00:13:48.615876713Z" level=info msg="StopPodSandbox for \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\"" Jul 11 00:13:48.617092 containerd[1453]: time="2025-07-11T00:13:48.616088972Z" level=info msg="StopPodSandbox for \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\"" Jul 11 00:13:48.619581 containerd[1453]: time="2025-07-11T00:13:48.619455848Z" level=info msg="Ensure that sandbox 14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58 in task-service has been cleanup successfully" Jul 11 00:13:48.620095 containerd[1453]: time="2025-07-11T00:13:48.620076947Z" level=info msg="Ensure that sandbox 37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16 in task-service has been cleanup successfully" Jul 11 00:13:48.624667 containerd[1453]: time="2025-07-11T00:13:48.624621308Z" level=error msg="Failed to destroy network for sandbox \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.625382 containerd[1453]: time="2025-07-11T00:13:48.625357965Z" level=error msg="encountered an error cleaning up failed sandbox \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.625553 containerd[1453]: time="2025-07-11T00:13:48.625528345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5f5d84fb-77rqg,Uid:170759a5-3b9a-4f76-b425-9d4e9c482064,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.625959 kubelet[2482]: E0711 00:13:48.625905 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.626033 kubelet[2482]: E0711 00:13:48.625984 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5f5d84fb-77rqg" Jul 11 00:13:48.626033 kubelet[2482]: E0711 00:13:48.626011 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5f5d84fb-77rqg" Jul 11 00:13:48.626182 kubelet[2482]: E0711 00:13:48.626069 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b5f5d84fb-77rqg_calico-apiserver(170759a5-3b9a-4f76-b425-9d4e9c482064)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b5f5d84fb-77rqg_calico-apiserver(170759a5-3b9a-4f76-b425-9d4e9c482064)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5f5d84fb-77rqg" podUID="170759a5-3b9a-4f76-b425-9d4e9c482064" Jul 11 00:13:48.647010 containerd[1453]: time="2025-07-11T00:13:48.646855829Z" level=error msg="Failed to destroy network for sandbox \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.647430 containerd[1453]: time="2025-07-11T00:13:48.647405824Z" level=error msg="encountered an error cleaning up failed sandbox \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.647626 containerd[1453]: time="2025-07-11T00:13:48.647515830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5f5d84fb-jgmkp,Uid:b110a319-ec46-48e9-afde-f14e51fe5798,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.652007 kubelet[2482]: E0711 00:13:48.651948 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.652144 kubelet[2482]: E0711 00:13:48.652029 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5f5d84fb-jgmkp" Jul 11 00:13:48.652144 kubelet[2482]: E0711 00:13:48.652063 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5f5d84fb-jgmkp" Jul 11 00:13:48.652144 kubelet[2482]: E0711 00:13:48.652117 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b5f5d84fb-jgmkp_calico-apiserver(b110a319-ec46-48e9-afde-f14e51fe5798)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b5f5d84fb-jgmkp_calico-apiserver(b110a319-ec46-48e9-afde-f14e51fe5798)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5f5d84fb-jgmkp" podUID="b110a319-ec46-48e9-afde-f14e51fe5798" Jul 11 00:13:48.655850 containerd[1453]: time="2025-07-11T00:13:48.655795852Z" level=error msg="Failed to destroy network for sandbox \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.656483 containerd[1453]: time="2025-07-11T00:13:48.656455293Z" level=error msg="encountered an error cleaning up failed sandbox \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.656721 containerd[1453]: time="2025-07-11T00:13:48.656685015Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-csf5s,Uid:dc370272-6e6b-40f4-bdc3-9aff809787e2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.657123 kubelet[2482]: E0711 00:13:48.657090 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.657277 kubelet[2482]: E0711 00:13:48.657224 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-csf5s" Jul 11 00:13:48.657277 kubelet[2482]: E0711 00:13:48.657249 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-csf5s" Jul 11 00:13:48.657527 kubelet[2482]: E0711 00:13:48.657475 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-csf5s_kube-system(dc370272-6e6b-40f4-bdc3-9aff809787e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-csf5s_kube-system(dc370272-6e6b-40f4-bdc3-9aff809787e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-csf5s" podUID="dc370272-6e6b-40f4-bdc3-9aff809787e2" Jul 11 00:13:48.670783 containerd[1453]: time="2025-07-11T00:13:48.670709787Z" level=error msg="Failed to destroy network for sandbox \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.671305 containerd[1453]: time="2025-07-11T00:13:48.671271924Z" level=error msg="encountered an error cleaning up failed sandbox \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.671370 containerd[1453]: 
time="2025-07-11T00:13:48.671338210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bf88d8cb8-nt5bv,Uid:f0823a92-0fad-4d6c-8138-b092cb1e97af,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.671714 kubelet[2482]: E0711 00:13:48.671637 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.671714 kubelet[2482]: E0711 00:13:48.671728 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bf88d8cb8-nt5bv" Jul 11 00:13:48.671942 kubelet[2482]: E0711 00:13:48.671751 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bf88d8cb8-nt5bv" Jul 11 00:13:48.671942 kubelet[2482]: E0711 00:13:48.671823 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bf88d8cb8-nt5bv_calico-system(f0823a92-0fad-4d6c-8138-b092cb1e97af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6bf88d8cb8-nt5bv_calico-system(f0823a92-0fad-4d6c-8138-b092cb1e97af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bf88d8cb8-nt5bv" podUID="f0823a92-0fad-4d6c-8138-b092cb1e97af" Jul 11 00:13:48.673306 containerd[1453]: time="2025-07-11T00:13:48.672920767Z" level=error msg="Failed to destroy network for sandbox \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.673522 containerd[1453]: time="2025-07-11T00:13:48.673493536Z" level=error msg="encountered an error cleaning up failed sandbox \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 11 00:13:48.673664 containerd[1453]: time="2025-07-11T00:13:48.673572724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67b7cfcdf9-djn6p,Uid:2b528b1a-2869-4a1f-9208-de9d96c1a0aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.673985 kubelet[2482]: E0711 00:13:48.673956 2482 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.674032 kubelet[2482]: E0711 00:13:48.673999 2482 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67b7cfcdf9-djn6p" Jul 11 00:13:48.674032 kubelet[2482]: E0711 00:13:48.674018 2482 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67b7cfcdf9-djn6p" Jul 11 00:13:48.674095 kubelet[2482]: E0711 00:13:48.674055 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67b7cfcdf9-djn6p_calico-system(2b528b1a-2869-4a1f-9208-de9d96c1a0aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67b7cfcdf9-djn6p_calico-system(2b528b1a-2869-4a1f-9208-de9d96c1a0aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67b7cfcdf9-djn6p" podUID="2b528b1a-2869-4a1f-9208-de9d96c1a0aa" Jul 11 00:13:48.680910 containerd[1453]: time="2025-07-11T00:13:48.680748968Z" level=error msg="StopPodSandbox for \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\" failed" error="failed to destroy network for sandbox \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.681054 kubelet[2482]: E0711 00:13:48.681013 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Jul 11 00:13:48.681127 kubelet[2482]: E0711 00:13:48.681081 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58"} Jul 11 00:13:48.681164 kubelet[2482]: E0711 00:13:48.681142 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:48.681226 kubelet[2482]: E0711 00:13:48.681167 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cx6rf" podUID="1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c" Jul 11 00:13:48.685548 containerd[1453]: time="2025-07-11T00:13:48.685500360Z" level=error msg="StopPodSandbox for \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\" failed" error="failed to destroy network for sandbox \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:48.685785 kubelet[2482]: E0711 00:13:48.685745 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Jul 11 00:13:48.685840 kubelet[2482]: E0711 00:13:48.685788 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16"} Jul 11 00:13:48.685840 kubelet[2482]: E0711 00:13:48.685812 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f72dd124-c970-4b6b-a074-81167ad3af44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Jul 11 00:13:48.685916 kubelet[2482]: E0711 00:13:48.685828 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f72dd124-c970-4b6b-a074-81167ad3af44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-26c2d" podUID="f72dd124-c970-4b6b-a074-81167ad3af44" Jul 11 00:13:49.404969 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35-shm.mount: Deactivated successfully. Jul 11 00:13:49.587905 kubelet[2482]: I0711 00:13:49.587863 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Jul 11 00:13:49.596237 containerd[1453]: time="2025-07-11T00:13:49.596169895Z" level=info msg="StopPodSandbox for \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\"" Jul 11 00:13:49.596733 containerd[1453]: time="2025-07-11T00:13:49.596418282Z" level=info msg="Ensure that sandbox 65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c in task-service has been cleanup successfully" Jul 11 00:13:49.597189 kubelet[2482]: I0711 00:13:49.597148 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Jul 11 00:13:49.597983 containerd[1453]: time="2025-07-11T00:13:49.597793480Z" level=info msg="StopPodSandbox for \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\"" Jul 11 00:13:49.598062 containerd[1453]: time="2025-07-11T00:13:49.597982826Z" level=info msg="Ensure that sandbox 9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45 in task-service has been cleanup successfully" Jul 11 00:13:49.598641 kubelet[2482]: I0711 00:13:49.598205 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Jul 11 00:13:49.599036 containerd[1453]: time="2025-07-11T00:13:49.598846651Z" level=info msg="StopPodSandbox for \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\"" Jul 11 00:13:49.599036 containerd[1453]: time="2025-07-11T00:13:49.598980794Z" level=info msg="Ensure that sandbox 467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78 in task-service has been cleanup successfully" Jul 11 00:13:49.610965 kubelet[2482]: I0711 00:13:49.610907 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Jul 11 00:13:49.611952 containerd[1453]: time="2025-07-11T00:13:49.611754656Z" level=info msg="StopPodSandbox for \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\"" Jul 11 00:13:49.612002 containerd[1453]: time="2025-07-11T00:13:49.611968719Z" level=info msg="Ensure that sandbox cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce in task-service has been cleanup successfully" Jul 11 00:13:49.628991 kubelet[2482]: I0711 00:13:49.628944 2482 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Jul 11 00:13:49.630059 containerd[1453]: time="2025-07-11T00:13:49.629984774Z" level=info msg="StopPodSandbox for \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\"" Jul 11 00:13:49.630586 containerd[1453]: time="2025-07-11T00:13:49.630514971Z" level=info msg="Ensure that sandbox 54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35 in task-service has been cleanup successfully" Jul 11 00:13:49.633623 kubelet[2482]: I0711 00:13:49.633587 2482 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Jul 11 00:13:49.636016 containerd[1453]: time="2025-07-11T00:13:49.635694947Z" level=info msg="StopPodSandbox for \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\"" Jul 11 00:13:49.636016 containerd[1453]: time="2025-07-11T00:13:49.635927665Z" level=info msg="Ensure that sandbox 993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243 in task-service has been cleanup successfully" Jul 11 00:13:49.641216 containerd[1453]: time="2025-07-11T00:13:49.641170720Z" level=error msg="StopPodSandbox for \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\" failed" error="failed to destroy network for sandbox \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:49.641781 kubelet[2482]: E0711 00:13:49.641622 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Jul 11 00:13:49.641781 kubelet[2482]: E0711 00:13:49.641679 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c"} Jul 11 00:13:49.641964 kubelet[2482]: E0711 00:13:49.641898 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b528b1a-2869-4a1f-9208-de9d96c1a0aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:49.641964 kubelet[2482]: E0711 00:13:49.641931 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b528b1a-2869-4a1f-9208-de9d96c1a0aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67b7cfcdf9-djn6p" 
podUID="2b528b1a-2869-4a1f-9208-de9d96c1a0aa" Jul 11 00:13:49.663382 containerd[1453]: time="2025-07-11T00:13:49.662144398Z" level=error msg="StopPodSandbox for \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\" failed" error="failed to destroy network for sandbox \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:49.663518 kubelet[2482]: E0711 00:13:49.662424 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Jul 11 00:13:49.663518 kubelet[2482]: E0711 00:13:49.662477 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce"} Jul 11 00:13:49.663518 kubelet[2482]: E0711 00:13:49.662512 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f0823a92-0fad-4d6c-8138-b092cb1e97af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:49.663518 kubelet[2482]: E0711 00:13:49.662535 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f0823a92-0fad-4d6c-8138-b092cb1e97af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bf88d8cb8-nt5bv" podUID="f0823a92-0fad-4d6c-8138-b092cb1e97af" Jul 11 00:13:49.666372 containerd[1453]: time="2025-07-11T00:13:49.666327458Z" level=error msg="StopPodSandbox for \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\" failed" error="failed to destroy network for sandbox \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:49.667005 kubelet[2482]: E0711 00:13:49.666928 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Jul 11 00:13:49.667005 kubelet[2482]: E0711 
00:13:49.666997 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45"} Jul 11 00:13:49.667188 kubelet[2482]: E0711 00:13:49.667035 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b110a319-ec46-48e9-afde-f14e51fe5798\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:49.667188 kubelet[2482]: E0711 00:13:49.667062 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b110a319-ec46-48e9-afde-f14e51fe5798\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5f5d84fb-jgmkp" podUID="b110a319-ec46-48e9-afde-f14e51fe5798" Jul 11 00:13:49.667938 containerd[1453]: time="2025-07-11T00:13:49.667908683Z" level=error msg="StopPodSandbox for \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\" failed" error="failed to destroy network for sandbox \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:49.668086 kubelet[2482]: E0711 00:13:49.668058 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Jul 11 00:13:49.668192 kubelet[2482]: E0711 00:13:49.668168 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78"} Jul 11 00:13:49.668251 kubelet[2482]: E0711 00:13:49.668199 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"170759a5-3b9a-4f76-b425-9d4e9c482064\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:49.668251 kubelet[2482]: E0711 00:13:49.668217 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"170759a5-3b9a-4f76-b425-9d4e9c482064\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5f5d84fb-77rqg" podUID="170759a5-3b9a-4f76-b425-9d4e9c482064" Jul 11 00:13:49.676669 containerd[1453]: time="2025-07-11T00:13:49.676615415Z" level=error msg="StopPodSandbox for \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\" failed" error="failed to destroy network for sandbox \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:49.676894 kubelet[2482]: E0711 00:13:49.676860 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Jul 11 00:13:49.676936 kubelet[2482]: E0711 00:13:49.676900 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35"} Jul 11 00:13:49.676997 kubelet[2482]: E0711 00:13:49.676928 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09f6ed65-1087-4a6c-9f3f-47a108556bd1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:49.677068 kubelet[2482]: E0711 00:13:49.677005 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09f6ed65-1087-4a6c-9f3f-47a108556bd1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-kqs4z" podUID="09f6ed65-1087-4a6c-9f3f-47a108556bd1" Jul 11 00:13:49.683344 containerd[1453]: time="2025-07-11T00:13:49.683290212Z" level=error msg="StopPodSandbox for \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\" failed" error="failed to destroy network for sandbox \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:13:49.683611 kubelet[2482]: E0711 00:13:49.683558 2482 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Jul 11 00:13:49.683661 kubelet[2482]: E0711 00:13:49.683617 2482 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243"} Jul 11 00:13:49.683684 kubelet[2482]: E0711 00:13:49.683656 2482 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dc370272-6e6b-40f4-bdc3-9aff809787e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:13:49.683729 kubelet[2482]: E0711 00:13:49.683685 2482 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dc370272-6e6b-40f4-bdc3-9aff809787e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-csf5s" podUID="dc370272-6e6b-40f4-bdc3-9aff809787e2" Jul 11 00:13:52.815612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821501035.mount: Deactivated successfully. 
Jul 11 00:13:53.435373 containerd[1453]: time="2025-07-11T00:13:53.435308863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:53.436338 containerd[1453]: time="2025-07-11T00:13:53.436108086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 11 00:13:53.437477 containerd[1453]: time="2025-07-11T00:13:53.437396939Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:53.439797 containerd[1453]: time="2025-07-11T00:13:53.439705398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:53.440517 containerd[1453]: time="2025-07-11T00:13:53.440474384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.876434948s" Jul 11 00:13:53.440517 containerd[1453]: time="2025-07-11T00:13:53.440512836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 11 00:13:53.461133 containerd[1453]: time="2025-07-11T00:13:53.461086165Z" level=info msg="CreateContainer within sandbox \"181c5bc91ad07fe016d36e1c1e9dbe5139bccda41f487c855ba4673e98721433\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 00:13:53.489316 containerd[1453]: time="2025-07-11T00:13:53.489261344Z" level=info msg="CreateContainer within sandbox \"181c5bc91ad07fe016d36e1c1e9dbe5139bccda41f487c855ba4673e98721433\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b5173d7cdeb88bd739941934bad1f86a781957cf7ef2500802daed3a65f3d56e\"" Jul 11 00:13:53.490715 containerd[1453]: time="2025-07-11T00:13:53.489985836Z" level=info msg="StartContainer for \"b5173d7cdeb88bd739941934bad1f86a781957cf7ef2500802daed3a65f3d56e\"" Jul 11 00:13:53.549911 systemd[1]: Started cri-containerd-b5173d7cdeb88bd739941934bad1f86a781957cf7ef2500802daed3a65f3d56e.scope - libcontainer container b5173d7cdeb88bd739941934bad1f86a781957cf7ef2500802daed3a65f3d56e. Jul 11 00:13:53.584566 containerd[1453]: time="2025-07-11T00:13:53.584512297Z" level=info msg="StartContainer for \"b5173d7cdeb88bd739941934bad1f86a781957cf7ef2500802daed3a65f3d56e\" returns successfully" Jul 11 00:13:53.678695 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 00:13:53.678919 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul 11 00:13:53.704106 kubelet[2482]: I0711 00:13:53.703929 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tx8kz" podStartSLOduration=1.395925288 podStartE2EDuration="16.703909565s" podCreationTimestamp="2025-07-11 00:13:37 +0000 UTC" firstStartedPulling="2025-07-11 00:13:38.135578343 +0000 UTC m=+19.804952240" lastFinishedPulling="2025-07-11 00:13:53.44356261 +0000 UTC m=+35.112936517" observedRunningTime="2025-07-11 00:13:53.703108149 +0000 UTC m=+35.372482066" watchObservedRunningTime="2025-07-11 00:13:53.703909565 +0000 UTC m=+35.373283472" Jul 11 00:13:53.800833 containerd[1453]: time="2025-07-11T00:13:53.800731091Z" level=info msg="StopPodSandbox for \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\"" Jul 11 00:13:53.958261 containerd[1453]: 2025-07-11 00:13:53.871 [INFO][3803] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Jul 11 00:13:53.958261 containerd[1453]: 2025-07-11 00:13:53.871 [INFO][3803] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" iface="eth0" netns="/var/run/netns/cni-c01e39c6-29b0-c650-a700-faee87aacb82" Jul 11 00:13:53.958261 containerd[1453]: 2025-07-11 00:13:53.872 [INFO][3803] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" iface="eth0" netns="/var/run/netns/cni-c01e39c6-29b0-c650-a700-faee87aacb82" Jul 11 00:13:53.958261 containerd[1453]: 2025-07-11 00:13:53.872 [INFO][3803] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" iface="eth0" netns="/var/run/netns/cni-c01e39c6-29b0-c650-a700-faee87aacb82" Jul 11 00:13:53.958261 containerd[1453]: 2025-07-11 00:13:53.872 [INFO][3803] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Jul 11 00:13:53.958261 containerd[1453]: 2025-07-11 00:13:53.872 [INFO][3803] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Jul 11 00:13:53.958261 containerd[1453]: 2025-07-11 00:13:53.939 [INFO][3813] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" HandleID="k8s-pod-network.cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Workload="localhost-k8s-whisker--6bf88d8cb8--nt5bv-eth0" Jul 11 00:13:53.958261 containerd[1453]: 2025-07-11 00:13:53.940 [INFO][3813] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:53.958261 containerd[1453]: 2025-07-11 00:13:53.941 [INFO][3813] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:13:53.958261 containerd[1453]: 2025-07-11 00:13:53.949 [WARNING][3813] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" HandleID="k8s-pod-network.cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Workload="localhost-k8s-whisker--6bf88d8cb8--nt5bv-eth0" Jul 11 00:13:53.958261 containerd[1453]: 2025-07-11 00:13:53.949 [INFO][3813] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" HandleID="k8s-pod-network.cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Workload="localhost-k8s-whisker--6bf88d8cb8--nt5bv-eth0" Jul 11 00:13:53.958261 containerd[1453]: 2025-07-11 00:13:53.951 [INFO][3813] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:13:53.958261 containerd[1453]: 2025-07-11 00:13:53.955 [INFO][3803] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Jul 11 00:13:53.958658 containerd[1453]: time="2025-07-11T00:13:53.958374923Z" level=info msg="TearDown network for sandbox \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\" successfully" Jul 11 00:13:53.958658 containerd[1453]: time="2025-07-11T00:13:53.958403056Z" level=info msg="StopPodSandbox for \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\" returns successfully" Jul 11 00:13:53.961559 systemd[1]: run-netns-cni\x2dc01e39c6\x2d29b0\x2dc650\x2da700\x2dfaee87aacb82.mount: Deactivated successfully. Jul 11 00:13:53.980747 kubelet[2482]: I0711 00:13:53.980691 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thfr8\" (UniqueName: \"kubernetes.io/projected/f0823a92-0fad-4d6c-8138-b092cb1e97af-kube-api-access-thfr8\") pod \"f0823a92-0fad-4d6c-8138-b092cb1e97af\" (UID: \"f0823a92-0fad-4d6c-8138-b092cb1e97af\") " Jul 11 00:13:53.980858 kubelet[2482]: I0711 00:13:53.980754 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0823a92-0fad-4d6c-8138-b092cb1e97af-whisker-ca-bundle\") pod \"f0823a92-0fad-4d6c-8138-b092cb1e97af\" (UID: \"f0823a92-0fad-4d6c-8138-b092cb1e97af\") " Jul 11 00:13:53.980858 kubelet[2482]: I0711 00:13:53.980839 2482 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f0823a92-0fad-4d6c-8138-b092cb1e97af-whisker-backend-key-pair\") pod \"f0823a92-0fad-4d6c-8138-b092cb1e97af\" (UID: \"f0823a92-0fad-4d6c-8138-b092cb1e97af\") " Jul 11 00:13:53.981394 kubelet[2482]: I0711 00:13:53.981342 2482 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0823a92-0fad-4d6c-8138-b092cb1e97af-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f0823a92-0fad-4d6c-8138-b092cb1e97af" (UID: "f0823a92-0fad-4d6c-8138-b092cb1e97af"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:13:53.986096 kubelet[2482]: I0711 00:13:53.986031 2482 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0823a92-0fad-4d6c-8138-b092cb1e97af-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f0823a92-0fad-4d6c-8138-b092cb1e97af" (UID: "f0823a92-0fad-4d6c-8138-b092cb1e97af"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:13:53.987016 kubelet[2482]: I0711 00:13:53.986954 2482 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0823a92-0fad-4d6c-8138-b092cb1e97af-kube-api-access-thfr8" (OuterVolumeSpecName: "kube-api-access-thfr8") pod "f0823a92-0fad-4d6c-8138-b092cb1e97af" (UID: "f0823a92-0fad-4d6c-8138-b092cb1e97af"). InnerVolumeSpecName "kube-api-access-thfr8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:13:53.987345 systemd[1]: var-lib-kubelet-pods-f0823a92\x2d0fad\x2d4d6c\x2d8138\x2db092cb1e97af-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dthfr8.mount: Deactivated successfully. Jul 11 00:13:53.987460 systemd[1]: var-lib-kubelet-pods-f0823a92\x2d0fad\x2d4d6c\x2d8138\x2db092cb1e97af-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 11 00:13:54.081527 kubelet[2482]: I0711 00:13:54.081445 2482 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-thfr8\" (UniqueName: \"kubernetes.io/projected/f0823a92-0fad-4d6c-8138-b092cb1e97af-kube-api-access-thfr8\") on node \"localhost\" DevicePath \"\"" Jul 11 00:13:54.081527 kubelet[2482]: I0711 00:13:54.081508 2482 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0823a92-0fad-4d6c-8138-b092cb1e97af-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 00:13:54.081527 kubelet[2482]: I0711 00:13:54.081517 2482 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f0823a92-0fad-4d6c-8138-b092cb1e97af-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 00:13:54.464247 systemd[1]: Removed slice kubepods-besteffort-podf0823a92_0fad_4d6c_8138_b092cb1e97af.slice - libcontainer container kubepods-besteffort-podf0823a92_0fad_4d6c_8138_b092cb1e97af.slice. Jul 11 00:13:54.730814 systemd[1]: Created slice kubepods-besteffort-pod16d1c267_97b3_424f_a656_4062a62b58fa.slice - libcontainer container kubepods-besteffort-pod16d1c267_97b3_424f_a656_4062a62b58fa.slice. 
Jul 11 00:13:54.885675 kubelet[2482]: I0711 00:13:54.885616 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16d1c267-97b3-424f-a656-4062a62b58fa-whisker-ca-bundle\") pod \"whisker-774b69b6f6-n8csm\" (UID: \"16d1c267-97b3-424f-a656-4062a62b58fa\") " pod="calico-system/whisker-774b69b6f6-n8csm" Jul 11 00:13:54.885675 kubelet[2482]: I0711 00:13:54.885666 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/16d1c267-97b3-424f-a656-4062a62b58fa-whisker-backend-key-pair\") pod \"whisker-774b69b6f6-n8csm\" (UID: \"16d1c267-97b3-424f-a656-4062a62b58fa\") " pod="calico-system/whisker-774b69b6f6-n8csm" Jul 11 00:13:54.885675 kubelet[2482]: I0711 00:13:54.885682 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8sc7\" (UniqueName: \"kubernetes.io/projected/16d1c267-97b3-424f-a656-4062a62b58fa-kube-api-access-z8sc7\") pod \"whisker-774b69b6f6-n8csm\" (UID: \"16d1c267-97b3-424f-a656-4062a62b58fa\") " pod="calico-system/whisker-774b69b6f6-n8csm" Jul 11 00:13:55.035443 containerd[1453]: time="2025-07-11T00:13:55.035295115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-774b69b6f6-n8csm,Uid:16d1c267-97b3-424f-a656-4062a62b58fa,Namespace:calico-system,Attempt:0,}" Jul 11 00:13:55.191793 kernel: bpftool[3965]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 11 00:13:55.367140 systemd-networkd[1388]: cali74f93d56dd3: Link UP Jul 11 00:13:55.369450 systemd-networkd[1388]: cali74f93d56dd3: Gained carrier Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.293 [INFO][3971] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--774b69b6f6--n8csm-eth0 whisker-774b69b6f6- calico-system 16d1c267-97b3-424f-a656-4062a62b58fa 958 0 2025-07-11 00:13:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:774b69b6f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-774b69b6f6-n8csm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali74f93d56dd3 [] [] }} ContainerID="b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" Namespace="calico-system" Pod="whisker-774b69b6f6-n8csm" WorkloadEndpoint="localhost-k8s-whisker--774b69b6f6--n8csm-" Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.294 [INFO][3971] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" Namespace="calico-system" Pod="whisker-774b69b6f6-n8csm" WorkloadEndpoint="localhost-k8s-whisker--774b69b6f6--n8csm-eth0" Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.323 [INFO][3980] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" HandleID="k8s-pod-network.b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" Workload="localhost-k8s-whisker--774b69b6f6--n8csm-eth0" Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.324 [INFO][3980] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" 
HandleID="k8s-pod-network.b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" Workload="localhost-k8s-whisker--774b69b6f6--n8csm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6310), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-774b69b6f6-n8csm", "timestamp":"2025-07-11 00:13:55.323929218 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.324 [INFO][3980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.324 [INFO][3980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.324 [INFO][3980] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.330 [INFO][3980] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" host="localhost" Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.335 [INFO][3980] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.338 [INFO][3980] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.340 [INFO][3980] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.342 [INFO][3980] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.342 [INFO][3980] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" host="localhost" Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.343 [INFO][3980] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9 Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.347 [INFO][3980] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" host="localhost" Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.351 [INFO][3980] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" host="localhost" Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.351 [INFO][3980] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" host="localhost" Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.351 [INFO][3980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:13:55.385941 containerd[1453]: 2025-07-11 00:13:55.351 [INFO][3980] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" HandleID="k8s-pod-network.b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" Workload="localhost-k8s-whisker--774b69b6f6--n8csm-eth0" Jul 11 00:13:55.386993 containerd[1453]: 2025-07-11 00:13:55.355 [INFO][3971] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" Namespace="calico-system" Pod="whisker-774b69b6f6-n8csm" WorkloadEndpoint="localhost-k8s-whisker--774b69b6f6--n8csm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--774b69b6f6--n8csm-eth0", GenerateName:"whisker-774b69b6f6-", Namespace:"calico-system", SelfLink:"", UID:"16d1c267-97b3-424f-a656-4062a62b58fa", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"774b69b6f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-774b69b6f6-n8csm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74f93d56dd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:55.386993 containerd[1453]: 2025-07-11 00:13:55.355 [INFO][3971] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" Namespace="calico-system" Pod="whisker-774b69b6f6-n8csm" WorkloadEndpoint="localhost-k8s-whisker--774b69b6f6--n8csm-eth0" Jul 11 00:13:55.386993 containerd[1453]: 2025-07-11 00:13:55.355 [INFO][3971] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74f93d56dd3 ContainerID="b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" Namespace="calico-system" Pod="whisker-774b69b6f6-n8csm" WorkloadEndpoint="localhost-k8s-whisker--774b69b6f6--n8csm-eth0" Jul 11 00:13:55.386993 containerd[1453]: 2025-07-11 00:13:55.370 [INFO][3971] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" Namespace="calico-system" Pod="whisker-774b69b6f6-n8csm" WorkloadEndpoint="localhost-k8s-whisker--774b69b6f6--n8csm-eth0" Jul 11 00:13:55.386993 containerd[1453]: 2025-07-11 00:13:55.371 [INFO][3971] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" Namespace="calico-system" Pod="whisker-774b69b6f6-n8csm" WorkloadEndpoint="localhost-k8s-whisker--774b69b6f6--n8csm-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--774b69b6f6--n8csm-eth0", GenerateName:"whisker-774b69b6f6-", Namespace:"calico-system", SelfLink:"", UID:"16d1c267-97b3-424f-a656-4062a62b58fa", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"774b69b6f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9", Pod:"whisker-774b69b6f6-n8csm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74f93d56dd3", MAC:"92:98:0f:bf:4b:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:13:55.386993 containerd[1453]: 2025-07-11 00:13:55.379 [INFO][3971] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9" Namespace="calico-system" Pod="whisker-774b69b6f6-n8csm" WorkloadEndpoint="localhost-k8s-whisker--774b69b6f6--n8csm-eth0" Jul 11 00:13:55.429992 containerd[1453]: time="2025-07-11T00:13:55.429752347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:13:55.429992 containerd[1453]: time="2025-07-11T00:13:55.429841344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:13:55.429992 containerd[1453]: time="2025-07-11T00:13:55.429855170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:55.431252 containerd[1453]: time="2025-07-11T00:13:55.430909792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:13:55.452954 systemd[1]: Started cri-containerd-b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9.scope - libcontainer container b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9.
Jul 11 00:13:55.465284 systemd-networkd[1388]: vxlan.calico: Link UP Jul 11 00:13:55.465296 systemd-networkd[1388]: vxlan.calico: Gained carrier Jul 11 00:13:55.474504 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:13:55.524359 containerd[1453]: time="2025-07-11T00:13:55.524295816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-774b69b6f6-n8csm,Uid:16d1c267-97b3-424f-a656-4062a62b58fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9\"" Jul 11 00:13:55.527827 containerd[1453]: time="2025-07-11T00:13:55.526978218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 00:13:55.944749 systemd[1]: Started sshd@7-10.0.0.36:22-10.0.0.1:45122.service - OpenSSH per-connection server daemon (10.0.0.1:45122). Jul 11 00:13:55.991901 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 45122 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:13:55.993955 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:13:55.999787 systemd-logind[1439]: New session 8 of user core. Jul 11 00:13:56.007887 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 00:13:56.146018 sshd[4118]: pam_unix(sshd:session): session closed for user core Jul 11 00:13:56.150152 systemd[1]: sshd@7-10.0.0.36:22-10.0.0.1:45122.service: Deactivated successfully. Jul 11 00:13:56.152402 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:13:56.153032 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:13:56.153886 systemd-logind[1439]: Removed session 8. Jul 11 00:13:56.460331 kubelet[2482]: I0711 00:13:56.460291 2482 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0823a92-0fad-4d6c-8138-b092cb1e97af" path="/var/lib/kubelet/pods/f0823a92-0fad-4d6c-8138-b092cb1e97af/volumes" Jul 11 00:13:56.997710 containerd[1453]: time="2025-07-11T00:13:56.997644073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:56.998505 containerd[1453]: time="2025-07-11T00:13:56.998452302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 11 00:13:56.999691 containerd[1453]: time="2025-07-11T00:13:56.999640465Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:57.001992 containerd[1453]: time="2025-07-11T00:13:57.001957919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:57.002523 containerd[1453]: time="2025-07-11T00:13:57.002487364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.475479611s" Jul 11 00:13:57.002523 containerd[1453]: time="2025-07-11T00:13:57.002516759Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 11 00:13:57.004258 containerd[1453]: time="2025-07-11T00:13:57.004215200Z" level=info msg="CreateContainer within sandbox \"b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 00:13:57.022063 containerd[1453]: time="2025-07-11T00:13:57.021999897Z" level=info msg="CreateContainer within sandbox \"b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"aedff43339029ad9bf3876ea16a7992bffa87819bf073e16ed934725d7a3b351\"" Jul 11 00:13:57.022590 containerd[1453]: time="2025-07-11T00:13:57.022552746Z" level=info msg="StartContainer for \"aedff43339029ad9bf3876ea16a7992bffa87819bf073e16ed934725d7a3b351\"" Jul 11 00:13:57.059898 systemd[1]: Started cri-containerd-aedff43339029ad9bf3876ea16a7992bffa87819bf073e16ed934725d7a3b351.scope - libcontainer container aedff43339029ad9bf3876ea16a7992bffa87819bf073e16ed934725d7a3b351. Jul 11 00:13:57.100680 containerd[1453]: time="2025-07-11T00:13:57.100637754Z" level=info msg="StartContainer for \"aedff43339029ad9bf3876ea16a7992bffa87819bf073e16ed934725d7a3b351\" returns successfully" Jul 11 00:13:57.102110 containerd[1453]: time="2025-07-11T00:13:57.102075886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 00:13:57.217963 systemd-networkd[1388]: cali74f93d56dd3: Gained IPv6LL Jul 11 00:13:57.345885 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL Jul 11 00:13:58.969898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount899987708.mount: Deactivated successfully. Jul 11 00:13:58.989863 containerd[1453]: time="2025-07-11T00:13:58.989807142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:58.990496 containerd[1453]: time="2025-07-11T00:13:58.990443788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 11 00:13:58.991599 containerd[1453]: time="2025-07-11T00:13:58.991566026Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:58.997898 containerd[1453]: time="2025-07-11T00:13:58.997842568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:13:58.998561 containerd[1453]: time="2025-07-11T00:13:58.998524078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.896408347s" Jul 11 00:13:58.998626 containerd[1453]: time="2025-07-11T00:13:58.998561228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 11 00:13:59.000912 containerd[1453]: 
time="2025-07-11T00:13:59.000864455Z" level=info msg="CreateContainer within sandbox \"b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 00:13:59.016560 containerd[1453]: time="2025-07-11T00:13:59.016507491Z" level=info msg="CreateContainer within sandbox \"b1bb6238548f515e3f8463da056c7c07200994863972deed4e2bbdd5485a9dc9\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b63531475c37b06725cbfd8cedf36e1720c2bc570ea9490a30cf1b9605c27f71\"" Jul 11 00:13:59.017134 containerd[1453]: time="2025-07-11T00:13:59.017083934Z" level=info msg="StartContainer for \"b63531475c37b06725cbfd8cedf36e1720c2bc570ea9490a30cf1b9605c27f71\"" Jul 11 00:13:59.050980 systemd[1]: Started cri-containerd-b63531475c37b06725cbfd8cedf36e1720c2bc570ea9490a30cf1b9605c27f71.scope - libcontainer container b63531475c37b06725cbfd8cedf36e1720c2bc570ea9490a30cf1b9605c27f71. Jul 11 00:13:59.098605 containerd[1453]: time="2025-07-11T00:13:59.098560629Z" level=info msg="StartContainer for \"b63531475c37b06725cbfd8cedf36e1720c2bc570ea9490a30cf1b9605c27f71\" returns successfully" Jul 11 00:13:59.718586 kubelet[2482]: I0711 00:13:59.718002 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-774b69b6f6-n8csm" podStartSLOduration=2.244487611 podStartE2EDuration="5.717981532s" podCreationTimestamp="2025-07-11 00:13:54 +0000 UTC" firstStartedPulling="2025-07-11 00:13:55.525914619 +0000 UTC m=+37.195288526" lastFinishedPulling="2025-07-11 00:13:58.99940854 +0000 UTC m=+40.668782447" observedRunningTime="2025-07-11 00:13:59.717002272 +0000 UTC m=+41.386376199" watchObservedRunningTime="2025-07-11 00:13:59.717981532 +0000 UTC m=+41.387355459" Jul 11 00:14:00.457619 containerd[1453]: time="2025-07-11T00:14:00.457543131Z" level=info msg="StopPodSandbox for \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\"" Jul 11 00:14:00.544966 containerd[1453]: 2025-07-11 00:14:00.502 [INFO][4240] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Jul 11 00:14:00.544966 containerd[1453]: 2025-07-11 00:14:00.503 [INFO][4240] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" iface="eth0" netns="/var/run/netns/cni-c20e94ea-e0de-adfb-cd18-4eeb706314e8" Jul 11 00:14:00.544966 containerd[1453]: 2025-07-11 00:14:00.503 [INFO][4240] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" iface="eth0" netns="/var/run/netns/cni-c20e94ea-e0de-adfb-cd18-4eeb706314e8" Jul 11 00:14:00.544966 containerd[1453]: 2025-07-11 00:14:00.503 [INFO][4240] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" iface="eth0" netns="/var/run/netns/cni-c20e94ea-e0de-adfb-cd18-4eeb706314e8" Jul 11 00:14:00.544966 containerd[1453]: 2025-07-11 00:14:00.503 [INFO][4240] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Jul 11 00:14:00.544966 containerd[1453]: 2025-07-11 00:14:00.503 [INFO][4240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Jul 11 00:14:00.544966 containerd[1453]: 2025-07-11 00:14:00.528 [INFO][4249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" HandleID="k8s-pod-network.14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Workload="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:00.544966 containerd[1453]: 2025-07-11 00:14:00.528 [INFO][4249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:00.544966 containerd[1453]: 2025-07-11 00:14:00.528 [INFO][4249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:00.544966 containerd[1453]: 2025-07-11 00:14:00.534 [WARNING][4249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" HandleID="k8s-pod-network.14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Workload="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:00.544966 containerd[1453]: 2025-07-11 00:14:00.534 [INFO][4249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" HandleID="k8s-pod-network.14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Workload="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:00.544966 containerd[1453]: 2025-07-11 00:14:00.536 [INFO][4249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:00.544966 containerd[1453]: 2025-07-11 00:14:00.541 [INFO][4240] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Jul 11 00:14:00.545576 containerd[1453]: time="2025-07-11T00:14:00.545161276Z" level=info msg="TearDown network for sandbox \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\" successfully" Jul 11 00:14:00.545576 containerd[1453]: time="2025-07-11T00:14:00.545189880Z" level=info msg="StopPodSandbox for \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\" returns successfully" Jul 11 00:14:00.546449 containerd[1453]: time="2025-07-11T00:14:00.546400985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cx6rf,Uid:1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c,Namespace:calico-system,Attempt:1,}" Jul 11 00:14:00.547861 systemd[1]: run-netns-cni\x2dc20e94ea\x2de0de\x2dadfb\x2dcd18\x2d4eeb706314e8.mount: Deactivated successfully. 
Jul 11 00:14:00.714267 systemd-networkd[1388]: calic5279c42b24: Link UP Jul 11 00:14:00.715993 systemd-networkd[1388]: calic5279c42b24: Gained carrier Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.651 [INFO][4258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cx6rf-eth0 csi-node-driver- calico-system 1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c 1029 0 2025-07-11 00:13:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cx6rf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic5279c42b24 [] [] }} ContainerID="98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" Namespace="calico-system" Pod="csi-node-driver-cx6rf" WorkloadEndpoint="localhost-k8s-csi--node--driver--cx6rf-" Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.651 [INFO][4258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" Namespace="calico-system" Pod="csi-node-driver-cx6rf" WorkloadEndpoint="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.677 [INFO][4271] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" HandleID="k8s-pod-network.98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" Workload="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.678 [INFO][4271] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" HandleID="k8s-pod-network.98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" Workload="localhost-k8s-csi--node--driver--cx6rf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cx6rf", "timestamp":"2025-07-11 00:14:00.677474042 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.678 [INFO][4271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.678 [INFO][4271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.678 [INFO][4271] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.684 [INFO][4271] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" host="localhost" Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.688 [INFO][4271] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.692 [INFO][4271] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.694 [INFO][4271] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.696 [INFO][4271] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.696 [INFO][4271] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" host="localhost" Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.697 [INFO][4271] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.701 [INFO][4271] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" host="localhost" Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.708 [INFO][4271] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" host="localhost" Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.708 [INFO][4271] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" host="localhost" Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.708 [INFO][4271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:14:00.731575 containerd[1453]: 2025-07-11 00:14:00.708 [INFO][4271] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" HandleID="k8s-pod-network.98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" Workload="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:00.732387 containerd[1453]: 2025-07-11 00:14:00.711 [INFO][4258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" Namespace="calico-system" Pod="csi-node-driver-cx6rf" WorkloadEndpoint="localhost-k8s-csi--node--driver--cx6rf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cx6rf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cx6rf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic5279c42b24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:00.732387 containerd[1453]: 2025-07-11 00:14:00.711 [INFO][4258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" Namespace="calico-system" Pod="csi-node-driver-cx6rf" WorkloadEndpoint="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:00.732387 containerd[1453]: 2025-07-11 00:14:00.711 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5279c42b24 ContainerID="98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" Namespace="calico-system" Pod="csi-node-driver-cx6rf" WorkloadEndpoint="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:00.732387 containerd[1453]: 2025-07-11 00:14:00.715 [INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" Namespace="calico-system" Pod="csi-node-driver-cx6rf" WorkloadEndpoint="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:00.732387 containerd[1453]: 2025-07-11 00:14:00.715 [INFO][4258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" Namespace="calico-system" Pod="csi-node-driver-cx6rf"
WorkloadEndpoint="localhost-k8s-csi--node--driver--cx6rf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cx6rf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a", Pod:"csi-node-driver-cx6rf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic5279c42b24", MAC:"52:3a:71:11:e8:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:00.732387 containerd[1453]: 2025-07-11 00:14:00.727 [INFO][4258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a" Namespace="calico-system" Pod="csi-node-driver-cx6rf" WorkloadEndpoint="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:00.752689 containerd[1453]: time="2025-07-11T00:14:00.752388555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:14:00.752689 containerd[1453]: time="2025-07-11T00:14:00.752456462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:14:00.752689 containerd[1453]: time="2025-07-11T00:14:00.752468004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:14:00.752689 containerd[1453]: time="2025-07-11T00:14:00.752548666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:14:00.775979 systemd[1]: Started cri-containerd-98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a.scope - libcontainer container 98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a. 
Jul 11 00:14:00.788980 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:14:00.808216 containerd[1453]: time="2025-07-11T00:14:00.808160794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cx6rf,Uid:1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c,Namespace:calico-system,Attempt:1,} returns sandbox id \"98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a\"" Jul 11 00:14:00.810889 containerd[1453]: time="2025-07-11T00:14:00.810842761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 00:14:01.158952 systemd[1]: Started sshd@8-10.0.0.36:22-10.0.0.1:36986.service - OpenSSH per-connection server daemon (10.0.0.1:36986). Jul 11 00:14:01.207127 sshd[4336]: Accepted publickey for core from 10.0.0.1 port 36986 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:14:01.209085 sshd[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:01.213851 systemd-logind[1439]: New session 9 of user core. Jul 11 00:14:01.223931 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:14:01.372507 sshd[4336]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:01.376561 systemd[1]: sshd@8-10.0.0.36:22-10.0.0.1:36986.service: Deactivated successfully. Jul 11 00:14:01.379326 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:14:01.380431 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:14:01.381553 systemd-logind[1439]: Removed session 9. Jul 11 00:14:01.457538 containerd[1453]: time="2025-07-11T00:14:01.457487793Z" level=info msg="StopPodSandbox for \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\"" Jul 11 00:14:01.542219 containerd[1453]: 2025-07-11 00:14:01.503 [INFO][4362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Jul 11 00:14:01.542219 containerd[1453]: 2025-07-11 00:14:01.503 [INFO][4362] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" iface="eth0" netns="/var/run/netns/cni-a794a344-186f-f0c6-6b6b-27250fe98114" Jul 11 00:14:01.542219 containerd[1453]: 2025-07-11 00:14:01.503 [INFO][4362] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" iface="eth0" netns="/var/run/netns/cni-a794a344-186f-f0c6-6b6b-27250fe98114" Jul 11 00:14:01.542219 containerd[1453]: 2025-07-11 00:14:01.504 [INFO][4362] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" iface="eth0" netns="/var/run/netns/cni-a794a344-186f-f0c6-6b6b-27250fe98114" Jul 11 00:14:01.542219 containerd[1453]: 2025-07-11 00:14:01.504 [INFO][4362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Jul 11 00:14:01.542219 containerd[1453]: 2025-07-11 00:14:01.504 [INFO][4362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Jul 11 00:14:01.542219 containerd[1453]: 2025-07-11 00:14:01.528 [INFO][4372] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" HandleID="k8s-pod-network.467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:01.542219 containerd[1453]: 2025-07-11 00:14:01.528 [INFO][4372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:01.542219 containerd[1453]: 2025-07-11 00:14:01.528 [INFO][4372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:01.542219 containerd[1453]: 2025-07-11 00:14:01.534 [WARNING][4372] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" HandleID="k8s-pod-network.467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:01.542219 containerd[1453]: 2025-07-11 00:14:01.534 [INFO][4372] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" HandleID="k8s-pod-network.467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:01.542219 containerd[1453]: 2025-07-11 00:14:01.536 [INFO][4372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:01.542219 containerd[1453]: 2025-07-11 00:14:01.539 [INFO][4362] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Jul 11 00:14:01.543110 containerd[1453]: time="2025-07-11T00:14:01.542449152Z" level=info msg="TearDown network for sandbox \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\" successfully" Jul 11 00:14:01.543110 containerd[1453]: time="2025-07-11T00:14:01.542484548Z" level=info msg="StopPodSandbox for \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\" returns successfully" Jul 11 00:14:01.543258 containerd[1453]: time="2025-07-11T00:14:01.543226371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5f5d84fb-77rqg,Uid:170759a5-3b9a-4f76-b425-9d4e9c482064,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:14:01.545937 systemd[1]: run-netns-cni\x2da794a344\x2d186f\x2df0c6\x2d6b6b\x2d27250fe98114.mount: Deactivated successfully. 
Jul 11 00:14:01.671831 systemd-networkd[1388]: cali715f07f8fff: Link UP Jul 11 00:14:01.672647 systemd-networkd[1388]: cali715f07f8fff: Gained carrier Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.606 [INFO][4381] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0 calico-apiserver-5b5f5d84fb- calico-apiserver 170759a5-3b9a-4f76-b425-9d4e9c482064 1041 0 2025-07-11 00:13:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b5f5d84fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b5f5d84fb-77rqg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali715f07f8fff [] [] }} ContainerID="03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-77rqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-" Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.607 [INFO][4381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-77rqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.635 [INFO][4395] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" HandleID="k8s-pod-network.03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.635 [INFO][4395] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" HandleID="k8s-pod-network.03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005953a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b5f5d84fb-77rqg", "timestamp":"2025-07-11 00:14:01.635257498 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.635 [INFO][4395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.635 [INFO][4395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.635 [INFO][4395] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.641 [INFO][4395] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" host="localhost" Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.646 [INFO][4395] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.650 [INFO][4395] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.651 [INFO][4395] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.653 [INFO][4395] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.653 [INFO][4395] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" host="localhost" Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.655 [INFO][4395] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.659 [INFO][4395] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" host="localhost" Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.665 [INFO][4395] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" host="localhost" Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.665 [INFO][4395] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" host="localhost" Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.665 [INFO][4395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:14:01.691535 containerd[1453]: 2025-07-11 00:14:01.665 [INFO][4395] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" HandleID="k8s-pod-network.03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:01.692146 containerd[1453]: 2025-07-11 00:14:01.669 [INFO][4381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-77rqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0", GenerateName:"calico-apiserver-5b5f5d84fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"170759a5-3b9a-4f76-b425-9d4e9c482064", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5f5d84fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b5f5d84fb-77rqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali715f07f8fff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:01.692146 containerd[1453]: 2025-07-11 00:14:01.669 [INFO][4381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-77rqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:01.692146 containerd[1453]: 2025-07-11 00:14:01.669 [INFO][4381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali715f07f8fff ContainerID="03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-77rqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:01.692146 containerd[1453]: 2025-07-11 00:14:01.672 [INFO][4381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-77rqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:01.692146 containerd[1453]: 2025-07-11 00:14:01.674 [INFO][4381] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-77rqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0", GenerateName:"calico-apiserver-5b5f5d84fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"170759a5-3b9a-4f76-b425-9d4e9c482064", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5f5d84fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d", Pod:"calico-apiserver-5b5f5d84fb-77rqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali715f07f8fff", MAC:"3a:a2:88:c7:4c:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:01.692146 containerd[1453]: 2025-07-11 00:14:01.687 [INFO][4381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-77rqg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:01.711857 containerd[1453]: time="2025-07-11T00:14:01.711374746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:14:01.711857 containerd[1453]: time="2025-07-11T00:14:01.711441001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:14:01.711857 containerd[1453]: time="2025-07-11T00:14:01.711452292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:14:01.712093 containerd[1453]: time="2025-07-11T00:14:01.711549154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:14:01.732908 systemd[1]: Started cri-containerd-03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d.scope - libcontainer container 03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d. 
Jul 11 00:14:01.748186 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:14:01.778225 containerd[1453]: time="2025-07-11T00:14:01.778165378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5f5d84fb-77rqg,Uid:170759a5-3b9a-4f76-b425-9d4e9c482064,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d\"" Jul 11 00:14:02.337981 systemd-networkd[1388]: calic5279c42b24: Gained IPv6LL Jul 11 00:14:02.465811 containerd[1453]: time="2025-07-11T00:14:02.465732111Z" level=info msg="StopPodSandbox for \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\"" Jul 11 00:14:02.466310 containerd[1453]: time="2025-07-11T00:14:02.466257317Z" level=info msg="StopPodSandbox for \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\"" Jul 11 00:14:02.466480 containerd[1453]: time="2025-07-11T00:14:02.466403231Z" level=info msg="StopPodSandbox for \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\"" Jul 11 00:14:02.632121 containerd[1453]: 2025-07-11 00:14:02.547 [INFO][4487] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Jul 11 00:14:02.632121 containerd[1453]: 2025-07-11 00:14:02.548 [INFO][4487] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" iface="eth0" netns="/var/run/netns/cni-129b87ff-6fb2-bd68-f914-b616621228e1" Jul 11 00:14:02.632121 containerd[1453]: 2025-07-11 00:14:02.548 [INFO][4487] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" iface="eth0" netns="/var/run/netns/cni-129b87ff-6fb2-bd68-f914-b616621228e1" Jul 11 00:14:02.632121 containerd[1453]: 2025-07-11 00:14:02.549 [INFO][4487] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" iface="eth0" netns="/var/run/netns/cni-129b87ff-6fb2-bd68-f914-b616621228e1" Jul 11 00:14:02.632121 containerd[1453]: 2025-07-11 00:14:02.549 [INFO][4487] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Jul 11 00:14:02.632121 containerd[1453]: 2025-07-11 00:14:02.549 [INFO][4487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Jul 11 00:14:02.632121 containerd[1453]: 2025-07-11 00:14:02.577 [INFO][4510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" HandleID="k8s-pod-network.54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Workload="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:02.632121 containerd[1453]: 2025-07-11 00:14:02.578 [INFO][4510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:02.632121 containerd[1453]: 2025-07-11 00:14:02.578 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:02.632121 containerd[1453]: 2025-07-11 00:14:02.624 [WARNING][4510] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" HandleID="k8s-pod-network.54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Workload="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:02.632121 containerd[1453]: 2025-07-11 00:14:02.624 [INFO][4510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" HandleID="k8s-pod-network.54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Workload="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:02.632121 containerd[1453]: 2025-07-11 00:14:02.627 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:02.632121 containerd[1453]: 2025-07-11 00:14:02.629 [INFO][4487] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Jul 11 00:14:02.637461 systemd[1]: run-netns-cni\x2d129b87ff\x2d6fb2\x2dbd68\x2df914\x2db616621228e1.mount: Deactivated successfully. Jul 11 00:14:02.638097 containerd[1453]: time="2025-07-11T00:14:02.637949689Z" level=info msg="TearDown network for sandbox \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\" successfully" Jul 11 00:14:02.638097 containerd[1453]: time="2025-07-11T00:14:02.637988902Z" level=info msg="StopPodSandbox for \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\" returns successfully" Jul 11 00:14:02.642201 containerd[1453]: time="2025-07-11T00:14:02.641848580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-kqs4z,Uid:09f6ed65-1087-4a6c-9f3f-47a108556bd1,Namespace:calico-system,Attempt:1,}" Jul 11 00:14:02.643542 containerd[1453]: 2025-07-11 00:14:02.553 [INFO][4488] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Jul 11 00:14:02.643542 containerd[1453]: 2025-07-11 00:14:02.553 [INFO][4488] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" iface="eth0" netns="/var/run/netns/cni-9f36d949-3d00-73dd-cf13-89a738175058" Jul 11 00:14:02.643542 containerd[1453]: 2025-07-11 00:14:02.553 [INFO][4488] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" iface="eth0" netns="/var/run/netns/cni-9f36d949-3d00-73dd-cf13-89a738175058" Jul 11 00:14:02.643542 containerd[1453]: 2025-07-11 00:14:02.558 [INFO][4488] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" iface="eth0" netns="/var/run/netns/cni-9f36d949-3d00-73dd-cf13-89a738175058" Jul 11 00:14:02.643542 containerd[1453]: 2025-07-11 00:14:02.558 [INFO][4488] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Jul 11 00:14:02.643542 containerd[1453]: 2025-07-11 00:14:02.558 [INFO][4488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Jul 11 00:14:02.643542 containerd[1453]: 2025-07-11 00:14:02.584 [INFO][4517] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" HandleID="k8s-pod-network.65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Workload="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:02.643542 containerd[1453]: 2025-07-11 00:14:02.584 [INFO][4517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:02.643542 containerd[1453]: 2025-07-11 00:14:02.627 [INFO][4517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:02.643542 containerd[1453]: 2025-07-11 00:14:02.634 [WARNING][4517] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" HandleID="k8s-pod-network.65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Workload="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:02.643542 containerd[1453]: 2025-07-11 00:14:02.634 [INFO][4517] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" HandleID="k8s-pod-network.65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Workload="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:02.643542 containerd[1453]: 2025-07-11 00:14:02.636 [INFO][4517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:02.643542 containerd[1453]: 2025-07-11 00:14:02.639 [INFO][4488] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Jul 11 00:14:02.643919 containerd[1453]: time="2025-07-11T00:14:02.643858835Z" level=info msg="TearDown network for sandbox \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\" successfully" Jul 11 00:14:02.643919 containerd[1453]: time="2025-07-11T00:14:02.643875777Z" level=info msg="StopPodSandbox for \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\" returns successfully" Jul 11 00:14:02.646412 systemd[1]: run-netns-cni\x2d9f36d949\x2d3d00\x2d73dd\x2dcf13\x2d89a738175058.mount: Deactivated successfully. Jul 11 00:14:02.647718 containerd[1453]: time="2025-07-11T00:14:02.647678098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67b7cfcdf9-djn6p,Uid:2b528b1a-2869-4a1f-9208-de9d96c1a0aa,Namespace:calico-system,Attempt:1,}" Jul 11 00:14:02.674833 containerd[1453]: 2025-07-11 00:14:02.624 [INFO][4486] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Jul 11 00:14:02.674833 containerd[1453]: 2025-07-11 00:14:02.626 [INFO][4486] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" iface="eth0" netns="/var/run/netns/cni-21051a75-de42-a6bc-9937-d95b17fb5603" Jul 11 00:14:02.674833 containerd[1453]: 2025-07-11 00:14:02.627 [INFO][4486] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" iface="eth0" netns="/var/run/netns/cni-21051a75-de42-a6bc-9937-d95b17fb5603" Jul 11 00:14:02.674833 containerd[1453]: 2025-07-11 00:14:02.627 [INFO][4486] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" iface="eth0" netns="/var/run/netns/cni-21051a75-de42-a6bc-9937-d95b17fb5603" Jul 11 00:14:02.674833 containerd[1453]: 2025-07-11 00:14:02.627 [INFO][4486] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Jul 11 00:14:02.674833 containerd[1453]: 2025-07-11 00:14:02.628 [INFO][4486] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Jul 11 00:14:02.674833 containerd[1453]: 2025-07-11 00:14:02.655 [INFO][4531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" HandleID="k8s-pod-network.37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Workload="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:02.674833 containerd[1453]: 2025-07-11 00:14:02.655 [INFO][4531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:02.674833 containerd[1453]: 2025-07-11 00:14:02.655 [INFO][4531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:02.674833 containerd[1453]: 2025-07-11 00:14:02.663 [WARNING][4531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" HandleID="k8s-pod-network.37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Workload="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:02.674833 containerd[1453]: 2025-07-11 00:14:02.663 [INFO][4531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" HandleID="k8s-pod-network.37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Workload="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:02.674833 containerd[1453]: 2025-07-11 00:14:02.665 [INFO][4531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:02.674833 containerd[1453]: 2025-07-11 00:14:02.669 [INFO][4486] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Jul 11 00:14:02.677329 containerd[1453]: time="2025-07-11T00:14:02.677281933Z" level=info msg="TearDown network for sandbox \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\" successfully" Jul 11 00:14:02.677329 containerd[1453]: time="2025-07-11T00:14:02.677321668Z" level=info msg="StopPodSandbox for \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\" returns successfully" Jul 11 00:14:02.677836 kubelet[2482]: E0711 00:14:02.677800 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:14:02.680239 systemd[1]: run-netns-cni\x2d21051a75\x2dde42\x2da6bc\x2d9937\x2dd95b17fb5603.mount: Deactivated successfully. Jul 11 00:14:02.682138 containerd[1453]: time="2025-07-11T00:14:02.682098388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-26c2d,Uid:f72dd124-c970-4b6b-a074-81167ad3af44,Namespace:kube-system,Attempt:1,}" Jul 11 00:14:02.715295 containerd[1453]: time="2025-07-11T00:14:02.715221182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:02.912648 containerd[1453]: time="2025-07-11T00:14:02.912472512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 11 00:14:02.913905 systemd-networkd[1388]: cali715f07f8fff: Gained IPv6LL Jul 11 00:14:02.924480 containerd[1453]: time="2025-07-11T00:14:02.924415309Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:02.933029 containerd[1453]: time="2025-07-11T00:14:02.932590453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:02.934738 containerd[1453]: time="2025-07-11T00:14:02.933912605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.123024318s" Jul 11 00:14:02.934738 containerd[1453]: time="2025-07-11T00:14:02.934549812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 11 00:14:02.937784 containerd[1453]: time="2025-07-11T00:14:02.937745323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:14:02.938973 containerd[1453]: time="2025-07-11T00:14:02.938719092Z" level=info msg="CreateContainer within sandbox \"98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 00:14:03.009958 systemd-networkd[1388]: cali7de7e3e1d9b: Link UP Jul 11 00:14:03.010587 systemd-networkd[1388]: cali7de7e3e1d9b: Gained carrier Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.721 [INFO][4541] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0 goldmane-768f4c5c69- calico-system 09f6ed65-1087-4a6c-9f3f-47a108556bd1 1057 0 2025-07-11 00:13:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-kqs4z eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7de7e3e1d9b [] [] }} ContainerID="9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" Namespace="calico-system" Pod="goldmane-768f4c5c69-kqs4z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kqs4z-" Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.722 [INFO][4541] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" Namespace="calico-system" Pod="goldmane-768f4c5c69-kqs4z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.812 [INFO][4569] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" HandleID="k8s-pod-network.9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" Workload="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.812 [INFO][4569] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" HandleID="k8s-pod-network.9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" Workload="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7170), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-kqs4z", "timestamp":"2025-07-11 00:14:02.812314379 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.812 [INFO][4569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.812 [INFO][4569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.812 [INFO][4569] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.918 [INFO][4569] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" host="localhost" Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.923 [INFO][4569] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.927 [INFO][4569] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.938 [INFO][4569] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.946 [INFO][4569] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.946 [INFO][4569] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" host="localhost" Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.961 [INFO][4569] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.975 [INFO][4569] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" host="localhost" Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.991 [INFO][4569] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" host="localhost" Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.991 [INFO][4569] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" host="localhost" Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.991 [INFO][4569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
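Here the goldmane-768f4c5c69-kqs4z pod receives 192.168.88.132/26 from the same affine block that served the apiserver pod, again strictly between the lock-acquire and lock-release entries. A /26 such as 192.168.88.128/26 covers the 64 addresses .128 through .191; a small, purely illustrative sketch with the standard net/netip package that checks the claimed addresses against the block:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // the host's affine block
	for _, s := range []string{"192.168.88.131", "192.168.88.132"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip)) // both print true
	}
}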
Jul 11 00:14:03.035700 containerd[1453]: 2025-07-11 00:14:02.991 [INFO][4569] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" HandleID="k8s-pod-network.9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" Workload="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:03.036790 containerd[1453]: 2025-07-11 00:14:03.005 [INFO][4541] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" Namespace="calico-system" Pod="goldmane-768f4c5c69-kqs4z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"09f6ed65-1087-4a6c-9f3f-47a108556bd1", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-kqs4z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7de7e3e1d9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:03.036790 containerd[1453]: 2025-07-11 00:14:03.006 [INFO][4541] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" Namespace="calico-system" Pod="goldmane-768f4c5c69-kqs4z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:03.036790 containerd[1453]: 2025-07-11 00:14:03.006 [INFO][4541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7de7e3e1d9b ContainerID="9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" Namespace="calico-system" Pod="goldmane-768f4c5c69-kqs4z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:03.036790 containerd[1453]: 2025-07-11 00:14:03.009 [INFO][4541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" Namespace="calico-system" Pod="goldmane-768f4c5c69-kqs4z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:03.036790 containerd[1453]: 2025-07-11 00:14:03.009 [INFO][4541] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" Namespace="calico-system" Pod="goldmane-768f4c5c69-kqs4z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"09f6ed65-1087-4a6c-9f3f-47a108556bd1", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab", Pod:"goldmane-768f4c5c69-kqs4z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7de7e3e1d9b", MAC:"8a:39:2e:6b:77:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:03.036790 containerd[1453]: 2025-07-11 00:14:03.029 [INFO][4541] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab" Namespace="calico-system" Pod="goldmane-768f4c5c69-kqs4z" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:03.050440 containerd[1453]: time="2025-07-11T00:14:03.050364950Z" level=info msg="CreateContainer within sandbox \"98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bb0631321001b13e5b6bd3e5710c286a3ac5d0fc0b7bd3076fb51abd833bc8b8\"" Jul 11 00:14:03.052780 containerd[1453]: time="2025-07-11T00:14:03.051585933Z" level=info msg="StartContainer for \"bb0631321001b13e5b6bd3e5710c286a3ac5d0fc0b7bd3076fb51abd833bc8b8\"" Jul 11 00:14:03.080852 containerd[1453]: time="2025-07-11T00:14:03.078472411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:14:03.080852 containerd[1453]: time="2025-07-11T00:14:03.078581586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:14:03.080852 containerd[1453]: time="2025-07-11T00:14:03.078596964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:14:03.081364 containerd[1453]: time="2025-07-11T00:14:03.081220110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:14:03.121071 systemd[1]: Started cri-containerd-bb0631321001b13e5b6bd3e5710c286a3ac5d0fc0b7bd3076fb51abd833bc8b8.scope - libcontainer container bb0631321001b13e5b6bd3e5710c286a3ac5d0fc0b7bd3076fb51abd833bc8b8. 
Jul 11 00:14:03.126583 systemd[1]: Started cri-containerd-9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab.scope - libcontainer container 9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab. Jul 11 00:14:03.132222 systemd-networkd[1388]: calia9b6101d2fc: Link UP Jul 11 00:14:03.132544 systemd-networkd[1388]: calia9b6101d2fc: Gained carrier Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:02.789 [INFO][4553] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0 calico-kube-controllers-67b7cfcdf9- calico-system 2b528b1a-2869-4a1f-9208-de9d96c1a0aa 1058 0 2025-07-11 00:13:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67b7cfcdf9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67b7cfcdf9-djn6p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia9b6101d2fc [] [] }} ContainerID="4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" Namespace="calico-system" Pod="calico-kube-controllers-67b7cfcdf9-djn6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-" Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:02.790 [INFO][4553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" Namespace="calico-system" Pod="calico-kube-controllers-67b7cfcdf9-djn6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:02.816 [INFO][4575] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" HandleID="k8s-pod-network.4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" Workload="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:02.816 [INFO][4575] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" HandleID="k8s-pod-network.4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" Workload="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f3f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67b7cfcdf9-djn6p", "timestamp":"2025-07-11 00:14:02.816414138 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:02.816 [INFO][4575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:02.992 [INFO][4575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:02.992 [INFO][4575] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:03.027 [INFO][4575] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" host="localhost" Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:03.044 [INFO][4575] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:03.056 [INFO][4575] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:03.062 [INFO][4575] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:03.073 [INFO][4575] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:03.075 [INFO][4575] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" host="localhost" Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:03.079 [INFO][4575] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066 Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:03.101 [INFO][4575] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" host="localhost" Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:03.117 [INFO][4575] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" host="localhost" Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:03.117 [INFO][4575] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" host="localhost" Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:03.119 [INFO][4575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
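At this point three CNI ADDs are in flight concurrently (IPAM handles [4569] for goldmane, [4575] for calico-kube-controllers, [4605] for coredns), yet every assignment sits between an "Acquired host-wide IPAM lock" and a "Released host-wide IPAM lock" entry, which is why 192.168.88.132, .133 and .134 come out one at a time. A toy Go sketch of that serialization pattern, assuming a simple next-free-ordinal policy (Calico's real allocator works against its datastore and need not assign sequentially):

package main

import (
	"fmt"
	"sync"
)

// blockAllocator hands out addresses from a /26 under one mutex,
// mimicking how the host-wide IPAM lock serializes concurrent CNI ADDs.
type blockAllocator struct {
	mu   sync.Mutex
	next int // next free host ordinal within the block
}

func (a *blockAllocator) assign() string {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.88.%d", 128+a.next)
	a.next++
	return ip
}

func main() {
	alloc := &blockAllocator{next: 4} // .128-.131 are already in use in this log
	var wg sync.WaitGroup
	for _, pod := range []string{"goldmane", "calico-kube-controllers", "coredns"} {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			fmt.Println(pod, "->", alloc.assign()) // each pod gets a unique address
		}(pod)
	}
	wg.Wait()
}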
Jul 11 00:14:03.156663 containerd[1453]: 2025-07-11 00:14:03.119 [INFO][4575] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" HandleID="k8s-pod-network.4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" Workload="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:03.157496 containerd[1453]: 2025-07-11 00:14:03.123 [INFO][4553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" Namespace="calico-system" Pod="calico-kube-controllers-67b7cfcdf9-djn6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0", GenerateName:"calico-kube-controllers-67b7cfcdf9-", Namespace:"calico-system", SelfLink:"", UID:"2b528b1a-2869-4a1f-9208-de9d96c1a0aa", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67b7cfcdf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67b7cfcdf9-djn6p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9b6101d2fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:03.157496 containerd[1453]: 2025-07-11 00:14:03.124 [INFO][4553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" Namespace="calico-system" Pod="calico-kube-controllers-67b7cfcdf9-djn6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:03.157496 containerd[1453]: 2025-07-11 00:14:03.124 [INFO][4553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9b6101d2fc ContainerID="4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" Namespace="calico-system" Pod="calico-kube-controllers-67b7cfcdf9-djn6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:03.157496 containerd[1453]: 2025-07-11 00:14:03.138 [INFO][4553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" Namespace="calico-system" Pod="calico-kube-controllers-67b7cfcdf9-djn6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:03.157496 containerd[1453]: 2025-07-11 00:14:03.139 [INFO][4553] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" Namespace="calico-system" Pod="calico-kube-controllers-67b7cfcdf9-djn6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0", GenerateName:"calico-kube-controllers-67b7cfcdf9-", Namespace:"calico-system", SelfLink:"", UID:"2b528b1a-2869-4a1f-9208-de9d96c1a0aa", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67b7cfcdf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066", Pod:"calico-kube-controllers-67b7cfcdf9-djn6p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9b6101d2fc", MAC:"b6:5b:e9:0d:74:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:03.157496 containerd[1453]: 2025-07-11 00:14:03.152 [INFO][4553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066" Namespace="calico-system" Pod="calico-kube-controllers-67b7cfcdf9-djn6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:03.175825 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:14:03.195597 kubelet[2482]: I0711 00:14:03.195266 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:14:03.213316 containerd[1453]: time="2025-07-11T00:14:03.213254043Z" level=info msg="StartContainer for \"bb0631321001b13e5b6bd3e5710c286a3ac5d0fc0b7bd3076fb51abd833bc8b8\" returns successfully" Jul 11 00:14:03.213530 containerd[1453]: time="2025-07-11T00:14:03.213510204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-kqs4z,Uid:09f6ed65-1087-4a6c-9f3f-47a108556bd1,Namespace:calico-system,Attempt:1,} returns sandbox id \"9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab\"" Jul 11 00:14:03.231034 containerd[1453]: time="2025-07-11T00:14:03.229857301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:14:03.231034 containerd[1453]: time="2025-07-11T00:14:03.229921141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:14:03.231034 containerd[1453]: time="2025-07-11T00:14:03.229935077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:14:03.231034 containerd[1453]: time="2025-07-11T00:14:03.230014846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:14:03.233674 systemd-networkd[1388]: cali25a2a249da7: Link UP Jul 11 00:14:03.234799 systemd-networkd[1388]: cali25a2a249da7: Gained carrier Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.024 [INFO][4585] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--26c2d-eth0 coredns-668d6bf9bc- kube-system f72dd124-c970-4b6b-a074-81167ad3af44 1059 0 2025-07-11 00:13:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-26c2d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali25a2a249da7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" Namespace="kube-system" Pod="coredns-668d6bf9bc-26c2d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--26c2d-" Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.025 [INFO][4585] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" Namespace="kube-system" Pod="coredns-668d6bf9bc-26c2d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.073 [INFO][4605] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" HandleID="k8s-pod-network.63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" Workload="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.073 [INFO][4605] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" HandleID="k8s-pod-network.63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" Workload="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f450), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-26c2d", "timestamp":"2025-07-11 00:14:03.073454487 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.073 [INFO][4605] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.119 [INFO][4605] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.119 [INFO][4605] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.131 [INFO][4605] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" host="localhost" Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.138 [INFO][4605] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.155 [INFO][4605] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.159 [INFO][4605] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.162 [INFO][4605] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.162 [INFO][4605] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" host="localhost" Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.164 [INFO][4605] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.171 [INFO][4605] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" host="localhost" Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.220 [INFO][4605] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" host="localhost" Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.220 [INFO][4605] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" host="localhost" Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.220 [INFO][4605] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
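The coredns WorkloadEndpoint dumped below differs from the earlier ones in carrying named ports, which the Go struct printer renders in hex: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the coredns metrics port). A trivial check:

package main

import "fmt"

func main() {
	// The WorkloadEndpoint dump prints port numbers as hex struct fields.
	ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Printf("%s -> %d\n", name, p) // dns 53, dns-tcp 53, metrics 9153
	}
}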
Jul 11 00:14:03.252374 containerd[1453]: 2025-07-11 00:14:03.220 [INFO][4605] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" HandleID="k8s-pod-network.63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" Workload="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:03.252952 containerd[1453]: 2025-07-11 00:14:03.224 [INFO][4585] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" Namespace="kube-system" Pod="coredns-668d6bf9bc-26c2d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--26c2d-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f72dd124-c970-4b6b-a074-81167ad3af44", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-26c2d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25a2a249da7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:03.252952 containerd[1453]: 2025-07-11 00:14:03.224 [INFO][4585] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" Namespace="kube-system" Pod="coredns-668d6bf9bc-26c2d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:03.252952 containerd[1453]: 2025-07-11 00:14:03.224 [INFO][4585] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25a2a249da7 ContainerID="63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" Namespace="kube-system" Pod="coredns-668d6bf9bc-26c2d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:03.252952 containerd[1453]: 2025-07-11 00:14:03.235 [INFO][4585] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" Namespace="kube-system" Pod="coredns-668d6bf9bc-26c2d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:03.252952 
containerd[1453]: 2025-07-11 00:14:03.235 [INFO][4585] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" Namespace="kube-system" Pod="coredns-668d6bf9bc-26c2d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--26c2d-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f72dd124-c970-4b6b-a074-81167ad3af44", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c", Pod:"coredns-668d6bf9bc-26c2d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25a2a249da7", MAC:"2e:94:d2:4e:56:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:14:03.252952 containerd[1453]: 2025-07-11 00:14:03.247 [INFO][4585] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c" Namespace="kube-system" Pod="coredns-668d6bf9bc-26c2d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0"
Jul 11 00:14:03.264484 systemd[1]: Started cri-containerd-4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066.scope - libcontainer container 4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066.
Jul 11 00:14:03.281065 containerd[1453]: time="2025-07-11T00:14:03.280834346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:14:03.281065 containerd[1453]: time="2025-07-11T00:14:03.280892825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:14:03.281065 containerd[1453]: time="2025-07-11T00:14:03.280907493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:14:03.281065 containerd[1453]: time="2025-07-11T00:14:03.281004204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:14:03.290233 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 00:14:03.305575 systemd[1]: Started cri-containerd-63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c.scope - libcontainer container 63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c.
Jul 11 00:14:03.325816 containerd[1453]: time="2025-07-11T00:14:03.325726784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67b7cfcdf9-djn6p,Uid:2b528b1a-2869-4a1f-9208-de9d96c1a0aa,Namespace:calico-system,Attempt:1,} returns sandbox id \"4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066\""
Jul 11 00:14:03.326062 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 00:14:03.353645 containerd[1453]: time="2025-07-11T00:14:03.353589344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-26c2d,Uid:f72dd124-c970-4b6b-a074-81167ad3af44,Namespace:kube-system,Attempt:1,} returns sandbox id \"63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c\""
Jul 11 00:14:03.354651 kubelet[2482]: E0711 00:14:03.354418 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:14:03.357140 containerd[1453]: time="2025-07-11T00:14:03.357077104Z" level=info msg="CreateContainer within sandbox \"63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 00:14:03.396969 containerd[1453]: time="2025-07-11T00:14:03.396902447Z" level=info msg="CreateContainer within sandbox \"63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0f0ed75463f75d475b4ee0d4c231b5dd3d4fa0aeb6bdab050a22274922754106\""
Jul 11 00:14:03.398678 containerd[1453]: time="2025-07-11T00:14:03.398453600Z" level=info msg="StartContainer for \"0f0ed75463f75d475b4ee0d4c231b5dd3d4fa0aeb6bdab050a22274922754106\""
Jul 11 00:14:03.434542 systemd[1]: Started cri-containerd-0f0ed75463f75d475b4ee0d4c231b5dd3d4fa0aeb6bdab050a22274922754106.scope - libcontainer container 0f0ed75463f75d475b4ee0d4c231b5dd3d4fa0aeb6bdab050a22274922754106.
Jul 11 00:14:03.458588 containerd[1453]: time="2025-07-11T00:14:03.458498933Z" level=info msg="StopPodSandbox for \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\""
Jul 11 00:14:03.485433 containerd[1453]: time="2025-07-11T00:14:03.485359122Z" level=info msg="StartContainer for \"0f0ed75463f75d475b4ee0d4c231b5dd3d4fa0aeb6bdab050a22274922754106\" returns successfully"
Jul 11 00:14:03.542409 containerd[1453]: 2025-07-11 00:14:03.506 [INFO][4881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45"
Jul 11 00:14:03.542409 containerd[1453]: 2025-07-11 00:14:03.507 [INFO][4881] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" iface="eth0" netns="/var/run/netns/cni-677cf03b-98c7-5e5f-6465-1db27fd2f943"
Jul 11 00:14:03.542409 containerd[1453]: 2025-07-11 00:14:03.509 [INFO][4881] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" iface="eth0" netns="/var/run/netns/cni-677cf03b-98c7-5e5f-6465-1db27fd2f943"
Jul 11 00:14:03.542409 containerd[1453]: 2025-07-11 00:14:03.509 [INFO][4881] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" iface="eth0" netns="/var/run/netns/cni-677cf03b-98c7-5e5f-6465-1db27fd2f943"
Jul 11 00:14:03.542409 containerd[1453]: 2025-07-11 00:14:03.509 [INFO][4881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45"
Jul 11 00:14:03.542409 containerd[1453]: 2025-07-11 00:14:03.509 [INFO][4881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45"
Jul 11 00:14:03.542409 containerd[1453]: 2025-07-11 00:14:03.530 [INFO][4899] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" HandleID="k8s-pod-network.9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0"
Jul 11 00:14:03.542409 containerd[1453]: 2025-07-11 00:14:03.530 [INFO][4899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:14:03.542409 containerd[1453]: 2025-07-11 00:14:03.530 [INFO][4899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:14:03.542409 containerd[1453]: 2025-07-11 00:14:03.535 [WARNING][4899] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" HandleID="k8s-pod-network.9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0"
Jul 11 00:14:03.542409 containerd[1453]: 2025-07-11 00:14:03.535 [INFO][4899] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" HandleID="k8s-pod-network.9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0"
Jul 11 00:14:03.542409 containerd[1453]: 2025-07-11 00:14:03.537 [INFO][4899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:14:03.542409 containerd[1453]: 2025-07-11 00:14:03.539 [INFO][4881] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45"
Jul 11 00:14:03.543184 containerd[1453]: time="2025-07-11T00:14:03.543143008Z" level=info msg="TearDown network for sandbox \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\" successfully"
Jul 11 00:14:03.543184 containerd[1453]: time="2025-07-11T00:14:03.543176040Z" level=info msg="StopPodSandbox for \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\" returns successfully"
Jul 11 00:14:03.543966 containerd[1453]: time="2025-07-11T00:14:03.543920047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5f5d84fb-jgmkp,Uid:b110a319-ec46-48e9-afde-f14e51fe5798,Namespace:calico-apiserver,Attempt:1,}"
Jul 11 00:14:03.677106 systemd-networkd[1388]: caliea1627560b9: Link UP
Jul 11 00:14:03.681427 systemd-networkd[1388]: caliea1627560b9: Gained carrier
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.607 [INFO][4911] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0 calico-apiserver-5b5f5d84fb- calico-apiserver b110a319-ec46-48e9-afde-f14e51fe5798 1091 0 2025-07-11 00:13:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b5f5d84fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b5f5d84fb-jgmkp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliea1627560b9 [] [] }} ContainerID="da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-jgmkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-"
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.607 [INFO][4911] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-jgmkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0"
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.633 [INFO][4925] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" HandleID="k8s-pod-network.da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0"
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.633 [INFO][4925] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" HandleID="k8s-pod-network.da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001394f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b5f5d84fb-jgmkp", "timestamp":"2025-07-11 00:14:03.633559131 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.633 [INFO][4925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.633 [INFO][4925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.634 [INFO][4925] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.640 [INFO][4925] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" host="localhost"
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.647 [INFO][4925] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.651 [INFO][4925] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.652 [INFO][4925] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.655 [INFO][4925] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.655 [INFO][4925] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" host="localhost"
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.656 [INFO][4925] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.661 [INFO][4925] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" host="localhost"
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.669 [INFO][4925] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" host="localhost"
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.669 [INFO][4925] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" host="localhost"
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.669 [INFO][4925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:14:03.701338 containerd[1453]: 2025-07-11 00:14:03.669 [INFO][4925] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" HandleID="k8s-pod-network.da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0"
Jul 11 00:14:03.702197 containerd[1453]: 2025-07-11 00:14:03.673 [INFO][4911] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-jgmkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0", GenerateName:"calico-apiserver-5b5f5d84fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b110a319-ec46-48e9-afde-f14e51fe5798", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5f5d84fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b5f5d84fb-jgmkp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea1627560b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:14:03.702197 containerd[1453]: 2025-07-11 00:14:03.673 [INFO][4911] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-jgmkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0"
Jul 11 00:14:03.702197 containerd[1453]: 2025-07-11 00:14:03.673 [INFO][4911] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea1627560b9 ContainerID="da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-jgmkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0"
Jul 11 00:14:03.702197 containerd[1453]: 2025-07-11 00:14:03.681 [INFO][4911] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-jgmkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0"
Jul 11 00:14:03.702197 containerd[1453]: 2025-07-11 00:14:03.684 [INFO][4911] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-jgmkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0", GenerateName:"calico-apiserver-5b5f5d84fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b110a319-ec46-48e9-afde-f14e51fe5798", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5f5d84fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2", Pod:"calico-apiserver-5b5f5d84fb-jgmkp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea1627560b9", MAC:"e2:f1:51:71:3c:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:14:03.702197 containerd[1453]: 2025-07-11 00:14:03.696 [INFO][4911] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2" Namespace="calico-apiserver" Pod="calico-apiserver-5b5f5d84fb-jgmkp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0"
Jul 11 00:14:03.723674 containerd[1453]: time="2025-07-11T00:14:03.723531982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:14:03.723674 containerd[1453]: time="2025-07-11T00:14:03.723601913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:14:03.723674 containerd[1453]: time="2025-07-11T00:14:03.723613665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:14:03.724107 containerd[1453]: time="2025-07-11T00:14:03.723727149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:14:03.744202 systemd[1]: Started cri-containerd-da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2.scope - libcontainer container da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2.
Jul 11 00:14:03.747754 kubelet[2482]: E0711 00:14:03.747722 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:14:03.768680 systemd[1]: run-netns-cni\x2d677cf03b\x2d98c7\x2d5e5f\x2d6465\x2d1db27fd2f943.mount: Deactivated successfully.
Jul 11 00:14:03.777064 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 00:14:03.807877 containerd[1453]: time="2025-07-11T00:14:03.807831761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5f5d84fb-jgmkp,Uid:b110a319-ec46-48e9-afde-f14e51fe5798,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2\""
Jul 11 00:14:04.459888 containerd[1453]: time="2025-07-11T00:14:04.459818530Z" level=info msg="StopPodSandbox for \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\""
Jul 11 00:14:04.515754 systemd-networkd[1388]: cali25a2a249da7: Gained IPv6LL
Jul 11 00:14:04.535027 kubelet[2482]: I0711 00:14:04.534939 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-26c2d" podStartSLOduration=39.534911087 podStartE2EDuration="39.534911087s" podCreationTimestamp="2025-07-11 00:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:14:03.77228433 +0000 UTC m=+45.441658237" watchObservedRunningTime="2025-07-11 00:14:04.534911087 +0000 UTC m=+46.204284994"
Jul 11 00:14:04.577374 containerd[1453]: 2025-07-11 00:14:04.533 [INFO][4999] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243"
Jul 11 00:14:04.577374 containerd[1453]: 2025-07-11 00:14:04.534 [INFO][4999] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" iface="eth0" netns="/var/run/netns/cni-4c5fe6e6-137d-dbd4-f108-74fdf80595dd"
Jul 11 00:14:04.577374 containerd[1453]: 2025-07-11 00:14:04.534 [INFO][4999] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" iface="eth0" netns="/var/run/netns/cni-4c5fe6e6-137d-dbd4-f108-74fdf80595dd"
Jul 11 00:14:04.577374 containerd[1453]: 2025-07-11 00:14:04.534 [INFO][4999] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" iface="eth0" netns="/var/run/netns/cni-4c5fe6e6-137d-dbd4-f108-74fdf80595dd"
Jul 11 00:14:04.577374 containerd[1453]: 2025-07-11 00:14:04.534 [INFO][4999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243"
Jul 11 00:14:04.577374 containerd[1453]: 2025-07-11 00:14:04.534 [INFO][4999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243"
Jul 11 00:14:04.577374 containerd[1453]: 2025-07-11 00:14:04.560 [INFO][5007] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" HandleID="k8s-pod-network.993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Workload="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0"
Jul 11 00:14:04.577374 containerd[1453]: 2025-07-11 00:14:04.560 [INFO][5007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:14:04.577374 containerd[1453]: 2025-07-11 00:14:04.560 [INFO][5007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:14:04.577374 containerd[1453]: 2025-07-11 00:14:04.568 [WARNING][5007] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" HandleID="k8s-pod-network.993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Workload="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0"
Jul 11 00:14:04.577374 containerd[1453]: 2025-07-11 00:14:04.569 [INFO][5007] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" HandleID="k8s-pod-network.993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Workload="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0"
Jul 11 00:14:04.577374 containerd[1453]: 2025-07-11 00:14:04.570 [INFO][5007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:14:04.577374 containerd[1453]: 2025-07-11 00:14:04.573 [INFO][4999] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243"
Jul 11 00:14:04.578157 containerd[1453]: time="2025-07-11T00:14:04.578106602Z" level=info msg="TearDown network for sandbox \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\" successfully"
Jul 11 00:14:04.578157 containerd[1453]: time="2025-07-11T00:14:04.578144585Z" level=info msg="StopPodSandbox for \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\" returns successfully"
Jul 11 00:14:04.578653 kubelet[2482]: E0711 00:14:04.578621 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:14:04.579807 containerd[1453]: time="2025-07-11T00:14:04.579174478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-csf5s,Uid:dc370272-6e6b-40f4-bdc3-9aff809787e2,Namespace:kube-system,Attempt:1,}"
Jul 11 00:14:04.580041 systemd-networkd[1388]: cali7de7e3e1d9b: Gained IPv6LL
Jul 11 00:14:04.582506 systemd[1]: run-netns-cni\x2d4c5fe6e6\x2d137d\x2ddbd4\x2df108\x2d74fdf80595dd.mount: Deactivated successfully.
Jul 11 00:14:04.752081 kubelet[2482]: E0711 00:14:04.751852 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:14:04.771217 systemd-networkd[1388]: calia9b6101d2fc: Gained IPv6LL
Jul 11 00:14:04.898952 systemd-networkd[1388]: caliea1627560b9: Gained IPv6LL
Jul 11 00:14:04.990561 systemd-networkd[1388]: cali6a67c0d1d65: Link UP
Jul 11 00:14:04.991859 systemd-networkd[1388]: cali6a67c0d1d65: Gained carrier
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.761 [INFO][5021] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--csf5s-eth0 coredns-668d6bf9bc- kube-system dc370272-6e6b-40f4-bdc3-9aff809787e2 1104 0 2025-07-11 00:13:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-csf5s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6a67c0d1d65 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" Namespace="kube-system" Pod="coredns-668d6bf9bc-csf5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--csf5s-"
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.761 [INFO][5021] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" Namespace="kube-system" Pod="coredns-668d6bf9bc-csf5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0"
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.812 [INFO][5035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" HandleID="k8s-pod-network.6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" Workload="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0"
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.813 [INFO][5035] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" HandleID="k8s-pod-network.6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" Workload="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-csf5s", "timestamp":"2025-07-11 00:14:04.812911043 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.813 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.813 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.813 [INFO][5035] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.820 [INFO][5035] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" host="localhost"
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.825 [INFO][5035] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.829 [INFO][5035] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.830 [INFO][5035] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.835 [INFO][5035] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.836 [INFO][5035] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" host="localhost"
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.839 [INFO][5035] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.975 [INFO][5035] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" host="localhost"
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.982 [INFO][5035] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" host="localhost"
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.982 [INFO][5035] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" host="localhost"
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.982 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:14:05.011881 containerd[1453]: 2025-07-11 00:14:04.982 [INFO][5035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" HandleID="k8s-pod-network.6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" Workload="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0"
Jul 11 00:14:05.013373 containerd[1453]: 2025-07-11 00:14:04.986 [INFO][5021] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" Namespace="kube-system" Pod="coredns-668d6bf9bc-csf5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--csf5s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dc370272-6e6b-40f4-bdc3-9aff809787e2", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-csf5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a67c0d1d65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:14:05.013373 containerd[1453]: 2025-07-11 00:14:04.987 [INFO][5021] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" Namespace="kube-system" Pod="coredns-668d6bf9bc-csf5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0"
Jul 11 00:14:05.013373 containerd[1453]: 2025-07-11 00:14:04.987 [INFO][5021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a67c0d1d65 ContainerID="6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" Namespace="kube-system" Pod="coredns-668d6bf9bc-csf5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0"
Jul 11 00:14:05.013373 containerd[1453]: 2025-07-11 00:14:04.992 [INFO][5021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" Namespace="kube-system" Pod="coredns-668d6bf9bc-csf5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0"
Jul 11 00:14:05.013373 containerd[1453]: 2025-07-11 00:14:04.992 [INFO][5021] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" Namespace="kube-system" Pod="coredns-668d6bf9bc-csf5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--csf5s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dc370272-6e6b-40f4-bdc3-9aff809787e2", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573", Pod:"coredns-668d6bf9bc-csf5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a67c0d1d65", MAC:"36:3f:a5:aa:47:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:14:05.013373 containerd[1453]: 2025-07-11 00:14:05.004 [INFO][5021] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573" Namespace="kube-system" Pod="coredns-668d6bf9bc-csf5s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0"
Jul 11 00:14:05.092371 containerd[1453]: time="2025-07-11T00:14:05.092109916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:14:05.092371 containerd[1453]: time="2025-07-11T00:14:05.092178835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:14:05.092371 containerd[1453]: time="2025-07-11T00:14:05.092192801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:14:05.092627 containerd[1453]: time="2025-07-11T00:14:05.092321783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:14:05.128094 systemd[1]: Started cri-containerd-6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573.scope - libcontainer container 6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573.
Jul 11 00:14:05.145407 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 00:14:05.174559 containerd[1453]: time="2025-07-11T00:14:05.174493331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-csf5s,Uid:dc370272-6e6b-40f4-bdc3-9aff809787e2,Namespace:kube-system,Attempt:1,} returns sandbox id \"6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573\""
Jul 11 00:14:05.175619 kubelet[2482]: E0711 00:14:05.175579 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:14:05.179432 containerd[1453]: time="2025-07-11T00:14:05.179388182Z" level=info msg="CreateContainer within sandbox \"6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 00:14:05.353593 containerd[1453]: time="2025-07-11T00:14:05.353409737Z" level=info msg="CreateContainer within sandbox \"6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6d9eab6a9b9ffb4bb000a46863e16ade98d50eb6a10eeef4a055a1de95d9cb5\""
Jul 11 00:14:05.354552 containerd[1453]: time="2025-07-11T00:14:05.354498792Z" level=info msg="StartContainer for \"b6d9eab6a9b9ffb4bb000a46863e16ade98d50eb6a10eeef4a055a1de95d9cb5\""
Jul 11 00:14:05.357437 containerd[1453]: time="2025-07-11T00:14:05.357393086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:14:05.358971 containerd[1453]: time="2025-07-11T00:14:05.358916776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977"
Jul 11 00:14:05.362289 containerd[1453]: time="2025-07-11T00:14:05.362241519Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:14:05.366038 containerd[1453]: time="2025-07-11T00:14:05.365944231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:14:05.366892 containerd[1453]: time="2025-07-11T00:14:05.366844471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.42894045s"
Jul 11 00:14:05.367395 containerd[1453]: time="2025-07-11T00:14:05.367375739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 11 00:14:05.375058 containerd[1453]: time="2025-07-11T00:14:05.375015822Z" level=info msg="CreateContainer within sandbox \"03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 11 00:14:05.375690 containerd[1453]: time="2025-07-11T00:14:05.375336074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 11 00:14:05.395995 systemd[1]: Started cri-containerd-b6d9eab6a9b9ffb4bb000a46863e16ade98d50eb6a10eeef4a055a1de95d9cb5.scope - libcontainer container b6d9eab6a9b9ffb4bb000a46863e16ade98d50eb6a10eeef4a055a1de95d9cb5.
Jul 11 00:14:05.405085 containerd[1453]: time="2025-07-11T00:14:05.405028141Z" level=info msg="CreateContainer within sandbox \"03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6afc5d37af6c2c0c57ee13f05f06478cfe1bd72fbc1188bdaf3c2fbdf0a5871b\""
Jul 11 00:14:05.406322 containerd[1453]: time="2025-07-11T00:14:05.405931567Z" level=info msg="StartContainer for \"6afc5d37af6c2c0c57ee13f05f06478cfe1bd72fbc1188bdaf3c2fbdf0a5871b\""
Jul 11 00:14:05.435282 containerd[1453]: time="2025-07-11T00:14:05.435238611Z" level=info msg="StartContainer for \"b6d9eab6a9b9ffb4bb000a46863e16ade98d50eb6a10eeef4a055a1de95d9cb5\" returns successfully"
Jul 11 00:14:05.447004 systemd[1]: Started cri-containerd-6afc5d37af6c2c0c57ee13f05f06478cfe1bd72fbc1188bdaf3c2fbdf0a5871b.scope - libcontainer container 6afc5d37af6c2c0c57ee13f05f06478cfe1bd72fbc1188bdaf3c2fbdf0a5871b.
Jul 11 00:14:05.512155 containerd[1453]: time="2025-07-11T00:14:05.512100670Z" level=info msg="StartContainer for \"6afc5d37af6c2c0c57ee13f05f06478cfe1bd72fbc1188bdaf3c2fbdf0a5871b\" returns successfully"
Jul 11 00:14:05.761427 kubelet[2482]: E0711 00:14:05.758854 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:14:05.761427 kubelet[2482]: E0711 00:14:05.759267 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:14:05.824707 kubelet[2482]: I0711 00:14:05.824643 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b5f5d84fb-77rqg" podStartSLOduration=27.231176619 podStartE2EDuration="30.82462243s" podCreationTimestamp="2025-07-11 00:13:35 +0000 UTC" firstStartedPulling="2025-07-11 00:14:01.779584654 +0000 UTC m=+43.448958561" lastFinishedPulling="2025-07-11 00:14:05.373030465 +0000 UTC m=+47.042404372" observedRunningTime="2025-07-11 00:14:05.823563862 +0000 UTC m=+47.492937769" watchObservedRunningTime="2025-07-11 00:14:05.82462243 +0000 UTC m=+47.493996338"
Jul 11 00:14:06.177934 systemd-networkd[1388]: cali6a67c0d1d65: Gained IPv6LL
Jul 11 00:14:06.385978 systemd[1]: Started sshd@9-10.0.0.36:22-10.0.0.1:36996.service - OpenSSH per-connection server daemon (10.0.0.1:36996).
Jul 11 00:14:06.454431 sshd[5195]: Accepted publickey for core from 10.0.0.1 port 36996 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:14:06.456603 sshd[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:14:06.464846 systemd-logind[1439]: New session 10 of user core.
Jul 11 00:14:06.472931 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 11 00:14:06.614843 sshd[5195]: pam_unix(sshd:session): session closed for user core
Jul 11 00:14:06.620592 systemd[1]: sshd@9-10.0.0.36:22-10.0.0.1:36996.service: Deactivated successfully.
Jul 11 00:14:06.623712 systemd[1]: session-10.scope: Deactivated successfully.
Jul 11 00:14:06.624573 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit.
Jul 11 00:14:06.625804 systemd-logind[1439]: Removed session 10.
Jul 11 00:14:06.761312 kubelet[2482]: I0711 00:14:06.761203 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 11 00:14:06.761628 kubelet[2482]: E0711 00:14:06.761598 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:14:06.761979 kubelet[2482]: E0711 00:14:06.761754 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:14:07.320435 containerd[1453]: time="2025-07-11T00:14:07.320363751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:14:07.321182 containerd[1453]: time="2025-07-11T00:14:07.321126283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 11 00:14:07.323413 containerd[1453]: time="2025-07-11T00:14:07.323373922Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:14:07.325578 containerd[1453]: time="2025-07-11T00:14:07.325550598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:14:07.326220 containerd[1453]: time="2025-07-11T00:14:07.326192513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.95028685s"
Jul 11 00:14:07.326268 containerd[1453]: time="2025-07-11T00:14:07.326223801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 11 00:14:07.327305 containerd[1453]: time="2025-07-11T00:14:07.327272831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 11 00:14:07.328305 containerd[1453]: time="2025-07-11T00:14:07.328280553Z" level=info msg="CreateContainer within sandbox \"98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 11 00:14:07.362159 containerd[1453]: time="2025-07-11T00:14:07.361880849Z" level=info msg="CreateContainer within sandbox \"98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8ea2150b1ce695d9979d2b71c2f5ba99c012e70b55b30fe8c535f766e65e69a0\""
Jul 11 00:14:07.362793 containerd[1453]: time="2025-07-11T00:14:07.362725825Z" level=info msg="StartContainer for \"8ea2150b1ce695d9979d2b71c2f5ba99c012e70b55b30fe8c535f766e65e69a0\""
Jul 11 00:14:07.402912 systemd[1]: Started cri-containerd-8ea2150b1ce695d9979d2b71c2f5ba99c012e70b55b30fe8c535f766e65e69a0.scope - libcontainer container 8ea2150b1ce695d9979d2b71c2f5ba99c012e70b55b30fe8c535f766e65e69a0.
Jul 11 00:14:07.436267 containerd[1453]: time="2025-07-11T00:14:07.436213108Z" level=info msg="StartContainer for \"8ea2150b1ce695d9979d2b71c2f5ba99c012e70b55b30fe8c535f766e65e69a0\" returns successfully"
Jul 11 00:14:07.529993 kubelet[2482]: I0711 00:14:07.529952 2482 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 11 00:14:07.529993 kubelet[2482]: I0711 00:14:07.529991 2482 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 11 00:14:07.765947 kubelet[2482]: E0711 00:14:07.765808 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:14:07.777743 kubelet[2482]: I0711 00:14:07.777673 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-csf5s" podStartSLOduration=42.777653194 podStartE2EDuration="42.777653194s" podCreationTimestamp="2025-07-11 00:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:14:05.85470505 +0000 UTC m=+47.524078957" watchObservedRunningTime="2025-07-11 00:14:07.777653194 +0000 UTC m=+49.447027101"
Jul 11 00:14:09.661628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1500429186.mount: Deactivated successfully.
Jul 11 00:14:11.301199 containerd[1453]: time="2025-07-11T00:14:11.301129598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:14:11.302248 containerd[1453]: time="2025-07-11T00:14:11.302199216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308"
Jul 11 00:14:11.308129 containerd[1453]: time="2025-07-11T00:14:11.308093148Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:14:11.311361 containerd[1453]: time="2025-07-11T00:14:11.311288915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:14:11.312048 containerd[1453]: time="2025-07-11T00:14:11.312014778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.984702022s"
Jul 11 00:14:11.312129 containerd[1453]: time="2025-07-11T00:14:11.312052508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\""
Jul 11 00:14:11.316120 containerd[1453]: time="2025-07-11T00:14:11.315844536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Jul 11 00:14:11.317522 containerd[1453]: time="2025-07-11T00:14:11.317487709Z" level=info msg="CreateContainer within sandbox \"9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 11 00:14:11.345630 containerd[1453]: time="2025-07-11T00:14:11.345586766Z" level=info msg="CreateContainer within sandbox \"9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e8e9978074f9b20e911ae5367c05b829197bc5731790f8c1f6995f0340e318f1\""
Jul 11 00:14:11.346373 containerd[1453]: time="2025-07-11T00:14:11.346333367Z" level=info msg="StartContainer for \"e8e9978074f9b20e911ae5367c05b829197bc5731790f8c1f6995f0340e318f1\""
Jul 11 00:14:11.394186 systemd[1]: Started cri-containerd-e8e9978074f9b20e911ae5367c05b829197bc5731790f8c1f6995f0340e318f1.scope - libcontainer container e8e9978074f9b20e911ae5367c05b829197bc5731790f8c1f6995f0340e318f1.
Jul 11 00:14:11.630788 systemd[1]: Started sshd@10-10.0.0.36:22-10.0.0.1:54300.service - OpenSSH per-connection server daemon (10.0.0.1:54300).
Jul 11 00:14:11.692960 containerd[1453]: time="2025-07-11T00:14:11.646904754Z" level=info msg="StartContainer for \"e8e9978074f9b20e911ae5367c05b829197bc5731790f8c1f6995f0340e318f1\" returns successfully"
Jul 11 00:14:11.693389 sshd[5308]: Accepted publickey for core from 10.0.0.1 port 54300 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:14:11.695586 sshd[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:14:11.699953 systemd-logind[1439]: New session 11 of user core.
Jul 11 00:14:11.711013 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 11 00:14:11.787058 kubelet[2482]: I0711 00:14:11.786991 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-kqs4z" podStartSLOduration=26.687300653 podStartE2EDuration="34.786972876s" podCreationTimestamp="2025-07-11 00:13:37 +0000 UTC" firstStartedPulling="2025-07-11 00:14:03.215983208 +0000 UTC m=+44.885357115" lastFinishedPulling="2025-07-11 00:14:11.315655431 +0000 UTC m=+52.985029338" observedRunningTime="2025-07-11 00:14:11.78659133 +0000 UTC m=+53.455965237" watchObservedRunningTime="2025-07-11 00:14:11.786972876 +0000 UTC m=+53.456346783"
Jul 11 00:14:11.789052 kubelet[2482]: I0711 00:14:11.789012 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cx6rf" podStartSLOduration=28.272229832 podStartE2EDuration="34.789000933s" podCreationTimestamp="2025-07-11 00:13:37 +0000 UTC" firstStartedPulling="2025-07-11 00:14:00.810331731 +0000 UTC m=+42.479705638" lastFinishedPulling="2025-07-11 00:14:07.327102832 +0000 UTC m=+48.996476739" observedRunningTime="2025-07-11 00:14:07.778421958 +0000 UTC m=+49.447795865" watchObservedRunningTime="2025-07-11 00:14:11.789000933 +0000 UTC m=+53.458374840"
Jul 11 00:14:11.870861 sshd[5308]: pam_unix(sshd:session): session closed for user core
Jul 11 00:14:11.883851 systemd[1]: sshd@10-10.0.0.36:22-10.0.0.1:54300.service: Deactivated successfully.
Jul 11 00:14:11.886039 systemd[1]: session-11.scope: Deactivated successfully.
Jul 11 00:14:11.887847 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit.
Jul 11 00:14:11.897030 systemd[1]: Started sshd@11-10.0.0.36:22-10.0.0.1:54316.service - OpenSSH per-connection server daemon (10.0.0.1:54316).
Jul 11 00:14:11.897940 systemd-logind[1439]: Removed session 11.
Jul 11 00:14:11.930872 sshd[5326]: Accepted publickey for core from 10.0.0.1 port 54316 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:14:11.932548 sshd[5326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:14:11.936625 systemd-logind[1439]: New session 12 of user core.
Jul 11 00:14:11.947917 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 11 00:14:12.105216 sshd[5326]: pam_unix(sshd:session): session closed for user core
Jul 11 00:14:12.117670 systemd[1]: sshd@11-10.0.0.36:22-10.0.0.1:54316.service: Deactivated successfully.
Jul 11 00:14:12.122147 systemd[1]: session-12.scope: Deactivated successfully.
Jul 11 00:14:12.127851 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit.
Jul 11 00:14:12.138646 systemd[1]: Started sshd@12-10.0.0.36:22-10.0.0.1:54332.service - OpenSSH per-connection server daemon (10.0.0.1:54332).
Jul 11 00:14:12.140309 systemd-logind[1439]: Removed session 12.
Jul 11 00:14:12.174876 sshd[5338]: Accepted publickey for core from 10.0.0.1 port 54332 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E
Jul 11 00:14:12.176788 sshd[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:14:12.180738 systemd-logind[1439]: New session 13 of user core.
Jul 11 00:14:12.190895 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 11 00:14:12.301444 sshd[5338]: pam_unix(sshd:session): session closed for user core
Jul 11 00:14:12.305406 systemd[1]: sshd@12-10.0.0.36:22-10.0.0.1:54332.service: Deactivated successfully.
Jul 11 00:14:12.307628 systemd[1]: session-13.scope: Deactivated successfully.
Jul 11 00:14:12.308387 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit.
Jul 11 00:14:12.309314 systemd-logind[1439]: Removed session 13.
Jul 11 00:14:12.778354 kubelet[2482]: I0711 00:14:12.778305 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 11 00:14:14.336468 containerd[1453]: time="2025-07-11T00:14:14.336402907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:14:14.337382 containerd[1453]: time="2025-07-11T00:14:14.337342761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Jul 11 00:14:14.338630 containerd[1453]: time="2025-07-11T00:14:14.338597125Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:14:14.340983 containerd[1453]: time="2025-07-11T00:14:14.340951433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:14:14.352668 containerd[1453]: time="2025-07-11T00:14:14.352614961Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.036735339s"
Jul 11 00:14:14.352725 containerd[1453]: time="2025-07-11T00:14:14.352668661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 11 00:14:14.353576 containerd[1453]: time="2025-07-11T00:14:14.353551167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 11 00:14:14.364327 containerd[1453]: time="2025-07-11T00:14:14.364287324Z" level=info msg="CreateContainer within sandbox \"4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 11 00:14:14.385176 containerd[1453]: time="2025-07-11T00:14:14.385122302Z" level=info msg="CreateContainer within sandbox \"4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6b41e9fab3107260daa43009adb4790ab5eff4c83d20651621941db4e3135101\""
Jul 11 00:14:14.385747 containerd[1453]: time="2025-07-11T00:14:14.385707581Z" level=info msg="StartContainer for \"6b41e9fab3107260daa43009adb4790ab5eff4c83d20651621941db4e3135101\""
Jul 11 00:14:14.447921 systemd[1]: Started cri-containerd-6b41e9fab3107260daa43009adb4790ab5eff4c83d20651621941db4e3135101.scope - libcontainer container 6b41e9fab3107260daa43009adb4790ab5eff4c83d20651621941db4e3135101.
Jul 11 00:14:14.493961 containerd[1453]: time="2025-07-11T00:14:14.493743766Z" level=info msg="StartContainer for \"6b41e9fab3107260daa43009adb4790ab5eff4c83d20651621941db4e3135101\" returns successfully" Jul 11 00:14:14.750331 containerd[1453]: time="2025-07-11T00:14:14.750276365Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:14:14.751119 containerd[1453]: time="2025-07-11T00:14:14.751079582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 11 00:14:14.753241 containerd[1453]: time="2025-07-11T00:14:14.753206885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 399.626201ms" Jul 11 00:14:14.753241 containerd[1453]: time="2025-07-11T00:14:14.753238564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:14:14.755294 containerd[1453]: time="2025-07-11T00:14:14.755243276Z" level=info msg="CreateContainer within sandbox \"da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:14:14.788532 containerd[1453]: time="2025-07-11T00:14:14.788480197Z" level=info msg="CreateContainer within sandbox \"da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f5f66e14a9e7df1f9398542880426841042de6b2f327ca5efc67d2ce30b077a2\"" Jul 11 00:14:14.790412 containerd[1453]: time="2025-07-11T00:14:14.788974795Z" level=info msg="StartContainer for \"f5f66e14a9e7df1f9398542880426841042de6b2f327ca5efc67d2ce30b077a2\"" Jul 11 00:14:14.799086 kubelet[2482]: I0711 00:14:14.799013 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67b7cfcdf9-djn6p" podStartSLOduration=25.775496789 podStartE2EDuration="36.798979239s" podCreationTimestamp="2025-07-11 00:13:38 +0000 UTC" firstStartedPulling="2025-07-11 00:14:03.329924297 +0000 UTC m=+44.999298204" lastFinishedPulling="2025-07-11 00:14:14.353406747 +0000 UTC m=+56.022780654" observedRunningTime="2025-07-11 00:14:14.798338286 +0000 UTC m=+56.467712203" watchObservedRunningTime="2025-07-11 00:14:14.798979239 +0000 UTC m=+56.468353146" Jul 11 00:14:14.822973 systemd[1]: Started cri-containerd-f5f66e14a9e7df1f9398542880426841042de6b2f327ca5efc67d2ce30b077a2.scope - libcontainer container f5f66e14a9e7df1f9398542880426841042de6b2f327ca5efc67d2ce30b077a2. 
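The contrast between this pull and the previous one is worth noting: kube-controllers was new to the node (ImageCreate, ~51 MB read, just over 3 s), while the apiserver image's layers were already unpacked, so containerd emitted ImageUpdate, read only 77 bytes, and resolved in under 400 ms. Parsing the two logged durations makes the cache effect concrete:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Durations copied verbatim from the two PullImage results above.
	cold, _ := time.ParseDuration("3.036735339s") // kube-controllers, layers fetched
	warm, _ := time.ParseDuration("399.626201ms") // apiserver, layers already on disk
	fmt.Printf("warm pull is %.1fx faster\n", float64(cold)/float64(warm))
}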
Jul 11 00:14:14.870205 containerd[1453]: time="2025-07-11T00:14:14.870154693Z" level=info msg="StartContainer for \"f5f66e14a9e7df1f9398542880426841042de6b2f327ca5efc67d2ce30b077a2\" returns successfully" Jul 11 00:14:15.787955 kubelet[2482]: I0711 00:14:15.787904 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:14:16.122340 kubelet[2482]: I0711 00:14:16.120657 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b5f5d84fb-jgmkp" podStartSLOduration=30.17761627 podStartE2EDuration="41.120627014s" podCreationTimestamp="2025-07-11 00:13:35 +0000 UTC" firstStartedPulling="2025-07-11 00:14:03.810953742 +0000 UTC m=+45.480327639" lastFinishedPulling="2025-07-11 00:14:14.753964476 +0000 UTC m=+56.423338383" observedRunningTime="2025-07-11 00:14:15.830811438 +0000 UTC m=+57.500185346" watchObservedRunningTime="2025-07-11 00:14:16.120627014 +0000 UTC m=+57.790000921" Jul 11 00:14:17.315416 systemd[1]: Started sshd@13-10.0.0.36:22-10.0.0.1:54336.service - OpenSSH per-connection server daemon (10.0.0.1:54336). Jul 11 00:14:17.594855 sshd[5565]: Accepted publickey for core from 10.0.0.1 port 54336 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:14:17.597431 sshd[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:17.602450 systemd-logind[1439]: New session 14 of user core. Jul 11 00:14:17.611917 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 11 00:14:17.812511 sshd[5565]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:17.817193 systemd[1]: sshd@13-10.0.0.36:22-10.0.0.1:54336.service: Deactivated successfully. Jul 11 00:14:17.819444 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:14:17.820236 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:14:17.821125 systemd-logind[1439]: Removed session 14. Jul 11 00:14:18.462445 containerd[1453]: time="2025-07-11T00:14:18.462402295Z" level=info msg="StopPodSandbox for \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\"" Jul 11 00:14:18.681036 containerd[1453]: 2025-07-11 00:14:18.625 [WARNING][5591] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0", GenerateName:"calico-apiserver-5b5f5d84fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"170759a5-3b9a-4f76-b425-9d4e9c482064", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5f5d84fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d", Pod:"calico-apiserver-5b5f5d84fb-77rqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali715f07f8fff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:18.681036 containerd[1453]: 2025-07-11 00:14:18.627 [INFO][5591] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Jul 11 00:14:18.681036 containerd[1453]: 2025-07-11 00:14:18.627 [INFO][5591] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" iface="eth0" netns="" Jul 11 00:14:18.681036 containerd[1453]: 2025-07-11 00:14:18.627 [INFO][5591] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Jul 11 00:14:18.681036 containerd[1453]: 2025-07-11 00:14:18.628 [INFO][5591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Jul 11 00:14:18.681036 containerd[1453]: 2025-07-11 00:14:18.658 [INFO][5600] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" HandleID="k8s-pod-network.467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:18.681036 containerd[1453]: 2025-07-11 00:14:18.659 [INFO][5600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:18.681036 containerd[1453]: 2025-07-11 00:14:18.659 [INFO][5600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:18.681036 containerd[1453]: 2025-07-11 00:14:18.664 [WARNING][5600] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" HandleID="k8s-pod-network.467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:18.681036 containerd[1453]: 2025-07-11 00:14:18.664 [INFO][5600] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" HandleID="k8s-pod-network.467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:18.681036 containerd[1453]: 2025-07-11 00:14:18.666 [INFO][5600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:18.681036 containerd[1453]: 2025-07-11 00:14:18.672 [INFO][5591] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Jul 11 00:14:18.690169 containerd[1453]: time="2025-07-11T00:14:18.690076776Z" level=info msg="TearDown network for sandbox \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\" successfully" Jul 11 00:14:18.690169 containerd[1453]: time="2025-07-11T00:14:18.690158219Z" level=info msg="StopPodSandbox for \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\" returns successfully" Jul 11 00:14:18.735085 containerd[1453]: time="2025-07-11T00:14:18.734930264Z" level=info msg="RemovePodSandbox for \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\"" Jul 11 00:14:18.737201 containerd[1453]: time="2025-07-11T00:14:18.737153967Z" level=info msg="Forcibly stopping sandbox \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\"" Jul 11 00:14:18.812220 containerd[1453]: 2025-07-11 00:14:18.775 [WARNING][5618] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0", GenerateName:"calico-apiserver-5b5f5d84fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"170759a5-3b9a-4f76-b425-9d4e9c482064", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5f5d84fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03f2a2c377eac9892504f63e039d51a29429ee543e524701f14d98ef0d7d2f5d", Pod:"calico-apiserver-5b5f5d84fb-77rqg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali715f07f8fff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:18.812220 containerd[1453]: 2025-07-11 00:14:18.775 [INFO][5618] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Jul 11 00:14:18.812220 containerd[1453]: 2025-07-11 00:14:18.775 [INFO][5618] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" iface="eth0" netns="" Jul 11 00:14:18.812220 containerd[1453]: 2025-07-11 00:14:18.775 [INFO][5618] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Jul 11 00:14:18.812220 containerd[1453]: 2025-07-11 00:14:18.775 [INFO][5618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Jul 11 00:14:18.812220 containerd[1453]: 2025-07-11 00:14:18.796 [INFO][5627] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" HandleID="k8s-pod-network.467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:18.812220 containerd[1453]: 2025-07-11 00:14:18.797 [INFO][5627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:18.812220 containerd[1453]: 2025-07-11 00:14:18.797 [INFO][5627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:18.812220 containerd[1453]: 2025-07-11 00:14:18.804 [WARNING][5627] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" HandleID="k8s-pod-network.467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:18.812220 containerd[1453]: 2025-07-11 00:14:18.804 [INFO][5627] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" HandleID="k8s-pod-network.467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--77rqg-eth0" Jul 11 00:14:18.812220 containerd[1453]: 2025-07-11 00:14:18.806 [INFO][5627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:18.812220 containerd[1453]: 2025-07-11 00:14:18.808 [INFO][5618] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78" Jul 11 00:14:18.812675 containerd[1453]: time="2025-07-11T00:14:18.812295353Z" level=info msg="TearDown network for sandbox \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\" successfully" Jul 11 00:14:18.845775 containerd[1453]: time="2025-07-11T00:14:18.845688689Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:18.845922 containerd[1453]: time="2025-07-11T00:14:18.845820346Z" level=info msg="RemovePodSandbox \"467ddefc9e3f00e0ea2ab0e10eb023b1a14819180ffe9020b4cd4f22a91bdb78\" returns successfully" Jul 11 00:14:18.856219 containerd[1453]: time="2025-07-11T00:14:18.856177258Z" level=info msg="StopPodSandbox for \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\"" Jul 11 00:14:18.927580 containerd[1453]: 2025-07-11 00:14:18.889 [WARNING][5645] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" WorkloadEndpoint="localhost-k8s-whisker--6bf88d8cb8--nt5bv-eth0" Jul 11 00:14:18.927580 containerd[1453]: 2025-07-11 00:14:18.890 [INFO][5645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Jul 11 00:14:18.927580 containerd[1453]: 2025-07-11 00:14:18.890 [INFO][5645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" iface="eth0" netns="" Jul 11 00:14:18.927580 containerd[1453]: 2025-07-11 00:14:18.890 [INFO][5645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Jul 11 00:14:18.927580 containerd[1453]: 2025-07-11 00:14:18.890 [INFO][5645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Jul 11 00:14:18.927580 containerd[1453]: 2025-07-11 00:14:18.913 [INFO][5654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" HandleID="k8s-pod-network.cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Workload="localhost-k8s-whisker--6bf88d8cb8--nt5bv-eth0" Jul 11 00:14:18.927580 containerd[1453]: 2025-07-11 00:14:18.913 [INFO][5654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:18.927580 containerd[1453]: 2025-07-11 00:14:18.914 [INFO][5654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:18.927580 containerd[1453]: 2025-07-11 00:14:18.919 [WARNING][5654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" HandleID="k8s-pod-network.cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Workload="localhost-k8s-whisker--6bf88d8cb8--nt5bv-eth0" Jul 11 00:14:18.927580 containerd[1453]: 2025-07-11 00:14:18.919 [INFO][5654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" HandleID="k8s-pod-network.cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Workload="localhost-k8s-whisker--6bf88d8cb8--nt5bv-eth0" Jul 11 00:14:18.927580 containerd[1453]: 2025-07-11 00:14:18.921 [INFO][5654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:18.927580 containerd[1453]: 2025-07-11 00:14:18.924 [INFO][5645] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Jul 11 00:14:18.928079 containerd[1453]: time="2025-07-11T00:14:18.927638720Z" level=info msg="TearDown network for sandbox \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\" successfully" Jul 11 00:14:18.928079 containerd[1453]: time="2025-07-11T00:14:18.927679897Z" level=info msg="StopPodSandbox for \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\" returns successfully" Jul 11 00:14:18.928308 containerd[1453]: time="2025-07-11T00:14:18.928274372Z" level=info msg="RemovePodSandbox for \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\"" Jul 11 00:14:18.928308 containerd[1453]: time="2025-07-11T00:14:18.928306273Z" level=info msg="Forcibly stopping sandbox \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\"" Jul 11 00:14:19.000992 containerd[1453]: 2025-07-11 00:14:18.962 [WARNING][5672] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" WorkloadEndpoint="localhost-k8s-whisker--6bf88d8cb8--nt5bv-eth0" Jul 11 00:14:19.000992 containerd[1453]: 2025-07-11 00:14:18.963 [INFO][5672] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Jul 11 00:14:19.000992 containerd[1453]: 2025-07-11 00:14:18.963 [INFO][5672] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" iface="eth0" netns="" Jul 11 00:14:19.000992 containerd[1453]: 2025-07-11 00:14:18.963 [INFO][5672] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Jul 11 00:14:19.000992 containerd[1453]: 2025-07-11 00:14:18.963 [INFO][5672] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Jul 11 00:14:19.000992 containerd[1453]: 2025-07-11 00:14:18.986 [INFO][5680] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" HandleID="k8s-pod-network.cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Workload="localhost-k8s-whisker--6bf88d8cb8--nt5bv-eth0" Jul 11 00:14:19.000992 containerd[1453]: 2025-07-11 00:14:18.986 [INFO][5680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:19.000992 containerd[1453]: 2025-07-11 00:14:18.986 [INFO][5680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:19.000992 containerd[1453]: 2025-07-11 00:14:18.992 [WARNING][5680] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" HandleID="k8s-pod-network.cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Workload="localhost-k8s-whisker--6bf88d8cb8--nt5bv-eth0" Jul 11 00:14:19.000992 containerd[1453]: 2025-07-11 00:14:18.992 [INFO][5680] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" HandleID="k8s-pod-network.cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Workload="localhost-k8s-whisker--6bf88d8cb8--nt5bv-eth0" Jul 11 00:14:19.000992 containerd[1453]: 2025-07-11 00:14:18.994 [INFO][5680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:19.000992 containerd[1453]: 2025-07-11 00:14:18.997 [INFO][5672] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce" Jul 11 00:14:19.000992 containerd[1453]: time="2025-07-11T00:14:19.000948741Z" level=info msg="TearDown network for sandbox \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\" successfully" Jul 11 00:14:19.012790 containerd[1453]: time="2025-07-11T00:14:19.012715778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:19.012933 containerd[1453]: time="2025-07-11T00:14:19.012822929Z" level=info msg="RemovePodSandbox \"cc4c458c494be2af4be449848efd868d9364ddf583b199f0c7c058e664c0c6ce\" returns successfully" Jul 11 00:14:19.013432 containerd[1453]: time="2025-07-11T00:14:19.013393330Z" level=info msg="StopPodSandbox for \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\"" Jul 11 00:14:19.086027 containerd[1453]: 2025-07-11 00:14:19.049 [WARNING][5698] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0", GenerateName:"calico-kube-controllers-67b7cfcdf9-", Namespace:"calico-system", SelfLink:"", UID:"2b528b1a-2869-4a1f-9208-de9d96c1a0aa", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67b7cfcdf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066", Pod:"calico-kube-controllers-67b7cfcdf9-djn6p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9b6101d2fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:19.086027 containerd[1453]: 2025-07-11 00:14:19.049 [INFO][5698] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Jul 11 00:14:19.086027 containerd[1453]: 2025-07-11 00:14:19.049 [INFO][5698] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" iface="eth0" netns="" Jul 11 00:14:19.086027 containerd[1453]: 2025-07-11 00:14:19.049 [INFO][5698] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Jul 11 00:14:19.086027 containerd[1453]: 2025-07-11 00:14:19.049 [INFO][5698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Jul 11 00:14:19.086027 containerd[1453]: 2025-07-11 00:14:19.073 [INFO][5707] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" HandleID="k8s-pod-network.65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Workload="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:19.086027 containerd[1453]: 2025-07-11 00:14:19.073 [INFO][5707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:19.086027 containerd[1453]: 2025-07-11 00:14:19.073 [INFO][5707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:19.086027 containerd[1453]: 2025-07-11 00:14:19.078 [WARNING][5707] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" HandleID="k8s-pod-network.65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Workload="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:19.086027 containerd[1453]: 2025-07-11 00:14:19.078 [INFO][5707] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" HandleID="k8s-pod-network.65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Workload="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:19.086027 containerd[1453]: 2025-07-11 00:14:19.080 [INFO][5707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:19.086027 containerd[1453]: 2025-07-11 00:14:19.082 [INFO][5698] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Jul 11 00:14:19.088831 containerd[1453]: time="2025-07-11T00:14:19.086079395Z" level=info msg="TearDown network for sandbox \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\" successfully" Jul 11 00:14:19.088831 containerd[1453]: time="2025-07-11T00:14:19.086107367Z" level=info msg="StopPodSandbox for \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\" returns successfully" Jul 11 00:14:19.088831 containerd[1453]: time="2025-07-11T00:14:19.086658943Z" level=info msg="RemovePodSandbox for \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\"" Jul 11 00:14:19.088831 containerd[1453]: time="2025-07-11T00:14:19.086685933Z" level=info msg="Forcibly stopping sandbox \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\"" Jul 11 00:14:19.152275 containerd[1453]: 2025-07-11 00:14:19.118 [WARNING][5724] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0", GenerateName:"calico-kube-controllers-67b7cfcdf9-", Namespace:"calico-system", SelfLink:"", UID:"2b528b1a-2869-4a1f-9208-de9d96c1a0aa", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67b7cfcdf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a7848a5c363b91dda18f14d1e7900dc623df2022e01692c1b6a929c66858066", Pod:"calico-kube-controllers-67b7cfcdf9-djn6p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9b6101d2fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:19.152275 containerd[1453]: 2025-07-11 00:14:19.118 [INFO][5724] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Jul 11 00:14:19.152275 containerd[1453]: 2025-07-11 00:14:19.118 [INFO][5724] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" iface="eth0" netns="" Jul 11 00:14:19.152275 containerd[1453]: 2025-07-11 00:14:19.118 [INFO][5724] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Jul 11 00:14:19.152275 containerd[1453]: 2025-07-11 00:14:19.118 [INFO][5724] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Jul 11 00:14:19.152275 containerd[1453]: 2025-07-11 00:14:19.140 [INFO][5733] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" HandleID="k8s-pod-network.65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Workload="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:19.152275 containerd[1453]: 2025-07-11 00:14:19.140 [INFO][5733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:19.152275 containerd[1453]: 2025-07-11 00:14:19.140 [INFO][5733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:19.152275 containerd[1453]: 2025-07-11 00:14:19.145 [WARNING][5733] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" HandleID="k8s-pod-network.65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Workload="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:19.152275 containerd[1453]: 2025-07-11 00:14:19.145 [INFO][5733] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" HandleID="k8s-pod-network.65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Workload="localhost-k8s-calico--kube--controllers--67b7cfcdf9--djn6p-eth0" Jul 11 00:14:19.152275 containerd[1453]: 2025-07-11 00:14:19.146 [INFO][5733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:19.152275 containerd[1453]: 2025-07-11 00:14:19.149 [INFO][5724] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c" Jul 11 00:14:19.153126 containerd[1453]: time="2025-07-11T00:14:19.152318194Z" level=info msg="TearDown network for sandbox \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\" successfully" Jul 11 00:14:19.160922 containerd[1453]: time="2025-07-11T00:14:19.160867664Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:19.161079 containerd[1453]: time="2025-07-11T00:14:19.160945070Z" level=info msg="RemovePodSandbox \"65b2f54bec370433fe212270207bdd9b509e77bafb266e0c95f040aba3b30c9c\" returns successfully" Jul 11 00:14:19.161568 containerd[1453]: time="2025-07-11T00:14:19.161515961Z" level=info msg="StopPodSandbox for \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\"" Jul 11 00:14:19.226107 containerd[1453]: 2025-07-11 00:14:19.193 [WARNING][5752] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cx6rf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a", Pod:"csi-node-driver-cx6rf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic5279c42b24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:19.226107 containerd[1453]: 2025-07-11 00:14:19.193 [INFO][5752] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Jul 11 00:14:19.226107 containerd[1453]: 2025-07-11 00:14:19.193 [INFO][5752] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" iface="eth0" netns="" Jul 11 00:14:19.226107 containerd[1453]: 2025-07-11 00:14:19.193 [INFO][5752] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Jul 11 00:14:19.226107 containerd[1453]: 2025-07-11 00:14:19.194 [INFO][5752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Jul 11 00:14:19.226107 containerd[1453]: 2025-07-11 00:14:19.213 [INFO][5762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" HandleID="k8s-pod-network.14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Workload="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:19.226107 containerd[1453]: 2025-07-11 00:14:19.213 [INFO][5762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:19.226107 containerd[1453]: 2025-07-11 00:14:19.213 [INFO][5762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:19.226107 containerd[1453]: 2025-07-11 00:14:19.219 [WARNING][5762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" HandleID="k8s-pod-network.14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Workload="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:19.226107 containerd[1453]: 2025-07-11 00:14:19.219 [INFO][5762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" HandleID="k8s-pod-network.14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Workload="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:19.226107 containerd[1453]: 2025-07-11 00:14:19.220 [INFO][5762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:19.226107 containerd[1453]: 2025-07-11 00:14:19.223 [INFO][5752] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Jul 11 00:14:19.226961 containerd[1453]: time="2025-07-11T00:14:19.226144058Z" level=info msg="TearDown network for sandbox \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\" successfully" Jul 11 00:14:19.226961 containerd[1453]: time="2025-07-11T00:14:19.226170858Z" level=info msg="StopPodSandbox for \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\" returns successfully" Jul 11 00:14:19.226961 containerd[1453]: time="2025-07-11T00:14:19.226809387Z" level=info msg="RemovePodSandbox for \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\"" Jul 11 00:14:19.226961 containerd[1453]: time="2025-07-11T00:14:19.226852958Z" level=info msg="Forcibly stopping sandbox \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\"" Jul 11 00:14:19.295181 containerd[1453]: 2025-07-11 00:14:19.260 [WARNING][5780] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cx6rf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1ca1aa6c-2238-4c4c-869b-a6dd1b43a48c", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98eee2b1ee60faed1392912b3c056f2e6e90b562f1e452e711a35b37dfdc3a1a", Pod:"csi-node-driver-cx6rf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic5279c42b24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:19.295181 containerd[1453]: 2025-07-11 00:14:19.260 [INFO][5780] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Jul 11 00:14:19.295181 containerd[1453]: 2025-07-11 00:14:19.260 [INFO][5780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" iface="eth0" netns="" Jul 11 00:14:19.295181 containerd[1453]: 2025-07-11 00:14:19.260 [INFO][5780] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Jul 11 00:14:19.295181 containerd[1453]: 2025-07-11 00:14:19.260 [INFO][5780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Jul 11 00:14:19.295181 containerd[1453]: 2025-07-11 00:14:19.282 [INFO][5788] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" HandleID="k8s-pod-network.14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Workload="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:19.295181 containerd[1453]: 2025-07-11 00:14:19.282 [INFO][5788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:19.295181 containerd[1453]: 2025-07-11 00:14:19.282 [INFO][5788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:19.295181 containerd[1453]: 2025-07-11 00:14:19.287 [WARNING][5788] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" HandleID="k8s-pod-network.14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Workload="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:19.295181 containerd[1453]: 2025-07-11 00:14:19.287 [INFO][5788] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" HandleID="k8s-pod-network.14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Workload="localhost-k8s-csi--node--driver--cx6rf-eth0" Jul 11 00:14:19.295181 containerd[1453]: 2025-07-11 00:14:19.289 [INFO][5788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:19.295181 containerd[1453]: 2025-07-11 00:14:19.292 [INFO][5780] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58" Jul 11 00:14:19.295181 containerd[1453]: time="2025-07-11T00:14:19.295155058Z" level=info msg="TearDown network for sandbox \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\" successfully" Jul 11 00:14:19.302051 containerd[1453]: time="2025-07-11T00:14:19.302006012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:19.302164 containerd[1453]: time="2025-07-11T00:14:19.302060664Z" level=info msg="RemovePodSandbox \"14b43602b353ecf2ccab771f47a3ceb59147bf5c04be37a654a76d2fd2edeb58\" returns successfully" Jul 11 00:14:19.302837 containerd[1453]: time="2025-07-11T00:14:19.302799050Z" level=info msg="StopPodSandbox for \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\"" Jul 11 00:14:19.386420 containerd[1453]: 2025-07-11 00:14:19.350 [WARNING][5806] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--26c2d-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f72dd124-c970-4b6b-a074-81167ad3af44", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c", Pod:"coredns-668d6bf9bc-26c2d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25a2a249da7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:19.386420 containerd[1453]: 2025-07-11 00:14:19.350 [INFO][5806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Jul 11 00:14:19.386420 containerd[1453]: 2025-07-11 00:14:19.350 [INFO][5806] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" iface="eth0" netns="" Jul 11 00:14:19.386420 containerd[1453]: 2025-07-11 00:14:19.350 [INFO][5806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Jul 11 00:14:19.386420 containerd[1453]: 2025-07-11 00:14:19.350 [INFO][5806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Jul 11 00:14:19.386420 containerd[1453]: 2025-07-11 00:14:19.372 [INFO][5815] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" HandleID="k8s-pod-network.37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Workload="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:19.386420 containerd[1453]: 2025-07-11 00:14:19.372 [INFO][5815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:19.386420 containerd[1453]: 2025-07-11 00:14:19.373 [INFO][5815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:14:19.386420 containerd[1453]: 2025-07-11 00:14:19.378 [WARNING][5815] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" HandleID="k8s-pod-network.37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Workload="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:19.386420 containerd[1453]: 2025-07-11 00:14:19.378 [INFO][5815] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" HandleID="k8s-pod-network.37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Workload="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:19.386420 containerd[1453]: 2025-07-11 00:14:19.380 [INFO][5815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:19.386420 containerd[1453]: 2025-07-11 00:14:19.383 [INFO][5806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Jul 11 00:14:19.387510 containerd[1453]: time="2025-07-11T00:14:19.386463414Z" level=info msg="TearDown network for sandbox \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\" successfully" Jul 11 00:14:19.387510 containerd[1453]: time="2025-07-11T00:14:19.386494181Z" level=info msg="StopPodSandbox for \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\" returns successfully" Jul 11 00:14:19.387510 containerd[1453]: time="2025-07-11T00:14:19.387036168Z" level=info msg="RemovePodSandbox for \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\"" Jul 11 00:14:19.387510 containerd[1453]: time="2025-07-11T00:14:19.387072406Z" level=info msg="Forcibly stopping sandbox \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\"" Jul 11 00:14:19.456153 containerd[1453]: 2025-07-11 00:14:19.422 [WARNING][5832] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--26c2d-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f72dd124-c970-4b6b-a074-81167ad3af44", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63b1dc3052e5bca0afad005ab8db879d58ccba26e34bf97e1d04c891a47de05c", Pod:"coredns-668d6bf9bc-26c2d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25a2a249da7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:19.456153 containerd[1453]: 2025-07-11 00:14:19.423 [INFO][5832] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Jul 11 00:14:19.456153 containerd[1453]: 2025-07-11 00:14:19.423 [INFO][5832] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" iface="eth0" netns="" Jul 11 00:14:19.456153 containerd[1453]: 2025-07-11 00:14:19.423 [INFO][5832] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Jul 11 00:14:19.456153 containerd[1453]: 2025-07-11 00:14:19.423 [INFO][5832] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Jul 11 00:14:19.456153 containerd[1453]: 2025-07-11 00:14:19.444 [INFO][5841] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" HandleID="k8s-pod-network.37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Workload="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:19.456153 containerd[1453]: 2025-07-11 00:14:19.444 [INFO][5841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:19.456153 containerd[1453]: 2025-07-11 00:14:19.444 [INFO][5841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:14:19.456153 containerd[1453]: 2025-07-11 00:14:19.449 [WARNING][5841] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" HandleID="k8s-pod-network.37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Workload="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:19.456153 containerd[1453]: 2025-07-11 00:14:19.449 [INFO][5841] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" HandleID="k8s-pod-network.37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Workload="localhost-k8s-coredns--668d6bf9bc--26c2d-eth0" Jul 11 00:14:19.456153 containerd[1453]: 2025-07-11 00:14:19.450 [INFO][5841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:19.456153 containerd[1453]: 2025-07-11 00:14:19.453 [INFO][5832] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16" Jul 11 00:14:19.456620 containerd[1453]: time="2025-07-11T00:14:19.456196779Z" level=info msg="TearDown network for sandbox \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\" successfully" Jul 11 00:14:19.461227 containerd[1453]: time="2025-07-11T00:14:19.461195498Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:19.461291 containerd[1453]: time="2025-07-11T00:14:19.461256703Z" level=info msg="RemovePodSandbox \"37c040193b49c714dc88d5a71642d578db822947a0cfde5d1ed57fa3b07f0f16\" returns successfully" Jul 11 00:14:19.461831 containerd[1453]: time="2025-07-11T00:14:19.461796835Z" level=info msg="StopPodSandbox for \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\"" Jul 11 00:14:19.530836 containerd[1453]: 2025-07-11 00:14:19.495 [WARNING][5859] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--csf5s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dc370272-6e6b-40f4-bdc3-9aff809787e2", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573", Pod:"coredns-668d6bf9bc-csf5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a67c0d1d65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:19.530836 containerd[1453]: 2025-07-11 00:14:19.495 [INFO][5859] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Jul 11 00:14:19.530836 containerd[1453]: 2025-07-11 00:14:19.495 [INFO][5859] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" iface="eth0" netns="" Jul 11 00:14:19.530836 containerd[1453]: 2025-07-11 00:14:19.495 [INFO][5859] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Jul 11 00:14:19.530836 containerd[1453]: 2025-07-11 00:14:19.495 [INFO][5859] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Jul 11 00:14:19.530836 containerd[1453]: 2025-07-11 00:14:19.516 [INFO][5868] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" HandleID="k8s-pod-network.993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Workload="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0" Jul 11 00:14:19.530836 containerd[1453]: 2025-07-11 00:14:19.517 [INFO][5868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:19.530836 containerd[1453]: 2025-07-11 00:14:19.517 [INFO][5868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:14:19.530836 containerd[1453]: 2025-07-11 00:14:19.522 [WARNING][5868] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" HandleID="k8s-pod-network.993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Workload="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0" Jul 11 00:14:19.530836 containerd[1453]: 2025-07-11 00:14:19.522 [INFO][5868] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" HandleID="k8s-pod-network.993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Workload="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0" Jul 11 00:14:19.530836 containerd[1453]: 2025-07-11 00:14:19.523 [INFO][5868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:19.530836 containerd[1453]: 2025-07-11 00:14:19.527 [INFO][5859] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Jul 11 00:14:19.531605 containerd[1453]: time="2025-07-11T00:14:19.530891283Z" level=info msg="TearDown network for sandbox \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\" successfully" Jul 11 00:14:19.531605 containerd[1453]: time="2025-07-11T00:14:19.530918434Z" level=info msg="StopPodSandbox for \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\" returns successfully" Jul 11 00:14:19.531605 containerd[1453]: time="2025-07-11T00:14:19.531354041Z" level=info msg="RemovePodSandbox for \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\"" Jul 11 00:14:19.531605 containerd[1453]: time="2025-07-11T00:14:19.531378266Z" level=info msg="Forcibly stopping sandbox \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\"" Jul 11 00:14:19.597979 containerd[1453]: 2025-07-11 00:14:19.561 [WARNING][5886] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--csf5s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dc370272-6e6b-40f4-bdc3-9aff809787e2", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e19f29b7ee86637947e92012fa7a06e543812dcc07f87eba7be2d12bec63573", Pod:"coredns-668d6bf9bc-csf5s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a67c0d1d65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:19.597979 containerd[1453]: 2025-07-11 00:14:19.562 [INFO][5886] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Jul 11 00:14:19.597979 containerd[1453]: 2025-07-11 00:14:19.562 [INFO][5886] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" iface="eth0" netns="" Jul 11 00:14:19.597979 containerd[1453]: 2025-07-11 00:14:19.562 [INFO][5886] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Jul 11 00:14:19.597979 containerd[1453]: 2025-07-11 00:14:19.562 [INFO][5886] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Jul 11 00:14:19.597979 containerd[1453]: 2025-07-11 00:14:19.586 [INFO][5895] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" HandleID="k8s-pod-network.993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Workload="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0" Jul 11 00:14:19.597979 containerd[1453]: 2025-07-11 00:14:19.586 [INFO][5895] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:19.597979 containerd[1453]: 2025-07-11 00:14:19.586 [INFO][5895] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:14:19.597979 containerd[1453]: 2025-07-11 00:14:19.591 [WARNING][5895] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" HandleID="k8s-pod-network.993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Workload="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0" Jul 11 00:14:19.597979 containerd[1453]: 2025-07-11 00:14:19.591 [INFO][5895] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" HandleID="k8s-pod-network.993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Workload="localhost-k8s-coredns--668d6bf9bc--csf5s-eth0" Jul 11 00:14:19.597979 containerd[1453]: 2025-07-11 00:14:19.592 [INFO][5895] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:19.597979 containerd[1453]: 2025-07-11 00:14:19.595 [INFO][5886] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243" Jul 11 00:14:19.597979 containerd[1453]: time="2025-07-11T00:14:19.597926656Z" level=info msg="TearDown network for sandbox \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\" successfully" Jul 11 00:14:19.603235 containerd[1453]: time="2025-07-11T00:14:19.603199539Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:19.603294 containerd[1453]: time="2025-07-11T00:14:19.603257018Z" level=info msg="RemovePodSandbox \"993c32a1f2ee5e03005feed4321d5e9277b98f5f147b5dc6499e8f0c38371243\" returns successfully" Jul 11 00:14:19.603837 containerd[1453]: time="2025-07-11T00:14:19.603796620Z" level=info msg="StopPodSandbox for \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\"" Jul 11 00:14:19.669950 containerd[1453]: 2025-07-11 00:14:19.636 [WARNING][5912] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0", GenerateName:"calico-apiserver-5b5f5d84fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b110a319-ec46-48e9-afde-f14e51fe5798", ResourceVersion:"1222", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5f5d84fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2", Pod:"calico-apiserver-5b5f5d84fb-jgmkp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea1627560b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:19.669950 containerd[1453]: 2025-07-11 00:14:19.636 [INFO][5912] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Jul 11 00:14:19.669950 containerd[1453]: 2025-07-11 00:14:19.636 [INFO][5912] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" iface="eth0" netns="" Jul 11 00:14:19.669950 containerd[1453]: 2025-07-11 00:14:19.636 [INFO][5912] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Jul 11 00:14:19.669950 containerd[1453]: 2025-07-11 00:14:19.636 [INFO][5912] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Jul 11 00:14:19.669950 containerd[1453]: 2025-07-11 00:14:19.658 [INFO][5921] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" HandleID="k8s-pod-network.9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0" Jul 11 00:14:19.669950 containerd[1453]: 2025-07-11 00:14:19.658 [INFO][5921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:19.669950 containerd[1453]: 2025-07-11 00:14:19.658 [INFO][5921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:19.669950 containerd[1453]: 2025-07-11 00:14:19.663 [WARNING][5921] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" HandleID="k8s-pod-network.9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0" Jul 11 00:14:19.669950 containerd[1453]: 2025-07-11 00:14:19.663 [INFO][5921] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" HandleID="k8s-pod-network.9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0" Jul 11 00:14:19.669950 containerd[1453]: 2025-07-11 00:14:19.664 [INFO][5921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:19.669950 containerd[1453]: 2025-07-11 00:14:19.667 [INFO][5912] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Jul 11 00:14:19.670441 containerd[1453]: time="2025-07-11T00:14:19.670000624Z" level=info msg="TearDown network for sandbox \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\" successfully" Jul 11 00:14:19.670441 containerd[1453]: time="2025-07-11T00:14:19.670026983Z" level=info msg="StopPodSandbox for \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\" returns successfully" Jul 11 00:14:19.670586 containerd[1453]: time="2025-07-11T00:14:19.670557007Z" level=info msg="RemovePodSandbox for \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\"" Jul 11 00:14:19.670626 containerd[1453]: time="2025-07-11T00:14:19.670596371Z" level=info msg="Forcibly stopping sandbox \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\"" Jul 11 00:14:19.738266 containerd[1453]: 2025-07-11 00:14:19.706 [WARNING][5938] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0", GenerateName:"calico-apiserver-5b5f5d84fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b110a319-ec46-48e9-afde-f14e51fe5798", ResourceVersion:"1222", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5f5d84fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da9e259fc13e048d858ce144f2ec7e74ff01a7658de44f393864990832959da2", Pod:"calico-apiserver-5b5f5d84fb-jgmkp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea1627560b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:19.738266 containerd[1453]: 2025-07-11 00:14:19.706 [INFO][5938] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Jul 11 00:14:19.738266 containerd[1453]: 2025-07-11 00:14:19.706 [INFO][5938] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" iface="eth0" netns="" Jul 11 00:14:19.738266 containerd[1453]: 2025-07-11 00:14:19.706 [INFO][5938] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Jul 11 00:14:19.738266 containerd[1453]: 2025-07-11 00:14:19.706 [INFO][5938] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Jul 11 00:14:19.738266 containerd[1453]: 2025-07-11 00:14:19.726 [INFO][5947] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" HandleID="k8s-pod-network.9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0" Jul 11 00:14:19.738266 containerd[1453]: 2025-07-11 00:14:19.726 [INFO][5947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:19.738266 containerd[1453]: 2025-07-11 00:14:19.726 [INFO][5947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:19.738266 containerd[1453]: 2025-07-11 00:14:19.731 [WARNING][5947] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" HandleID="k8s-pod-network.9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0" Jul 11 00:14:19.738266 containerd[1453]: 2025-07-11 00:14:19.731 [INFO][5947] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" HandleID="k8s-pod-network.9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Workload="localhost-k8s-calico--apiserver--5b5f5d84fb--jgmkp-eth0" Jul 11 00:14:19.738266 containerd[1453]: 2025-07-11 00:14:19.732 [INFO][5947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:19.738266 containerd[1453]: 2025-07-11 00:14:19.735 [INFO][5938] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45" Jul 11 00:14:19.738883 containerd[1453]: time="2025-07-11T00:14:19.738308855Z" level=info msg="TearDown network for sandbox \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\" successfully" Jul 11 00:14:19.744643 containerd[1453]: time="2025-07-11T00:14:19.744611230Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:19.744711 containerd[1453]: time="2025-07-11T00:14:19.744667125Z" level=info msg="RemovePodSandbox \"9a07a5ba5659023ff75c32d80673456ee8646162d36b1b2e0278067f95904d45\" returns successfully" Jul 11 00:14:19.745256 containerd[1453]: time="2025-07-11T00:14:19.745211175Z" level=info msg="StopPodSandbox for \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\"" Jul 11 00:14:19.811467 containerd[1453]: 2025-07-11 00:14:19.777 [WARNING][5964] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"09f6ed65-1087-4a6c-9f3f-47a108556bd1", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab", Pod:"goldmane-768f4c5c69-kqs4z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7de7e3e1d9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:19.811467 containerd[1453]: 2025-07-11 00:14:19.777 [INFO][5964] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Jul 11 00:14:19.811467 containerd[1453]: 2025-07-11 00:14:19.777 [INFO][5964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" iface="eth0" netns="" Jul 11 00:14:19.811467 containerd[1453]: 2025-07-11 00:14:19.777 [INFO][5964] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Jul 11 00:14:19.811467 containerd[1453]: 2025-07-11 00:14:19.777 [INFO][5964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Jul 11 00:14:19.811467 containerd[1453]: 2025-07-11 00:14:19.797 [INFO][5972] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" HandleID="k8s-pod-network.54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Workload="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:19.811467 containerd[1453]: 2025-07-11 00:14:19.797 [INFO][5972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:19.811467 containerd[1453]: 2025-07-11 00:14:19.797 [INFO][5972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:19.811467 containerd[1453]: 2025-07-11 00:14:19.803 [WARNING][5972] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" HandleID="k8s-pod-network.54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Workload="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:19.811467 containerd[1453]: 2025-07-11 00:14:19.803 [INFO][5972] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" HandleID="k8s-pod-network.54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Workload="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:19.811467 containerd[1453]: 2025-07-11 00:14:19.805 [INFO][5972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:19.811467 containerd[1453]: 2025-07-11 00:14:19.808 [INFO][5964] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Jul 11 00:14:19.811951 containerd[1453]: time="2025-07-11T00:14:19.811492173Z" level=info msg="TearDown network for sandbox \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\" successfully" Jul 11 00:14:19.811951 containerd[1453]: time="2025-07-11T00:14:19.811513273Z" level=info msg="StopPodSandbox for \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\" returns successfully" Jul 11 00:14:19.811951 containerd[1453]: time="2025-07-11T00:14:19.811788499Z" level=info msg="RemovePodSandbox for \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\"" Jul 11 00:14:19.811951 containerd[1453]: time="2025-07-11T00:14:19.811820058Z" level=info msg="Forcibly stopping sandbox \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\"" Jul 11 00:14:19.878281 containerd[1453]: 2025-07-11 00:14:19.844 [WARNING][5990] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"09f6ed65-1087-4a6c-9f3f-47a108556bd1", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9af6cf80095206aa785494ea1c02e69978499076d85f4ac4f26b7384966be6ab", Pod:"goldmane-768f4c5c69-kqs4z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7de7e3e1d9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:14:19.878281 containerd[1453]: 2025-07-11 00:14:19.845 [INFO][5990] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Jul 11 00:14:19.878281 containerd[1453]: 2025-07-11 00:14:19.845 [INFO][5990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" iface="eth0" netns="" Jul 11 00:14:19.878281 containerd[1453]: 2025-07-11 00:14:19.845 [INFO][5990] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Jul 11 00:14:19.878281 containerd[1453]: 2025-07-11 00:14:19.845 [INFO][5990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Jul 11 00:14:19.878281 containerd[1453]: 2025-07-11 00:14:19.865 [INFO][5999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" HandleID="k8s-pod-network.54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Workload="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:19.878281 containerd[1453]: 2025-07-11 00:14:19.865 [INFO][5999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:14:19.878281 containerd[1453]: 2025-07-11 00:14:19.865 [INFO][5999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:14:19.878281 containerd[1453]: 2025-07-11 00:14:19.871 [WARNING][5999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" HandleID="k8s-pod-network.54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Workload="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:19.878281 containerd[1453]: 2025-07-11 00:14:19.871 [INFO][5999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" HandleID="k8s-pod-network.54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Workload="localhost-k8s-goldmane--768f4c5c69--kqs4z-eth0" Jul 11 00:14:19.878281 containerd[1453]: 2025-07-11 00:14:19.872 [INFO][5999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:14:19.878281 containerd[1453]: 2025-07-11 00:14:19.875 [INFO][5990] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35" Jul 11 00:14:19.878281 containerd[1453]: time="2025-07-11T00:14:19.878242061Z" level=info msg="TearDown network for sandbox \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\" successfully" Jul 11 00:14:19.882389 containerd[1453]: time="2025-07-11T00:14:19.882363685Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:14:19.882449 containerd[1453]: time="2025-07-11T00:14:19.882432654Z" level=info msg="RemovePodSandbox \"54ba9746ecc161eb08adfb9ed28075fdf9ae7ab357af92364648e2657ffb7d35\" returns successfully" Jul 11 00:14:22.826517 systemd[1]: Started sshd@14-10.0.0.36:22-10.0.0.1:49464.service - OpenSSH per-connection server daemon (10.0.0.1:49464). Jul 11 00:14:22.884934 sshd[6007]: Accepted publickey for core from 10.0.0.1 port 49464 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:14:22.886594 sshd[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:22.890697 systemd-logind[1439]: New session 15 of user core. Jul 11 00:14:22.898899 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 11 00:14:23.052741 sshd[6007]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:23.057139 systemd[1]: sshd@14-10.0.0.36:22-10.0.0.1:49464.service: Deactivated successfully. Jul 11 00:14:23.059168 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:14:23.059753 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit. Jul 11 00:14:23.060696 systemd-logind[1439]: Removed session 15. Jul 11 00:14:28.063972 systemd[1]: Started sshd@15-10.0.0.36:22-10.0.0.1:49466.service - OpenSSH per-connection server daemon (10.0.0.1:49466). Jul 11 00:14:28.102184 sshd[6026]: Accepted publickey for core from 10.0.0.1 port 49466 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:14:28.103803 sshd[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:28.107532 systemd-logind[1439]: New session 16 of user core. Jul 11 00:14:28.114921 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 00:14:28.237591 sshd[6026]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:28.242843 systemd[1]: sshd@15-10.0.0.36:22-10.0.0.1:49466.service: Deactivated successfully. Jul 11 00:14:28.245130 systemd[1]: session-16.scope: Deactivated successfully. 
Jul 11 00:14:28.245715 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:14:28.246625 systemd-logind[1439]: Removed session 16. Jul 11 00:14:33.249129 systemd[1]: Started sshd@16-10.0.0.36:22-10.0.0.1:48936.service - OpenSSH per-connection server daemon (10.0.0.1:48936). Jul 11 00:14:33.291798 sshd[6042]: Accepted publickey for core from 10.0.0.1 port 48936 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:14:33.293535 sshd[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:33.297744 systemd-logind[1439]: New session 17 of user core. Jul 11 00:14:33.312882 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 00:14:33.420083 systemd[1]: run-containerd-runc-k8s.io-b5173d7cdeb88bd739941934bad1f86a781957cf7ef2500802daed3a65f3d56e-runc.t4AAn4.mount: Deactivated successfully. Jul 11 00:14:33.458624 kubelet[2482]: E0711 00:14:33.457718 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:14:33.459002 sshd[6042]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:33.470026 systemd[1]: sshd@16-10.0.0.36:22-10.0.0.1:48936.service: Deactivated successfully. Jul 11 00:14:33.472141 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:14:33.474152 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:14:33.480071 systemd[1]: Started sshd@17-10.0.0.36:22-10.0.0.1:48950.service - OpenSSH per-connection server daemon (10.0.0.1:48950). Jul 11 00:14:33.482324 systemd-logind[1439]: Removed session 17. Jul 11 00:14:33.520196 sshd[6079]: Accepted publickey for core from 10.0.0.1 port 48950 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:14:33.522052 sshd[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:33.526618 systemd-logind[1439]: New session 18 of user core. Jul 11 00:14:33.539897 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 11 00:14:34.242375 sshd[6079]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:34.256151 systemd[1]: sshd@17-10.0.0.36:22-10.0.0.1:48950.service: Deactivated successfully. Jul 11 00:14:34.258150 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 00:14:34.259543 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit. Jul 11 00:14:34.266003 systemd[1]: Started sshd@18-10.0.0.36:22-10.0.0.1:48964.service - OpenSSH per-connection server daemon (10.0.0.1:48964). Jul 11 00:14:34.267746 systemd-logind[1439]: Removed session 18. Jul 11 00:14:34.302420 sshd[6093]: Accepted publickey for core from 10.0.0.1 port 48964 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:14:34.304084 sshd[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:34.308378 systemd-logind[1439]: New session 19 of user core. Jul 11 00:14:34.320933 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 11 00:14:35.221359 sshd[6093]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:35.230654 systemd[1]: sshd@18-10.0.0.36:22-10.0.0.1:48964.service: Deactivated successfully. Jul 11 00:14:35.234740 systemd[1]: session-19.scope: Deactivated successfully. Jul 11 00:14:35.236576 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit. 
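The recurring kubelet dns.go:153 events come from kubelet clamping a pod's resolv.conf to the classic libc limit of three nameservers: the node's resolver configuration evidently lists more than three, so kubelet keeps the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) and emits the "Nameserver limits exceeded" warning on each sync. A minimal sketch of that clamping logic follows, assuming the three-entry limit; it is not kubelet's actual code.

```go
package main

import (
	"fmt"
	"log"
)

// maxNameservers is the classic libc resolver limit that the kubelet
// warning above refers to.
const maxNameservers = 3

// clampNameservers keeps the first maxNameservers entries and reports
// whether anything was dropped, mirroring "some nameservers have been
// omitted" in the log.
func clampNameservers(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// Hypothetical node resolv.conf with one nameserver too many.
	node := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	applied, truncated := clampNameservers(node)
	if truncated {
		log.Printf("Nameserver limits exceeded, the applied nameserver line is: %v", applied)
	}
	fmt.Println("applied:", applied)
}
```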
Jul 11 00:14:35.248093 systemd[1]: Started sshd@19-10.0.0.36:22-10.0.0.1:48978.service - OpenSSH per-connection server daemon (10.0.0.1:48978). Jul 11 00:14:35.250706 systemd-logind[1439]: Removed session 19. Jul 11 00:14:35.287386 sshd[6119]: Accepted publickey for core from 10.0.0.1 port 48978 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:14:35.289092 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:35.293593 systemd-logind[1439]: New session 20 of user core. Jul 11 00:14:35.303941 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 11 00:14:35.742624 sshd[6119]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:35.754456 systemd[1]: sshd@19-10.0.0.36:22-10.0.0.1:48978.service: Deactivated successfully. Jul 11 00:14:35.757233 systemd[1]: session-20.scope: Deactivated successfully. Jul 11 00:14:35.761744 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit. Jul 11 00:14:35.768374 systemd[1]: Started sshd@20-10.0.0.36:22-10.0.0.1:48986.service - OpenSSH per-connection server daemon (10.0.0.1:48986). Jul 11 00:14:35.770039 systemd-logind[1439]: Removed session 20. Jul 11 00:14:35.817371 sshd[6133]: Accepted publickey for core from 10.0.0.1 port 48986 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:14:35.819962 sshd[6133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:35.825500 systemd-logind[1439]: New session 21 of user core. Jul 11 00:14:35.831926 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 11 00:14:35.952796 sshd[6133]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:35.957515 systemd[1]: sshd@20-10.0.0.36:22-10.0.0.1:48986.service: Deactivated successfully. Jul 11 00:14:35.959855 systemd[1]: session-21.scope: Deactivated successfully. Jul 11 00:14:35.960954 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit. Jul 11 00:14:35.962179 systemd-logind[1439]: Removed session 21. Jul 11 00:14:38.892405 kubelet[2482]: I0711 00:14:38.892292 2482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:14:40.964741 systemd[1]: Started sshd@21-10.0.0.36:22-10.0.0.1:55138.service - OpenSSH per-connection server daemon (10.0.0.1:55138). Jul 11 00:14:41.009625 sshd[6155]: Accepted publickey for core from 10.0.0.1 port 55138 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:14:41.011452 sshd[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:41.015724 systemd-logind[1439]: New session 22 of user core. Jul 11 00:14:41.028921 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 11 00:14:41.214543 sshd[6155]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:41.218602 systemd[1]: sshd@21-10.0.0.36:22-10.0.0.1:55138.service: Deactivated successfully. Jul 11 00:14:41.220805 systemd[1]: session-22.scope: Deactivated successfully. Jul 11 00:14:41.221453 systemd-logind[1439]: Session 22 logged out. Waiting for processes to exit. Jul 11 00:14:41.222354 systemd-logind[1439]: Removed session 22. 
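Each session block above pairs an sshd "Accepted publickey" record with a systemd-logind session and its session-N.scope unit, opened on connect and deactivated on disconnect. For reference, a client that produces one such record can be as small as the following Go sketch using golang.org/x/crypto/ssh; the address and user come from the records above, while the key path is a placeholder, so this is an illustrative sketch rather than a tested configuration.

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path; adjust for a real system.
	keyPEM, err := os.ReadFile("/home/core/.ssh/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User: "core",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Acceptable for a lab VM; verify host keys properly elsewhere.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	// One Dial corresponds to one "Accepted publickey for core ..." record
	// and one session-N.scope on the server.
	client, err := ssh.Dial("tcp", "10.0.0.36:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	if err := sess.Run("true"); err != nil {
		log.Fatal(err)
	}
}
```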
Jul 11 00:14:42.457355 kubelet[2482]: E0711 00:14:42.457312 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:14:46.233669 systemd[1]: Started sshd@22-10.0.0.36:22-10.0.0.1:55154.service - OpenSSH per-connection server daemon (10.0.0.1:55154). Jul 11 00:14:46.270891 sshd[6217]: Accepted publickey for core from 10.0.0.1 port 55154 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:14:46.272773 sshd[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:46.277785 systemd-logind[1439]: New session 23 of user core. Jul 11 00:14:46.285902 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 11 00:14:46.434401 sshd[6217]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:46.438250 systemd-logind[1439]: Session 23 logged out. Waiting for processes to exit. Jul 11 00:14:46.438505 systemd[1]: sshd@22-10.0.0.36:22-10.0.0.1:55154.service: Deactivated successfully. Jul 11 00:14:46.440973 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 00:14:46.443073 systemd-logind[1439]: Removed session 23. Jul 11 00:14:51.458023 kubelet[2482]: E0711 00:14:51.457961 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:14:51.463063 systemd[1]: Started sshd@23-10.0.0.36:22-10.0.0.1:42786.service - OpenSSH per-connection server daemon (10.0.0.1:42786). Jul 11 00:14:51.514517 sshd[6232]: Accepted publickey for core from 10.0.0.1 port 42786 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:14:51.517721 sshd[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:51.523601 systemd-logind[1439]: New session 24 of user core. Jul 11 00:14:51.536086 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 11 00:14:51.744602 sshd[6232]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:51.751655 systemd[1]: sshd@23-10.0.0.36:22-10.0.0.1:42786.service: Deactivated successfully. Jul 11 00:14:51.754303 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 00:14:51.755330 systemd-logind[1439]: Session 24 logged out. Waiting for processes to exit. Jul 11 00:14:51.756439 systemd-logind[1439]: Removed session 24. Jul 11 00:14:54.461240 kubelet[2482]: E0711 00:14:54.461174 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:14:56.760507 systemd[1]: Started sshd@24-10.0.0.36:22-10.0.0.1:42794.service - OpenSSH per-connection server daemon (10.0.0.1:42794). Jul 11 00:14:56.815492 sshd[6248]: Accepted publickey for core from 10.0.0.1 port 42794 ssh2: RSA SHA256:ZG2VFSwdBfmn0pOyYTfSuR7MR6gTJl3QoMafQYL2V7E Jul 11 00:14:56.817414 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:14:56.821985 systemd-logind[1439]: New session 25 of user core. Jul 11 00:14:56.825894 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 11 00:14:57.064795 sshd[6248]: pam_unix(sshd:session): session closed for user core Jul 11 00:14:57.070900 systemd[1]: sshd@24-10.0.0.36:22-10.0.0.1:42794.service: Deactivated successfully. 
Jul 11 00:14:57.073785 systemd[1]: session-25.scope: Deactivated successfully. Jul 11 00:14:57.074863 systemd-logind[1439]: Session 25 logged out. Waiting for processes to exit. Jul 11 00:14:57.075904 systemd-logind[1439]: Removed session 25.