Jul 6 23:59:20.882530 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:59:20.882558 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:59:20.882572 kernel: BIOS-provided physical RAM map:
Jul 6 23:59:20.882581 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 6 23:59:20.882589 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 6 23:59:20.882598 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 6 23:59:20.882609 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 6 23:59:20.882618 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 6 23:59:20.882627 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 6 23:59:20.882640 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 6 23:59:20.882650 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 6 23:59:20.882659 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 6 23:59:20.882682 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 6 23:59:20.882690 kernel: NX (Execute Disable) protection: active
Jul 6 23:59:20.882701 kernel: APIC: Static calls initialized
Jul 6 23:59:20.882714 kernel: SMBIOS 2.8 present.
Jul 6 23:59:20.882723 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 6 23:59:20.882732 kernel: Hypervisor detected: KVM
Jul 6 23:59:20.882741 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:59:20.882750 kernel: kvm-clock: using sched offset of 2183881697 cycles
Jul 6 23:59:20.882759 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:59:20.882769 kernel: tsc: Detected 2794.748 MHz processor
Jul 6 23:59:20.882778 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:59:20.882788 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:59:20.882798 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 6 23:59:20.882810 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 6 23:59:20.882819 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:59:20.882829 kernel: Using GB pages for direct mapping
Jul 6 23:59:20.882838 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:59:20.882847 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 6 23:59:20.882856 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:59:20.882866 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:59:20.882875 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:59:20.882887 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 6 23:59:20.882897 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:59:20.882906 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:59:20.882915 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:59:20.882925 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:59:20.882934 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 6 23:59:20.882960 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 6 23:59:20.882983 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 6 23:59:20.882995 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 6 23:59:20.883005 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 6 23:59:20.883015 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 6 23:59:20.883025 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 6 23:59:20.883034 kernel: No NUMA configuration found
Jul 6 23:59:20.883048 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 6 23:59:20.883057 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 6 23:59:20.883070 kernel: Zone ranges:
Jul 6 23:59:20.883080 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:59:20.883090 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 6 23:59:20.883099 kernel: Normal empty
Jul 6 23:59:20.883109 kernel: Movable zone start for each node
Jul 6 23:59:20.883119 kernel: Early memory node ranges
Jul 6 23:59:20.883129 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 6 23:59:20.883138 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 6 23:59:20.883148 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 6 23:59:20.883161 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:59:20.883170 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 6 23:59:20.883180 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 6 23:59:20.883189 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 6 23:59:20.883199 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:59:20.883209 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:59:20.883219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 6 23:59:20.883229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:59:20.883238 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:59:20.883251 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:59:20.883261 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:59:20.883271 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:59:20.883280 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:59:20.883290 kernel: TSC deadline timer available
Jul 6 23:59:20.883300 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 6 23:59:20.883310 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:59:20.883329 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 6 23:59:20.883339 kernel: kvm-guest: setup PV sched yield
Jul 6 23:59:20.883349 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 6 23:59:20.883362 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:59:20.883372 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:59:20.883382 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 6 23:59:20.883391 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 6 23:59:20.883402 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 6 23:59:20.883411 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 6 23:59:20.883421 kernel: kvm-guest: PV spinlocks enabled
Jul 6 23:59:20.883430 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 6 23:59:20.883441 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:59:20.883455 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:59:20.883465 kernel: random: crng init done
Jul 6 23:59:20.883474 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:59:20.883484 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:59:20.883494 kernel: Fallback order for Node 0: 0
Jul 6 23:59:20.883504 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 6 23:59:20.883514 kernel: Policy zone: DMA32
Jul 6 23:59:20.883524 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:59:20.883537 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 136900K reserved, 0K cma-reserved)
Jul 6 23:59:20.883547 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 6 23:59:20.883557 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:59:20.883566 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:59:20.883576 kernel: Dynamic Preempt: voluntary
Jul 6 23:59:20.883586 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:59:20.883597 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:59:20.883608 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 6 23:59:20.883620 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:59:20.883636 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:59:20.883646 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:59:20.883655 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:59:20.883677 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 6 23:59:20.883697 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 6 23:59:20.883707 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:59:20.883717 kernel: Console: colour VGA+ 80x25
Jul 6 23:59:20.883727 kernel: printk: console [ttyS0] enabled
Jul 6 23:59:20.883736 kernel: ACPI: Core revision 20230628
Jul 6 23:59:20.883750 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 6 23:59:20.883760 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:59:20.883770 kernel: x2apic enabled
Jul 6 23:59:20.883779 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:59:20.883789 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 6 23:59:20.883799 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 6 23:59:20.883809 kernel: kvm-guest: setup PV IPIs
Jul 6 23:59:20.883831 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 6 23:59:20.883842 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 6 23:59:20.883852 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 6 23:59:20.883862 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 6 23:59:20.883873 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 6 23:59:20.883886 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 6 23:59:20.883896 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:59:20.883906 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:59:20.883917 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:59:20.883930 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 6 23:59:20.883940 kernel: RETBleed: Mitigation: untrained return thunk
Jul 6 23:59:20.883951 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 6 23:59:20.883961 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 6 23:59:20.883971 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 6 23:59:20.883982 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 6 23:59:20.883992 kernel: x86/bugs: return thunk changed
Jul 6 23:59:20.884002 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 6 23:59:20.884013 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:59:20.884026 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:59:20.884036 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:59:20.884047 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:59:20.884057 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 6 23:59:20.884067 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:59:20.884077 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:59:20.884088 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:59:20.884098 kernel: landlock: Up and running.
Jul 6 23:59:20.884108 kernel: SELinux: Initializing.
Jul 6 23:59:20.884122 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:59:20.884132 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:59:20.884143 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 6 23:59:20.884153 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:59:20.884163 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:59:20.884174 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:59:20.884184 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 6 23:59:20.884195 kernel: ... version: 0
Jul 6 23:59:20.884207 kernel: ... bit width: 48
Jul 6 23:59:20.884218 kernel: ... generic registers: 6
Jul 6 23:59:20.884228 kernel: ... value mask: 0000ffffffffffff
Jul 6 23:59:20.884238 kernel: ... max period: 00007fffffffffff
Jul 6 23:59:20.884248 kernel: ... fixed-purpose events: 0
Jul 6 23:59:20.884258 kernel: ... event mask: 000000000000003f
Jul 6 23:59:20.884268 kernel: signal: max sigframe size: 1776
Jul 6 23:59:20.884278 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:59:20.884289 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:59:20.884299 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:59:20.884312 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:59:20.884331 kernel: .... node #0, CPUs: #1 #2 #3
Jul 6 23:59:20.884342 kernel: smp: Brought up 1 node, 4 CPUs
Jul 6 23:59:20.884352 kernel: smpboot: Max logical packages: 1
Jul 6 23:59:20.884362 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 6 23:59:20.884373 kernel: devtmpfs: initialized
Jul 6 23:59:20.884383 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:59:20.884393 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:59:20.884403 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 6 23:59:20.884417 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:59:20.884434 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:59:20.884444 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:59:20.884454 kernel: audit: type=2000 audit(1751846360.307:1): state=initialized audit_enabled=0 res=1
Jul 6 23:59:20.884464 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:59:20.884475 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:59:20.884485 kernel: cpuidle: using governor menu
Jul 6 23:59:20.884495 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:59:20.884506 kernel: dca service started, version 1.12.1
Jul 6 23:59:20.884519 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 6 23:59:20.884530 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 6 23:59:20.884540 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:59:20.884550 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:59:20.884561 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:59:20.884571 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:59:20.884582 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:59:20.884592 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:59:20.884602 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:59:20.884616 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:59:20.884626 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:59:20.884636 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:59:20.884646 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:59:20.884656 kernel: ACPI: Interpreter enabled
Jul 6 23:59:20.884704 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 6 23:59:20.884721 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:59:20.884732 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:59:20.884742 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:59:20.884756 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 6 23:59:20.884767 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:59:20.884985 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:59:20.885136 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 6 23:59:20.885279 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 6 23:59:20.885293 kernel: PCI host bridge to bus 0000:00
Jul 6 23:59:20.885451 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:59:20.885593 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:59:20.885758 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:59:20.885892 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 6 23:59:20.886022 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 6 23:59:20.886152 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 6 23:59:20.886281 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:59:20.886455 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 6 23:59:20.886628 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 6 23:59:20.886799 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 6 23:59:20.886944 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 6 23:59:20.887085 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 6 23:59:20.887228 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:59:20.887395 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 6 23:59:20.887547 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 6 23:59:20.887721 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 6 23:59:20.887869 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 6 23:59:20.888024 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 6 23:59:20.888169 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 6 23:59:20.888314 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 6 23:59:20.888471 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 6 23:59:20.888632 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 6 23:59:20.888807 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 6 23:59:20.888950 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 6 23:59:20.889093 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 6 23:59:20.889235 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 6 23:59:20.889398 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 6 23:59:20.889541 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 6 23:59:20.889719 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 6 23:59:20.889864 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 6 23:59:20.890005 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 6 23:59:20.890156 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 6 23:59:20.890298 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 6 23:59:20.890312 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:59:20.890337 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:59:20.890347 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:59:20.890358 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:59:20.890368 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 6 23:59:20.890378 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 6 23:59:20.890388 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 6 23:59:20.890399 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 6 23:59:20.890409 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 6 23:59:20.890420 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 6 23:59:20.890434 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 6 23:59:20.890444 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 6 23:59:20.890454 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 6 23:59:20.890464 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 6 23:59:20.890475 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 6 23:59:20.890485 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 6 23:59:20.890495 kernel: iommu: Default domain type: Translated
Jul 6 23:59:20.890506 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:59:20.890516 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:59:20.890526 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:59:20.890540 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 6 23:59:20.890550 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 6 23:59:20.890744 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 6 23:59:20.890891 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 6 23:59:20.891034 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:59:20.891048 kernel: vgaarb: loaded
Jul 6 23:59:20.891058 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 6 23:59:20.891069 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 6 23:59:20.891084 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:59:20.891095 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:59:20.891106 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:59:20.891116 kernel: pnp: PnP ACPI init
Jul 6 23:59:20.891277 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 6 23:59:20.891292 kernel: pnp: PnP ACPI: found 6 devices
Jul 6 23:59:20.891303 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:59:20.891313 kernel: NET: Registered PF_INET protocol family
Jul 6 23:59:20.891336 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:59:20.891347 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:59:20.891357 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:59:20.891368 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:59:20.891378 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:59:20.891389 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:59:20.891399 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:59:20.891410 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:59:20.891420 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:59:20.891434 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:59:20.891568 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:59:20.891728 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:59:20.891877 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:59:20.892010 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 6 23:59:20.892140 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 6 23:59:20.892272 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 6 23:59:20.892285 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:59:20.892300 kernel: Initialise system trusted keyrings
Jul 6 23:59:20.892311 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:59:20.892332 kernel: Key type asymmetric registered
Jul 6 23:59:20.892342 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:59:20.892352 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:59:20.892363 kernel: io scheduler mq-deadline registered
Jul 6 23:59:20.892374 kernel: io scheduler kyber registered
Jul 6 23:59:20.892384 kernel: io scheduler bfq registered
Jul 6 23:59:20.892395 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:59:20.892410 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 6 23:59:20.892421 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 6 23:59:20.892431 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 6 23:59:20.892441 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:59:20.892452 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:59:20.892463 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:59:20.892474 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:59:20.892484 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:59:20.892635 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 6 23:59:20.892840 kernel: rtc_cmos 00:04: registered as rtc0
Jul 6 23:59:20.892855 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:59:20.892987 kernel: rtc_cmos 00:04: setting system clock to 2025-07-06T23:59:20 UTC (1751846360)
Jul 6 23:59:20.893120 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 6 23:59:20.893133 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 6 23:59:20.893144 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:59:20.893154 kernel: Segment Routing with IPv6
Jul 6 23:59:20.893165 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:59:20.893179 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:59:20.893190 kernel: Key type dns_resolver registered
Jul 6 23:59:20.893200 kernel: IPI shorthand broadcast: enabled
Jul 6 23:59:20.893211 kernel: sched_clock: Marking stable (610002398, 104844313)->(729486448, -14639737)
Jul 6 23:59:20.893221 kernel: registered taskstats version 1
Jul 6 23:59:20.893232 kernel: Loading compiled-in X.509 certificates
Jul 6 23:59:20.893243 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 6 23:59:20.893254 kernel: Key type .fscrypt registered
Jul 6 23:59:20.893264 kernel: Key type fscrypt-provisioning registered
Jul 6 23:59:20.893277 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:59:20.893288 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:59:20.893298 kernel: ima: No architecture policies found
Jul 6 23:59:20.893308 kernel: clk: Disabling unused clocks
Jul 6 23:59:20.893319 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 6 23:59:20.893339 kernel: Write protecting the kernel read-only data: 36864k
Jul 6 23:59:20.893349 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 6 23:59:20.893360 kernel: Run /init as init process
Jul 6 23:59:20.893370 kernel: with arguments:
Jul 6 23:59:20.893383 kernel: /init
Jul 6 23:59:20.893394 kernel: with environment:
Jul 6 23:59:20.893404 kernel: HOME=/
Jul 6 23:59:20.893414 kernel: TERM=linux
Jul 6 23:59:20.893424 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:59:20.893437 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:59:20.893450 systemd[1]: Detected virtualization kvm.
Jul 6 23:59:20.893461 systemd[1]: Detected architecture x86-64.
Jul 6 23:59:20.893476 systemd[1]: Running in initrd.
Jul 6 23:59:20.893486 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:59:20.893497 systemd[1]: Hostname set to <localhost>.
Jul 6 23:59:20.893509 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:59:20.893520 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:59:20.893531 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:59:20.893542 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:59:20.893554 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:59:20.893569 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:59:20.893597 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:59:20.893611 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:59:20.893624 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:59:20.893639 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:59:20.893650 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:59:20.893676 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:59:20.893688 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:59:20.893699 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:59:20.893711 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:59:20.893722 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:59:20.893734 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:59:20.893745 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:59:20.893760 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:59:20.893772 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:59:20.893783 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:59:20.893794 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:59:20.893806 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:59:20.893817 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:59:20.893828 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:59:20.893839 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:59:20.893854 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:59:20.893865 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:59:20.893876 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:59:20.893887 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:59:20.893899 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:59:20.893910 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:59:20.893922 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:59:20.893933 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:59:20.893973 systemd-journald[193]: Collecting audit messages is disabled. Jul 6 23:59:20.894003 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:59:20.894015 systemd-journald[193]: Journal started Jul 6 23:59:20.894042 systemd-journald[193]: Runtime Journal (/run/log/journal/690dc7a99b824c70b270fda38a4f6a91) is 6.0M, max 48.4M, 42.3M free. Jul 6 23:59:20.880400 systemd-modules-load[194]: Inserted module 'overlay' Jul 6 23:59:20.919250 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:59:20.919273 kernel: Bridge firewalling registered Jul 6 23:59:20.911709 systemd-modules-load[194]: Inserted module 'br_netfilter' Jul 6 23:59:20.920925 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:59:20.922274 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:59:20.924531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:59:20.926912 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:59:20.944827 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:59:20.945647 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:59:20.946745 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:59:20.949804 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:59:20.962154 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:59:20.962457 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:59:20.966807 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:59:20.972862 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:59:20.975448 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 6 23:59:20.976308 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 6 23:59:20.991098 dracut-cmdline[231]: dracut-dracut-053 Jul 6 23:59:20.994522 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:59:21.010853 systemd-resolved[224]: Positive Trust Anchors: Jul 6 23:59:21.010867 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:59:21.010900 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:59:21.014003 systemd-resolved[224]: Defaulting to hostname 'linux'. Jul 6 23:59:21.015214 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:59:21.020770 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:59:21.080691 kernel: SCSI subsystem initialized Jul 6 23:59:21.090692 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:59:21.100696 kernel: iscsi: registered transport (tcp) Jul 6 23:59:21.122702 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:59:21.122759 kernel: QLogic iSCSI HBA Driver Jul 6 23:59:21.167422 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:59:21.177817 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:59:21.204647 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:59:21.204733 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:59:21.204748 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:59:21.245692 kernel: raid6: avx2x4 gen() 29869 MB/s Jul 6 23:59:21.262686 kernel: raid6: avx2x2 gen() 30013 MB/s Jul 6 23:59:21.279719 kernel: raid6: avx2x1 gen() 25552 MB/s Jul 6 23:59:21.279738 kernel: raid6: using algorithm avx2x2 gen() 30013 MB/s Jul 6 23:59:21.297739 kernel: raid6: .... xor() 19432 MB/s, rmw enabled Jul 6 23:59:21.297768 kernel: raid6: using avx2x2 recovery algorithm Jul 6 23:59:21.317684 kernel: xor: automatically using best checksumming function avx Jul 6 23:59:21.470693 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:59:21.482158 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:59:21.489845 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:59:21.500923 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jul 6 23:59:21.505511 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:59:21.520818 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jul 6 23:59:21.532248 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Jul 6 23:59:21.559769 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:59:21.575847 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:59:21.640118 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:59:21.648843 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:59:21.665057 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:59:21.668631 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:59:21.670644 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:59:21.672009 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:59:21.686745 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 6 23:59:21.692973 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 6 23:59:21.693182 kernel: cryptd: max_cpu_qlen set to 1000 Jul 6 23:59:21.689706 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:59:21.700729 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 6 23:59:21.700756 kernel: GPT:9289727 != 19775487 Jul 6 23:59:21.700770 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 6 23:59:21.700784 kernel: GPT:9289727 != 19775487 Jul 6 23:59:21.700798 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:59:21.700811 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:59:21.705745 kernel: AVX2 version of gcm_enc/dec engaged. Jul 6 23:59:21.705772 kernel: AES CTR mode by8 optimization enabled Jul 6 23:59:21.718352 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:59:21.720799 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:59:21.720964 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:59:21.727111 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:59:21.730378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:59:21.730721 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:59:21.738044 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466) Jul 6 23:59:21.738527 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:59:21.743339 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (458) Jul 6 23:59:21.743361 kernel: libata version 3.00 loaded. Jul 6 23:59:21.749004 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 6 23:59:21.754745 kernel: ahci 0000:00:1f.2: version 3.0 Jul 6 23:59:21.755218 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 6 23:59:21.755236 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 6 23:59:21.755451 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 6 23:59:21.756683 kernel: scsi host0: ahci Jul 6 23:59:21.757765 kernel: scsi host1: ahci Jul 6 23:59:21.758688 kernel: scsi host2: ahci Jul 6 23:59:21.759971 kernel: scsi host3: ahci Jul 6 23:59:21.760183 kernel: scsi host4: ahci Jul 6 23:59:21.763226 kernel: scsi host5: ahci Jul 6 23:59:21.763403 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jul 6 23:59:21.763416 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jul 6 23:59:21.763426 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jul 6 23:59:21.763435 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jul 6 23:59:21.763445 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jul 6 23:59:21.763460 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jul 6 23:59:21.773871 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 6 23:59:21.802048 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:59:21.813567 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 6 23:59:21.820987 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:59:21.827252 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 6 23:59:21.830526 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 6 23:59:21.844810 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:59:21.846621 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:59:21.854416 disk-uuid[558]: Primary Header is updated. Jul 6 23:59:21.854416 disk-uuid[558]: Secondary Entries is updated. Jul 6 23:59:21.854416 disk-uuid[558]: Secondary Header is updated. Jul 6 23:59:21.858699 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:59:21.862684 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:59:21.863582 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 6 23:59:22.069697 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 6 23:59:22.077691 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 6 23:59:22.077721 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 6 23:59:22.078685 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 6 23:59:22.078709 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 6 23:59:22.079693 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 6 23:59:22.080692 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 6 23:59:22.080707 kernel: ata3.00: applying bridge limits Jul 6 23:59:22.081685 kernel: ata3.00: configured for UDMA/100 Jul 6 23:59:22.083693 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 6 23:59:22.129149 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 6 23:59:22.129365 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 6 23:59:22.141688 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 6 23:59:22.864693 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:59:22.864917 disk-uuid[562]: The operation has completed successfully. Jul 6 23:59:22.895055 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:59:22.895179 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:59:22.921809 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:59:22.927192 sh[592]: Success Jul 6 23:59:22.939711 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 6 23:59:22.970955 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:59:22.983277 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:59:22.985603 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 6 23:59:23.000819 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 6 23:59:23.000857 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:59:23.000868 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:59:23.001790 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:59:23.003076 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:59:23.006849 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:59:23.007447 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:59:23.012820 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:59:23.015060 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:59:23.022422 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:23.022456 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:59:23.022471 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:59:23.025679 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:59:23.033885 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 6 23:59:23.035616 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:23.043802 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jul 6 23:59:23.050857 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:59:23.106342 ignition[676]: Ignition 2.19.0 Jul 6 23:59:23.106354 ignition[676]: Stage: fetch-offline Jul 6 23:59:23.106390 ignition[676]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:23.106399 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:59:23.106487 ignition[676]: parsed url from cmdline: "" Jul 6 23:59:23.106490 ignition[676]: no config URL provided Jul 6 23:59:23.106495 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:59:23.106504 ignition[676]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:59:23.106529 ignition[676]: op(1): [started] loading QEMU firmware config module Jul 6 23:59:23.106535 ignition[676]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 6 23:59:23.114843 ignition[676]: op(1): [finished] loading QEMU firmware config module Jul 6 23:59:23.114867 ignition[676]: QEMU firmware config was not found. Ignoring... Jul 6 23:59:23.132848 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:59:23.145859 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:59:23.160883 ignition[676]: parsing config with SHA512: 16308bd4f09d437afff49179acdb97808c0ad0877217d86190cfe371b20e9e2bf34b60badc7dad6258e124a7963648af383a36b2ba1da8254c8f630dbe79bc66 Jul 6 23:59:23.165145 unknown[676]: fetched base config from "system" Jul 6 23:59:23.165305 unknown[676]: fetched user config from "qemu" Jul 6 23:59:23.165706 ignition[676]: fetch-offline: fetch-offline passed Jul 6 23:59:23.165784 ignition[676]: Ignition finished successfully Jul 6 23:59:23.169848 systemd-networkd[781]: lo: Link UP Jul 6 23:59:23.169858 systemd-networkd[781]: lo: Gained carrier Jul 6 23:59:23.171437 systemd-networkd[781]: Enumeration completed Jul 6 23:59:23.171733 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:59:23.171933 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:59:23.171938 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:59:23.173297 systemd[1]: Reached target network.target - Network. Jul 6 23:59:23.173476 systemd-networkd[781]: eth0: Link UP Jul 6 23:59:23.173481 systemd-networkd[781]: eth0: Gained carrier Jul 6 23:59:23.173489 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:59:23.181625 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:59:23.183896 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 6 23:59:23.192821 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 6 23:59:23.198707 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:59:23.211835 ignition[784]: Ignition 2.19.0 Jul 6 23:59:23.211845 ignition[784]: Stage: kargs Jul 6 23:59:23.212035 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:23.212046 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:59:23.215599 ignition[784]: kargs: kargs passed Jul 6 23:59:23.215645 ignition[784]: Ignition finished successfully Jul 6 23:59:23.219322 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:59:23.232850 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:59:23.244426 ignition[793]: Ignition 2.19.0 Jul 6 23:59:23.244435 ignition[793]: Stage: disks Jul 6 23:59:23.244592 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:23.244602 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:59:23.245362 ignition[793]: disks: disks passed Jul 6 23:59:23.247881 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:59:23.245405 ignition[793]: Ignition finished successfully Jul 6 23:59:23.249068 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:59:23.250751 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:59:23.252154 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:59:23.254330 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:59:23.255331 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:59:23.268878 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:59:23.280732 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 6 23:59:23.287317 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:59:23.296819 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:59:23.385696 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 6 23:59:23.386596 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:59:23.387505 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:59:23.401790 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:59:23.403722 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:59:23.404972 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 6 23:59:23.405023 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:59:23.413166 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812) Jul 6 23:59:23.413193 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:23.405046 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:59:23.418855 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:59:23.418877 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:59:23.418890 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:59:23.414401 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jul 6 23:59:23.419733 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:59:23.422234 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:59:23.466243 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:59:23.471015 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:59:23.475275 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:59:23.479162 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:59:23.564069 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:59:23.579747 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:59:23.581535 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:59:23.587695 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:23.608575 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:59:23.610774 ignition[925]: INFO : Ignition 2.19.0 Jul 6 23:59:23.610774 ignition[925]: INFO : Stage: mount Jul 6 23:59:23.612465 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:59:23.612465 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:59:23.614966 ignition[925]: INFO : mount: mount passed Jul 6 23:59:23.615717 ignition[925]: INFO : Ignition finished successfully Jul 6 23:59:23.618274 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:59:23.625771 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:59:24.000210 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:59:24.014818 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:59:24.020687 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937) Jul 6 23:59:24.022737 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:59:24.022760 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:59:24.022770 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:59:24.025687 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:59:24.026995 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 6 23:59:24.049907 ignition[954]: INFO : Ignition 2.19.0
Jul 6 23:59:24.049907 ignition[954]: INFO : Stage: files
Jul 6 23:59:24.051749 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:59:24.051749 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:59:24.051749 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:59:24.051749 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:59:24.051749 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:59:24.057997 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:59:24.057997 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:59:24.057997 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:59:24.057997 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 6 23:59:24.057997 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 6 23:59:24.053686 unknown[954]: wrote ssh authorized keys file for user: core
Jul 6 23:59:24.094642 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:59:24.225740 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 6 23:59:24.225740 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:59:24.229596 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:59:24.229596 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:59:24.229596 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:59:24.229596 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:59:24.229596 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:59:24.229596 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:59:24.229596 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:59:24.229596 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:59:24.229596 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:59:24.229596 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 6 23:59:24.229596 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 6 23:59:24.229596 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 6 23:59:24.229596 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 6 23:59:24.902870 systemd-networkd[781]: eth0: Gained IPv6LL
Jul 6 23:59:24.912959 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 6 23:59:25.365209 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 6 23:59:25.365209 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 6 23:59:25.369458 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:59:25.369458 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:59:25.369458 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 6 23:59:25.369458 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 6 23:59:25.369458 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:59:25.369458 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:59:25.369458 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 6 23:59:25.369458 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:59:25.392449 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:59:25.398739 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:59:25.400427 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:59:25.400427 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:59:25.400427 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:59:25.400427 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:59:25.400427 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:59:25.400427 ignition[954]: INFO : files: files passed
Jul 6 23:59:25.400427 ignition[954]: INFO : Ignition finished successfully
Jul 6 23:59:25.411839 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:59:25.434002 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:59:25.436269 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:59:25.442877 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:59:25.443044 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:59:25.447415 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 6 23:59:25.450875 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:59:25.450875 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:59:25.454011 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:59:25.457547 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:59:25.460240 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:59:25.473955 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:59:25.504512 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:59:25.504708 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:59:25.507301 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:59:25.509508 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:59:25.511787 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:59:25.523933 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:59:25.540819 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:59:25.550863 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:59:25.563125 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:59:25.565530 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:59:25.566788 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:59:25.568760 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:59:25.568908 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:59:25.571150 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:59:25.572644 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:59:25.574627 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:59:25.576629 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:59:25.578602 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:59:25.580835 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:59:25.582852 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:59:25.585066 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:59:25.587021 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:59:25.589132 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:59:25.590876 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:59:25.591070 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:59:25.593278 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:59:25.594685 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:59:25.596709 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:59:25.596857 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:59:25.598874 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:59:25.599044 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:59:25.601315 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:59:25.601454 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:59:25.603291 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:59:25.604973 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:59:25.608765 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:59:25.610515 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:59:25.612476 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:59:25.614313 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:59:25.614449 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:59:25.616285 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:59:25.616397 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:59:25.618721 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:59:25.618880 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:59:25.620726 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:59:25.620860 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:59:25.634009 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:59:25.637056 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:59:25.637976 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:59:25.638138 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:59:25.639139 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:59:25.639283 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:59:25.644551 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:59:25.644725 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:59:25.655418 ignition[1009]: INFO : Ignition 2.19.0
Jul 6 23:59:25.655418 ignition[1009]: INFO : Stage: umount
Jul 6 23:59:25.657760 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:59:25.657760 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:59:25.657760 ignition[1009]: INFO : umount: umount passed
Jul 6 23:59:25.657760 ignition[1009]: INFO : Ignition finished successfully
Jul 6 23:59:25.659021 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:59:25.659179 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:59:25.661743 systemd[1]: Stopped target network.target - Network.
Jul 6 23:59:25.663453 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:59:25.663531 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:59:25.665373 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:59:25.665436 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:59:25.667237 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:59:25.667300 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:59:25.668194 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:59:25.668270 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:59:25.668771 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:59:25.671896 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:59:25.673780 systemd-networkd[781]: eth0: DHCPv6 lease lost
Jul 6 23:59:25.676404 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:59:25.676563 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:59:25.678993 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:59:25.679120 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:59:25.681298 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:59:25.681355 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:59:25.688814 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:59:25.689080 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:59:25.689154 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:59:25.689488 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:59:25.689544 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:59:25.689942 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:59:25.689996 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:59:25.690271 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:59:25.690326 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:59:25.690731 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:59:25.704764 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:59:25.704946 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:59:25.711938 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:59:25.712189 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:59:25.715643 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:59:25.715731 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:59:25.718653 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:59:25.718775 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:59:25.719791 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:59:25.719854 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:59:25.720484 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:59:25.720538 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:59:25.721472 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:59:25.721527 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:59:25.736021 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:59:25.738548 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:59:25.738632 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:59:25.740921 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 6 23:59:25.742139 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:59:25.745978 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:59:25.746035 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:59:25.748284 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:59:25.748335 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:59:25.753725 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:59:25.755300 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:59:25.756444 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:59:25.887561 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:59:25.888590 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:59:25.890614 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:59:25.892583 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:59:25.892637 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:59:25.908809 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:59:25.920461 systemd[1]: Switching root.
Jul 6 23:59:25.947632 systemd-journald[193]: Journal stopped
Jul 6 23:59:27.309039 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:59:27.309106 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:59:27.309126 kernel: SELinux: policy capability open_perms=1
Jul 6 23:59:27.309137 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:59:27.309152 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:59:27.309172 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:59:27.309184 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:59:27.309200 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:59:27.309217 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:59:27.309228 kernel: audit: type=1403 audit(1751846366.497:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:59:27.309240 systemd[1]: Successfully loaded SELinux policy in 41.302ms.
Jul 6 23:59:27.309263 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.023ms.
Jul 6 23:59:27.309276 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:59:27.309291 systemd[1]: Detected virtualization kvm.
Jul 6 23:59:27.309303 systemd[1]: Detected architecture x86-64.
Jul 6 23:59:27.309315 systemd[1]: Detected first boot.
Jul 6 23:59:27.309327 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:59:27.309339 zram_generator::config[1054]: No configuration found.
Jul 6 23:59:27.309357 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:59:27.309370 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:59:27.309382 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:59:27.309396 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:59:27.309409 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:59:27.309421 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:59:27.309432 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:59:27.309444 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:59:27.309456 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:59:27.309468 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:59:27.309484 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:59:27.309509 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:59:27.309532 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:59:27.309579 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:59:27.309615 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:59:27.309653 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:59:27.309682 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:59:27.309709 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:59:27.309731 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 6 23:59:27.309757 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:59:27.309784 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:59:27.309816 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:59:27.309840 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:59:27.309861 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:59:27.309887 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:59:27.309906 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:59:27.309929 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:59:27.309944 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:59:27.309958 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:59:27.309970 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:59:27.309981 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:59:27.309993 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:59:27.310005 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:59:27.310016 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:59:27.310028 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:59:27.310040 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:59:27.310052 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:59:27.310066 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:59:27.310078 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:59:27.310090 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:59:27.310101 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:59:27.310114 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:59:27.310125 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:59:27.310138 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:59:27.310150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:59:27.310171 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:59:27.310184 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:59:27.310196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:59:27.310208 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:59:27.310220 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:59:27.310232 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:59:27.310243 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:59:27.310255 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:59:27.310267 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:59:27.310282 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:59:27.310294 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:59:27.310305 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:59:27.310317 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:59:27.310328 kernel: fuse: init (API version 7.39)
Jul 6 23:59:27.310340 kernel: loop: module loaded
Jul 6 23:59:27.310351 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:59:27.310363 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:59:27.310376 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:59:27.310390 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:59:27.310403 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:59:27.310415 systemd[1]: Stopped verity-setup.service.
Jul 6 23:59:27.310428 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:59:27.310440 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:59:27.310452 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:59:27.310463 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:59:27.310475 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:59:27.310504 systemd-journald[1124]: Collecting audit messages is disabled.
Jul 6 23:59:27.310532 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:59:27.310544 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:59:27.310555 kernel: ACPI: bus type drm_connector registered
Jul 6 23:59:27.310569 systemd-journald[1124]: Journal started
Jul 6 23:59:27.310590 systemd-journald[1124]: Runtime Journal (/run/log/journal/690dc7a99b824c70b270fda38a4f6a91) is 6.0M, max 48.4M, 42.3M free.
Jul 6 23:59:27.081179 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:59:27.099436 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 6 23:59:27.099916 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:59:27.313706 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:59:27.315552 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:59:27.317452 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:59:27.317633 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:59:27.319296 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:59:27.319507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:59:27.321014 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:59:27.321289 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:59:27.322991 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:59:27.323209 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:59:27.324794 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:59:27.326364 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:59:27.326574 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:59:27.328367 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:59:27.328579 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:59:27.330014 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:59:27.331433 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:59:27.332928 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:59:27.346574 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:59:27.352763 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:59:27.354995 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:59:27.356090 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:59:27.356116 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:59:27.358054 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 6 23:59:27.360331 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:59:27.362875 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:59:27.363968 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:59:27.366608 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:59:27.369737 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:59:27.370952 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:59:27.372407 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:59:27.373619 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:59:27.376430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:59:27.386833 systemd-journald[1124]: Time spent on flushing to /var/log/journal/690dc7a99b824c70b270fda38a4f6a91 is 13.688ms for 950 entries.
Jul 6 23:59:27.386833 systemd-journald[1124]: System Journal (/var/log/journal/690dc7a99b824c70b270fda38a4f6a91) is 8.0M, max 195.6M, 187.6M free.
Jul 6 23:59:27.420203 systemd-journald[1124]: Received client request to flush runtime journal.
Jul 6 23:59:27.420249 kernel: loop0: detected capacity change from 0 to 229808
Jul 6 23:59:27.382209 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:59:27.388173 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:59:27.393091 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:59:27.394661 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:59:27.396411 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:59:27.399518 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:59:27.404384 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:59:27.420027 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 6 23:59:27.421754 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:59:27.423435 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:59:27.434958 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:59:27.440842 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:59:27.449958 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 6 23:59:27.452744 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:59:27.453764 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 6 23:59:27.457235 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Jul 6 23:59:27.457251 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Jul 6 23:59:27.463891 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:59:27.465990 kernel: loop1: detected capacity change from 0 to 142488
Jul 6 23:59:27.474137 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:59:27.476532 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 6 23:59:27.501373 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:59:27.509686 kernel: loop2: detected capacity change from 0 to 140768
Jul 6 23:59:27.512217 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:59:27.533000 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Jul 6 23:59:27.533022 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Jul 6 23:59:27.539381 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:59:27.546742 kernel: loop3: detected capacity change from 0 to 229808
Jul 6 23:59:27.554701 kernel: loop4: detected capacity change from 0 to 142488
Jul 6 23:59:27.566054 kernel: loop5: detected capacity change from 0 to 140768
Jul 6 23:59:27.578363 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 6 23:59:27.578961 (sd-merge)[1196]: Merged extensions into '/usr'.
Jul 6 23:59:27.583386 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:59:27.583400 systemd[1]: Reloading...
Jul 6 23:59:27.643039 zram_generator::config[1220]: No configuration found.
Jul 6 23:59:27.709273 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:59:27.771936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:59:27.822864 systemd[1]: Reloading finished in 238 ms.
Jul 6 23:59:27.860445 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:59:27.862363 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:59:27.887064 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:59:27.889551 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:59:27.895012 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:59:27.895126 systemd[1]: Reloading...
Jul 6 23:59:27.938608 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:59:27.939156 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:59:27.941057 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:59:27.941564 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jul 6 23:59:27.942870 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jul 6 23:59:27.948077 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:59:27.948551 systemd-tmpfiles[1260]: Skipping /boot
Jul 6 23:59:27.950709 zram_generator::config[1289]: No configuration found.
Jul 6 23:59:27.960864 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:59:27.960883 systemd-tmpfiles[1260]: Skipping /boot
Jul 6 23:59:28.063054 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:59:28.113693 systemd[1]: Reloading finished in 218 ms.
Jul 6 23:59:28.134062 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:59:28.148455 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:59:28.157798 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 6 23:59:28.160446 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:59:28.162926 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:59:28.166744 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:59:28.170408 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:59:28.173114 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:59:28.177491 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:59:28.177687 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:59:28.182811 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:59:28.186736 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:59:28.198908 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:59:28.200157 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:59:28.204201 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:59:28.205337 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:59:28.206529 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:59:28.207489 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Jul 6 23:59:28.208362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:59:28.208536 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:59:28.211042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:59:28.211226 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:59:28.213200 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:59:28.213650 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:59:28.216034 augenrules[1350]: No rules
Jul 6 23:59:28.220075 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 6 23:59:28.229924 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:59:28.234303 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:59:28.237993 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:59:28.238203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:59:28.250916 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:59:28.255266 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:59:28.258879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:59:28.271839 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:59:28.274871 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:59:28.278849 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:59:28.281801 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:59:28.282865 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:59:28.283343 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:59:28.285146 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:59:28.287046 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:59:28.289732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:59:28.289924 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:59:28.290333 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:59:28.290495 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:59:28.291321 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:59:28.291482 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:59:28.297134 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:59:28.297325 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:59:28.314736 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:59:28.321262 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 6 23:59:28.321590 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:59:28.321662 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:59:28.335816 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 6 23:59:28.337147 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:59:28.348697 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1362)
Jul 6 23:59:28.388469 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:59:28.401944 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:59:28.409073 systemd-resolved[1329]: Positive Trust Anchors:
Jul 6 23:59:28.409094 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:59:28.409125 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:59:28.411012 systemd-networkd[1387]: lo: Link UP
Jul 6 23:59:28.411017 systemd-networkd[1387]: lo: Gained carrier
Jul 6 23:59:28.416914 systemd-resolved[1329]: Defaulting to hostname 'linux'.
Jul 6 23:59:28.432432 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:59:28.434992 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 6 23:59:28.437787 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:59:28.439193 systemd-networkd[1387]: Enumeration completed
Jul 6 23:59:28.439628 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:59:28.439639 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:59:28.440328 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:59:28.442440 systemd[1]: Reached target network.target - Network.
Jul 6 23:59:28.444285 systemd-networkd[1387]: eth0: Link UP
Jul 6 23:59:28.444296 systemd-networkd[1387]: eth0: Gained carrier
Jul 6 23:59:28.444309 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:59:28.445114 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:59:28.449689 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 6 23:59:28.449723 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 6 23:59:28.449744 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:59:28.452215 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 6 23:59:28.452432 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 6 23:59:28.458684 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jul 6 23:59:28.461740 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:59:28.462572 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection.
Jul 6 23:59:28.882137 kernel: ACPI: button: Power Button [PWRF]
Jul 6 23:59:28.874678 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 6 23:59:28.874718 systemd-timesyncd[1404]: Initial clock synchronization to Sun 2025-07-06 23:59:28.874569 UTC.
Jul 6 23:59:28.874984 systemd-resolved[1329]: Clock change detected. Flushing caches.
Jul 6 23:59:28.881241 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:59:28.891573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:59:28.937977 kernel: mousedev: PS/2 mouse device common for all mice
Jul 6 23:59:28.951177 kernel: kvm_amd: TSC scaling supported
Jul 6 23:59:28.951229 kernel: kvm_amd: Nested Virtualization enabled
Jul 6 23:59:28.951243 kernel: kvm_amd: Nested Paging enabled
Jul 6 23:59:28.951255 kernel: kvm_amd: LBR virtualization supported
Jul 6 23:59:28.952394 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 6 23:59:28.952434 kernel: kvm_amd: Virtual GIF supported
Jul 6 23:59:28.973995 kernel: EDAC MC: Ver: 3.0.0
Jul 6 23:59:29.009627 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 6 23:59:29.029181 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 6 23:59:29.030761 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:59:29.040103 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:59:29.075449 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 6 23:59:29.078339 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:59:29.079445 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:59:29.082208 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:59:29.083429 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:59:29.084893 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:59:29.086065 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:59:29.087275 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:59:29.088485 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:59:29.088516 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:59:29.089379 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:59:29.091160 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:59:29.094042 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:59:29.104858 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:59:29.107429 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 6 23:59:29.109011 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:59:29.110128 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:59:29.111061 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:59:29.112006 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:59:29.112036 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:59:29.113181 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:59:29.115224 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:59:29.117003 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:59:29.120060 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:59:29.123227 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:59:29.124785 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:59:29.127108 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:59:29.130132 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:59:29.132850 jq[1431]: false
Jul 6 23:59:29.135130 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:59:29.139181 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:59:29.144608 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:59:29.146153 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:59:29.146759 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:59:29.148817 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:59:29.154245 dbus-daemon[1430]: [system] SELinux support is enabled
Jul 6 23:59:29.155094 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:59:29.157148 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:59:29.160270 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 6 23:59:29.164116 extend-filesystems[1432]: Found loop3
Jul 6 23:59:29.165077 extend-filesystems[1432]: Found loop4
Jul 6 23:59:29.165077 extend-filesystems[1432]: Found loop5
Jul 6 23:59:29.165077 extend-filesystems[1432]: Found sr0
Jul 6 23:59:29.165077 extend-filesystems[1432]: Found vda
Jul 6 23:59:29.165077 extend-filesystems[1432]: Found vda1
Jul 6 23:59:29.165077 extend-filesystems[1432]: Found vda2
Jul 6 23:59:29.165077 extend-filesystems[1432]: Found vda3
Jul 6 23:59:29.165077 extend-filesystems[1432]: Found usr
Jul 6 23:59:29.165077 extend-filesystems[1432]: Found vda4
Jul 6 23:59:29.165077 extend-filesystems[1432]: Found vda6
Jul 6 23:59:29.165077 extend-filesystems[1432]: Found vda7
Jul 6 23:59:29.165077 extend-filesystems[1432]: Found vda9
Jul 6 23:59:29.165077 extend-filesystems[1432]: Checking size of /dev/vda9
Jul 6 23:59:29.183081 jq[1445]: true
Jul 6 23:59:29.167712 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:59:29.183279 update_engine[1444]: I20250706 23:59:29.177078 1444 main.cc:92] Flatcar Update Engine starting
Jul 6 23:59:29.183279 update_engine[1444]: I20250706 23:59:29.178393 1444 update_check_scheduler.cc:74] Next update check in 9m52s
Jul 6 23:59:29.167983 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:59:29.169646 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:59:29.169874 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:59:29.177343 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:59:29.177569 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:59:29.190186 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:59:29.194926 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:59:29.195043 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:59:29.196472 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:59:29.196493 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:59:29.199060 jq[1453]: true
Jul 6 23:59:29.199252 extend-filesystems[1432]: Resized partition /dev/vda9
Jul 6 23:59:29.202216 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:59:29.206544 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024)
Jul 6 23:59:29.215841 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 6 23:59:29.215866 tar[1451]: linux-amd64/LICENSE
Jul 6 23:59:29.215866 tar[1451]: linux-amd64/helm
Jul 6 23:59:29.212236 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:59:29.223989 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1374)
Jul 6 23:59:29.240970 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 6 23:59:29.270625 systemd-logind[1440]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 6 23:59:29.270657 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 6 23:59:29.271751 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 6 23:59:29.271751 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 6 23:59:29.271751 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 6 23:59:29.279624 extend-filesystems[1432]: Resized filesystem in /dev/vda9
Jul 6 23:59:29.273997 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:59:29.274232 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:59:29.274550 systemd-logind[1440]: New seat seat0.
Jul 6 23:59:29.281816 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:59:29.285668 bash[1484]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:59:29.288654 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:59:29.290632 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:59:29.291658 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 6 23:59:29.408122 containerd[1456]: time="2025-07-06T23:59:29.407920170Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 6 23:59:29.413926 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:59:29.431499 containerd[1456]: time="2025-07-06T23:59:29.431420782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:59:29.433380 containerd[1456]: time="2025-07-06T23:59:29.433329662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:59:29.433380 containerd[1456]: time="2025-07-06T23:59:29.433369586Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 6 23:59:29.433380 containerd[1456]: time="2025-07-06T23:59:29.433387460Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 6 23:59:29.433603 containerd[1456]: time="2025-07-06T23:59:29.433584379Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 6 23:59:29.433626 containerd[1456]: time="2025-07-06T23:59:29.433606431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 6 23:59:29.433692 containerd[1456]: time="2025-07-06T23:59:29.433674919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:59:29.433719 containerd[1456]: time="2025-07-06T23:59:29.433690338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:59:29.433902 containerd[1456]: time="2025-07-06T23:59:29.433883510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:59:29.433926 containerd[1456]: time="2025-07-06T23:59:29.433900502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 6 23:59:29.433926 containerd[1456]: time="2025-07-06T23:59:29.433914178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:59:29.433981 containerd[1456]: time="2025-07-06T23:59:29.433925088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 6 23:59:29.434057 containerd[1456]: time="2025-07-06T23:59:29.434039122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:59:29.434327 containerd[1456]: time="2025-07-06T23:59:29.434297817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:59:29.434502 containerd[1456]: time="2025-07-06T23:59:29.434470291Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:59:29.434502 containerd[1456]: time="2025-07-06T23:59:29.434495598Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 6 23:59:29.434643 containerd[1456]: time="2025-07-06T23:59:29.434624881Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 6 23:59:29.434702 containerd[1456]: time="2025-07-06T23:59:29.434687428Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:59:29.440739 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:59:29.442337 containerd[1456]: time="2025-07-06T23:59:29.442285086Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 6 23:59:29.442467 containerd[1456]: time="2025-07-06T23:59:29.442359766Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 6 23:59:29.442467 containerd[1456]: time="2025-07-06T23:59:29.442378211Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 6 23:59:29.442467 containerd[1456]: time="2025-07-06T23:59:29.442394511Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 6 23:59:29.442467 containerd[1456]: time="2025-07-06T23:59:29.442418306Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 6 23:59:29.442652 containerd[1456]: time="2025-07-06T23:59:29.442596109Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 6 23:59:29.442989 containerd[1456]: time="2025-07-06T23:59:29.442922131Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 6 23:59:29.443181 containerd[1456]: time="2025-07-06T23:59:29.443160348Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 6 23:59:29.443215 containerd[1456]: time="2025-07-06T23:59:29.443183641Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 6 23:59:29.443215 containerd[1456]: time="2025-07-06T23:59:29.443200543Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 6 23:59:29.443253 containerd[1456]: time="2025-07-06T23:59:29.443215661Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 6 23:59:29.443253 containerd[1456]: time="2025-07-06T23:59:29.443230088Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 6 23:59:29.443253 containerd[1456]: time="2025-07-06T23:59:29.443243133Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 6 23:59:29.443320 containerd[1456]: time="2025-07-06T23:59:29.443257359Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 6 23:59:29.443320 containerd[1456]: time="2025-07-06T23:59:29.443272839Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 6 23:59:29.443320 containerd[1456]: time="2025-07-06T23:59:29.443286544Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 6 23:59:29.443320 containerd[1456]: time="2025-07-06T23:59:29.443300350Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 6 23:59:29.443320 containerd[1456]: time="2025-07-06T23:59:29.443312974Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 6 23:59:29.443411 containerd[1456]: time="2025-07-06T23:59:29.443334695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443411 containerd[1456]: time="2025-07-06T23:59:29.443348520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443411 containerd[1456]: time="2025-07-06T23:59:29.443361144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443411 containerd[1456]: time="2025-07-06T23:59:29.443373567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443411 containerd[1456]: time="2025-07-06T23:59:29.443385800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443411 containerd[1456]: time="2025-07-06T23:59:29.443398454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443530 containerd[1456]: time="2025-07-06T23:59:29.443420345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443530 containerd[1456]: time="2025-07-06T23:59:29.443434502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443530 containerd[1456]: time="2025-07-06T23:59:29.443451193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443530 containerd[1456]: time="2025-07-06T23:59:29.443471471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443530 containerd[1456]: time="2025-07-06T23:59:29.443487190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443530 containerd[1456]: time="2025-07-06T23:59:29.443507599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443530 containerd[1456]: time="2025-07-06T23:59:29.443520523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443684 containerd[1456]: time="2025-07-06T23:59:29.443535271Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 6 23:59:29.443684 containerd[1456]: time="2025-07-06T23:59:29.443555649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443684 containerd[1456]: time="2025-07-06T23:59:29.443566870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 6 23:59:29.443684 containerd[1456]: time="2025-07-06T23:59:29.443577810Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 6 23:59:29.443684 containerd[1456]: time="2025-07-06T23:59:29.443637593Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
type=io.containerd.tracing.processor.v1 Jul 6 23:59:29.443684 containerd[1456]: time="2025-07-06T23:59:29.443655586Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:59:29.443684 containerd[1456]: time="2025-07-06T23:59:29.443667899Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:59:29.443684 containerd[1456]: time="2025-07-06T23:59:29.443679311Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:59:29.443684 containerd[1456]: time="2025-07-06T23:59:29.443689710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:59:29.444000 containerd[1456]: time="2025-07-06T23:59:29.443704378Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:59:29.444000 containerd[1456]: time="2025-07-06T23:59:29.443719857Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:59:29.444000 containerd[1456]: time="2025-07-06T23:59:29.443753560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 6 23:59:29.444132 containerd[1456]: time="2025-07-06T23:59:29.444041039Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:59:29.444282 containerd[1456]: time="2025-07-06T23:59:29.444137911Z" level=info msg="Connect containerd service" Jul 6 23:59:29.444282 containerd[1456]: time="2025-07-06T23:59:29.444178627Z" level=info msg="using legacy CRI server" Jul 6 23:59:29.444282 containerd[1456]: time="2025-07-06T23:59:29.444188776Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:59:29.444336 containerd[1456]: time="2025-07-06T23:59:29.444296969Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:59:29.445028 containerd[1456]: time="2025-07-06T23:59:29.444905821Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:59:29.445347 containerd[1456]: time="2025-07-06T23:59:29.445225541Z" level=info msg="Start subscribing containerd event" Jul 6 23:59:29.445347 containerd[1456]: time="2025-07-06T23:59:29.445289941Z" level=info msg="Start recovering state" Jul 6 23:59:29.445459 containerd[1456]: time="2025-07-06T23:59:29.445422941Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:59:29.445579 containerd[1456]: time="2025-07-06T23:59:29.445503291Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:59:29.445607 containerd[1456]: time="2025-07-06T23:59:29.445427499Z" level=info msg="Start event monitor" Jul 6 23:59:29.445607 containerd[1456]: time="2025-07-06T23:59:29.445596667Z" level=info msg="Start snapshots syncer" Jul 6 23:59:29.445607 containerd[1456]: time="2025-07-06T23:59:29.445605884Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:59:29.445668 containerd[1456]: time="2025-07-06T23:59:29.445614139Z" level=info msg="Start streaming server" Jul 6 23:59:29.445689 containerd[1456]: time="2025-07-06T23:59:29.445667800Z" level=info msg="containerd successfully booted in 0.038932s" Jul 6 23:59:29.454661 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:59:29.455961 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:59:29.464380 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:59:29.464677 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:59:29.480299 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:59:29.492702 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:59:29.500350 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:59:29.502675 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:59:29.503875 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:59:29.666514 tar[1451]: linux-amd64/README.md Jul 6 23:59:29.684075 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
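The CRI plugin above comes up with the overlayfs snapshotter and runc configured with SystemdCgroup:true, but it logs "failed to load cni during init ... no network config found in /etc/cni/net.d" because no CNI configuration exists yet on first boot; the error clears once a conflist appears in NetworkPluginConfDir. A minimal sketch of installing one is below. The network name and subnet are illustrative assumptions, not values from this boot; a real cluster normally gets its conflist from a CNI addon (flannel, calico, ...) rather than a hand-written file.

```go
// Minimal sketch: drop an illustrative CNI bridge conflist into
// /etc/cni/net.d so the CRI plugin's "no network config found" error
// clears on its next conf sync. Name and subnet below are assumptions.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	// The config dump above shows NetworkPluginConfDir:/etc/cni/net.d with
	// NetworkPluginMaxConfNum:1, so containerd uses only the first conflist
	// (lexicographic order) it finds in this directory.
	if err := os.WriteFile("/etc/cni/net.d/10-containerd-net.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```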
Jul 6 23:59:30.178208 systemd-networkd[1387]: eth0: Gained IPv6LL Jul 6 23:59:30.181585 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:59:30.183555 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:59:30.194250 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 6 23:59:30.197082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:30.199436 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:59:30.219288 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 6 23:59:30.219744 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 6 23:59:30.221553 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:59:30.226145 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:59:31.630204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:31.635866 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:59:31.637170 systemd[1]: Startup finished in 742ms (kernel) + 5.799s (initrd) + 4.767s (userspace) = 11.309s. Jul 6 23:59:31.649505 (kubelet)[1543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:59:32.523965 kubelet[1543]: E0706 23:59:32.523888 1543 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:59:32.527915 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:59:32.528162 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:59:32.528611 systemd[1]: kubelet.service: Consumed 2.243s CPU time. Jul 6 23:59:34.160299 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:59:34.161631 systemd[1]: Started sshd@0-10.0.0.146:22-10.0.0.1:33454.service - OpenSSH per-connection server daemon (10.0.0.1:33454). Jul 6 23:59:34.212823 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 33454 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:59:34.214542 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:34.223388 systemd-logind[1440]: New session 1 of user core. Jul 6 23:59:34.224669 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:59:34.235150 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:59:34.247237 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:59:34.249221 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:59:34.257140 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:59:34.359991 systemd[1560]: Queued start job for default target default.target. Jul 6 23:59:34.371206 systemd[1560]: Created slice app.slice - User Application Slice. Jul 6 23:59:34.371231 systemd[1560]: Reached target paths.target - Paths. Jul 6 23:59:34.371245 systemd[1560]: Reached target timers.target - Timers. 
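The kubelet's first start fails with exit status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is only written by `kubeadm init` or `kubeadm join`, so systemd will keep scheduling restarts (the restart counter climbs later in this log) until bootstrap produces it. A minimal sketch of the same existence check, assuming the standard path:

```go
// Minimal sketch of the pre-flight condition failing above: the kubelet
// exits immediately when /var/lib/kubelet/config.yaml is absent. On a
// kubeadm-managed node that file appears only after `kubeadm init`/`join`,
// which is why kubelet.service crash-loops until then.
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); err != nil {
		fmt.Printf("kubelet would fail: %v\n", err) // mirrors "no such file or directory"
		os.Exit(1)
	}
	fmt.Println("kubelet config present; startup can proceed")
}
```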
Jul 6 23:59:34.372737 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:59:34.384074 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:59:34.384216 systemd[1560]: Reached target sockets.target - Sockets. Jul 6 23:59:34.384236 systemd[1560]: Reached target basic.target - Basic System. Jul 6 23:59:34.384276 systemd[1560]: Reached target default.target - Main User Target. Jul 6 23:59:34.384339 systemd[1560]: Startup finished in 120ms. Jul 6 23:59:34.384661 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:59:34.386171 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:59:34.448194 systemd[1]: Started sshd@1-10.0.0.146:22-10.0.0.1:33468.service - OpenSSH per-connection server daemon (10.0.0.1:33468). Jul 6 23:59:34.488223 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 33468 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:59:34.489662 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:34.493383 systemd-logind[1440]: New session 2 of user core. Jul 6 23:59:34.502055 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:59:34.555680 sshd[1571]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:34.563237 systemd[1]: sshd@1-10.0.0.146:22-10.0.0.1:33468.service: Deactivated successfully. Jul 6 23:59:34.565536 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:59:34.567536 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:59:34.578273 systemd[1]: Started sshd@2-10.0.0.146:22-10.0.0.1:33470.service - OpenSSH per-connection server daemon (10.0.0.1:33470). Jul 6 23:59:34.579294 systemd-logind[1440]: Removed session 2. Jul 6 23:59:34.612697 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 33470 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:59:34.614466 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:34.619168 systemd-logind[1440]: New session 3 of user core. Jul 6 23:59:34.630156 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:59:34.681804 sshd[1578]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:34.698002 systemd[1]: sshd@2-10.0.0.146:22-10.0.0.1:33470.service: Deactivated successfully. Jul 6 23:59:34.699642 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:59:34.701370 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:59:34.702663 systemd[1]: Started sshd@3-10.0.0.146:22-10.0.0.1:33472.service - OpenSSH per-connection server daemon (10.0.0.1:33472). Jul 6 23:59:34.703390 systemd-logind[1440]: Removed session 3. Jul 6 23:59:34.745643 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 33472 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:59:34.747362 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:34.751411 systemd-logind[1440]: New session 4 of user core. Jul 6 23:59:34.761146 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:59:34.815092 sshd[1585]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:34.828037 systemd[1]: sshd@3-10.0.0.146:22-10.0.0.1:33472.service: Deactivated successfully. Jul 6 23:59:34.829885 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:59:34.831501 systemd-logind[1440]: Session 4 logged out. 
Waiting for processes to exit. Jul 6 23:59:34.832750 systemd[1]: Started sshd@4-10.0.0.146:22-10.0.0.1:33474.service - OpenSSH per-connection server daemon (10.0.0.1:33474). Jul 6 23:59:34.833507 systemd-logind[1440]: Removed session 4. Jul 6 23:59:34.871213 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 33474 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:59:34.873005 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:34.877229 systemd-logind[1440]: New session 5 of user core. Jul 6 23:59:34.887139 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:59:34.945938 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:59:34.946349 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:59:34.961751 sudo[1595]: pam_unix(sudo:session): session closed for user root Jul 6 23:59:34.963899 sshd[1592]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:34.981826 systemd[1]: sshd@4-10.0.0.146:22-10.0.0.1:33474.service: Deactivated successfully. Jul 6 23:59:34.983466 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:59:34.984991 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:59:34.997180 systemd[1]: Started sshd@5-10.0.0.146:22-10.0.0.1:33478.service - OpenSSH per-connection server daemon (10.0.0.1:33478). Jul 6 23:59:34.998193 systemd-logind[1440]: Removed session 5. Jul 6 23:59:35.030863 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 33478 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:59:35.032328 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:35.036070 systemd-logind[1440]: New session 6 of user core. Jul 6 23:59:35.046051 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:59:35.099017 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:59:35.099344 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:59:35.102717 sudo[1604]: pam_unix(sudo:session): session closed for user root Jul 6 23:59:35.109326 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 6 23:59:35.109671 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:59:35.128140 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 6 23:59:35.129656 auditctl[1607]: No rules Jul 6 23:59:35.130783 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:59:35.131040 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 6 23:59:35.132667 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:59:35.162341 augenrules[1625]: No rules Jul 6 23:59:35.163878 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:59:35.165234 sudo[1603]: pam_unix(sudo:session): session closed for user root Jul 6 23:59:35.167059 sshd[1600]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:35.178737 systemd[1]: sshd@5-10.0.0.146:22-10.0.0.1:33478.service: Deactivated successfully. Jul 6 23:59:35.180384 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:59:35.181860 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. 
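Every "Accepted publickey" line in this stretch reports the same key fingerprint (RSA SHA256:9QYV+...). That string is the OpenSSH-style base64-encoded SHA-256 hash of the public key blob, so a key can be matched against these log lines without touching the private half. A sketch using golang.org/x/crypto/ssh; it generates a throwaway key purely to demonstrate the format:

```go
// Sketch: compute the OpenSSH-style fingerprint ("SHA256:<base64>") that
// sshd prints in the "Accepted publickey" entries above. A throwaway key
// is generated here for illustration; to check a real key, parse an
// authorized_keys line with ssh.ParseAuthorizedKey instead.
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ssh.FingerprintSHA256(sshPub)) // same format as the log lines
}
```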
Jul 6 23:59:35.189171 systemd[1]: Started sshd@6-10.0.0.146:22-10.0.0.1:33480.service - OpenSSH per-connection server daemon (10.0.0.1:33480). Jul 6 23:59:35.189987 systemd-logind[1440]: Removed session 6. Jul 6 23:59:35.223286 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 33480 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:59:35.225108 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:35.228751 systemd-logind[1440]: New session 7 of user core. Jul 6 23:59:35.238055 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:59:35.290397 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:59:35.290709 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:59:35.779190 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:59:35.779353 (dockerd)[1654]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:59:36.439263 dockerd[1654]: time="2025-07-06T23:59:36.439163671Z" level=info msg="Starting up" Jul 6 23:59:36.983880 systemd[1]: var-lib-docker-metacopy\x2dcheck1816899487-merged.mount: Deactivated successfully. Jul 6 23:59:37.014275 dockerd[1654]: time="2025-07-06T23:59:37.014201432Z" level=info msg="Loading containers: start." Jul 6 23:59:37.122972 kernel: Initializing XFRM netlink socket Jul 6 23:59:37.210179 systemd-networkd[1387]: docker0: Link UP Jul 6 23:59:37.230596 dockerd[1654]: time="2025-07-06T23:59:37.230547358Z" level=info msg="Loading containers: done." Jul 6 23:59:37.253037 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3266073515-merged.mount: Deactivated successfully. Jul 6 23:59:37.255613 dockerd[1654]: time="2025-07-06T23:59:37.255568532Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:59:37.255720 dockerd[1654]: time="2025-07-06T23:59:37.255698856Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 6 23:59:37.255842 dockerd[1654]: time="2025-07-06T23:59:37.255823089Z" level=info msg="Daemon has completed initialization" Jul 6 23:59:37.299093 dockerd[1654]: time="2025-07-06T23:59:37.299012795Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:59:37.299292 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:59:38.188967 containerd[1456]: time="2025-07-06T23:59:38.188918716Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 6 23:59:38.893824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753009432.mount: Deactivated successfully. 
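In this stretch dockerd completes initialization on overlay2 and logs "API listen on /run/docker.sock". A sketch of confirming the daemon answers over that socket with the official Go client; it assumes the github.com/docker/docker/client module is available:

```go
// Sketch: ping the daemon that just logged "API listen on
// /run/docker.sock", over the same unix socket, using the official
// Docker Go SDK (assumed available as github.com/docker/docker/client).
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"),
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("daemon up, negotiated API version %s\n", ping.APIVersion)
}
```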
Jul 6 23:59:40.263133 containerd[1456]: time="2025-07-06T23:59:40.263050544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:40.263709 containerd[1456]: time="2025-07-06T23:59:40.263633447Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 6 23:59:40.264755 containerd[1456]: time="2025-07-06T23:59:40.264697212Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:40.267602 containerd[1456]: time="2025-07-06T23:59:40.267567585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:40.268735 containerd[1456]: time="2025-07-06T23:59:40.268686012Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.079715599s" Jul 6 23:59:40.268735 containerd[1456]: time="2025-07-06T23:59:40.268732580Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 6 23:59:40.269691 containerd[1456]: time="2025-07-06T23:59:40.269658666Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 6 23:59:41.525046 containerd[1456]: time="2025-07-06T23:59:41.524983819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:41.526144 containerd[1456]: time="2025-07-06T23:59:41.526059847Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 6 23:59:41.527746 containerd[1456]: time="2025-07-06T23:59:41.527698019Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:41.530780 containerd[1456]: time="2025-07-06T23:59:41.530741407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:41.531716 containerd[1456]: time="2025-07-06T23:59:41.531686429Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.261996073s" Jul 6 23:59:41.531716 containerd[1456]: time="2025-07-06T23:59:41.531716656Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 6 23:59:41.532554 containerd[1456]: 
time="2025-07-06T23:59:41.532527556Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 6 23:59:42.778395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:59:42.800190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:43.035505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:43.040008 (kubelet)[1873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:59:43.417881 kubelet[1873]: E0706 23:59:43.417719 1873 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:59:43.418513 containerd[1456]: time="2025-07-06T23:59:43.418470412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:43.419485 containerd[1456]: time="2025-07-06T23:59:43.419342478Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 6 23:59:43.420629 containerd[1456]: time="2025-07-06T23:59:43.420606098Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:43.424933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:59:43.425168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:59:43.431315 containerd[1456]: time="2025-07-06T23:59:43.431284654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:43.432270 containerd[1456]: time="2025-07-06T23:59:43.432238132Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.899681901s" Jul 6 23:59:43.432325 containerd[1456]: time="2025-07-06T23:59:43.432270623Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 6 23:59:43.432791 containerd[1456]: time="2025-07-06T23:59:43.432772264Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 6 23:59:44.932411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount240957399.mount: Deactivated successfully. 
Jul 6 23:59:45.321205 containerd[1456]: time="2025-07-06T23:59:45.321064065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:45.321967 containerd[1456]: time="2025-07-06T23:59:45.321883311Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 6 23:59:45.323160 containerd[1456]: time="2025-07-06T23:59:45.323127094Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:45.325505 containerd[1456]: time="2025-07-06T23:59:45.325455170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:45.326040 containerd[1456]: time="2025-07-06T23:59:45.325997436Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.893200146s" Jul 6 23:59:45.326067 containerd[1456]: time="2025-07-06T23:59:45.326038584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 6 23:59:45.326885 containerd[1456]: time="2025-07-06T23:59:45.326853412Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 6 23:59:45.930798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682234102.mount: Deactivated successfully. 
Jul 6 23:59:47.112811 containerd[1456]: time="2025-07-06T23:59:47.112749795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:47.113623 containerd[1456]: time="2025-07-06T23:59:47.113562639Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 6 23:59:47.114660 containerd[1456]: time="2025-07-06T23:59:47.114626344Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:47.117337 containerd[1456]: time="2025-07-06T23:59:47.117302333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:47.118421 containerd[1456]: time="2025-07-06T23:59:47.118391565Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.791510662s" Jul 6 23:59:47.118483 containerd[1456]: time="2025-07-06T23:59:47.118422804Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 6 23:59:47.119024 containerd[1456]: time="2025-07-06T23:59:47.119002882Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:59:47.634555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77457257.mount: Deactivated successfully. 
Jul 6 23:59:47.639480 containerd[1456]: time="2025-07-06T23:59:47.639450817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:47.640159 containerd[1456]: time="2025-07-06T23:59:47.640103652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 6 23:59:47.641199 containerd[1456]: time="2025-07-06T23:59:47.641161205Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:47.643374 containerd[1456]: time="2025-07-06T23:59:47.643331405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:47.644264 containerd[1456]: time="2025-07-06T23:59:47.644226644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 525.194127ms" Jul 6 23:59:47.644327 containerd[1456]: time="2025-07-06T23:59:47.644260247Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:59:47.644889 containerd[1456]: time="2025-07-06T23:59:47.644772427Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 6 23:59:48.239930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3261349511.mount: Deactivated successfully. Jul 6 23:59:51.112306 containerd[1456]: time="2025-07-06T23:59:51.112241389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:51.113002 containerd[1456]: time="2025-07-06T23:59:51.112933046Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 6 23:59:51.114276 containerd[1456]: time="2025-07-06T23:59:51.114238485Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:51.117295 containerd[1456]: time="2025-07-06T23:59:51.117269038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:59:51.118585 containerd[1456]: time="2025-07-06T23:59:51.118532077Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.473716859s" Jul 6 23:59:51.118585 containerd[1456]: time="2025-07-06T23:59:51.118581359Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 6 23:59:53.675474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
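Each completed pull pairs a "bytes read" counter with a wall-clock duration, which yields the effective registry throughput; for the etcd:3.5.21-0 pull above, 58,247,175 bytes in 3.4737 s works out to roughly 16 MiB/s. The trivial arithmetic, with the constants copied from those two log fields:

```go
// Sketch: effective transfer rate for the etcd pull above, computed from
// "bytes read=58247175" and the "in 3.473716859s" duration.
package main

import "fmt"

func main() {
	const (
		bytesRead = 58247175    // from the "stop pulling image" entry
		seconds   = 3.473716859 // from the "Pulled image" entry
	)
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ≈ 16.0 MiB/s
}
```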
Jul 6 23:59:53.686147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:53.870608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:53.878196 (kubelet)[2034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:59:54.024845 kubelet[2034]: E0706 23:59:54.024460 2034 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:59:54.028702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:59:54.028923 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:59:54.675430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:54.687168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:54.712809 systemd[1]: Reloading requested from client PID 2050 ('systemctl') (unit session-7.scope)... Jul 6 23:59:54.712826 systemd[1]: Reloading... Jul 6 23:59:54.791967 zram_generator::config[2089]: No configuration found. Jul 6 23:59:55.275714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:59:55.354088 systemd[1]: Reloading finished in 640 ms. Jul 6 23:59:55.406218 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:59:55.406309 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:59:55.406574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:55.409414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:59:55.574440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:59:55.578972 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:59:55.626919 kubelet[2138]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:59:55.626919 kubelet[2138]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:59:55.626919 kubelet[2138]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
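Once the config file exists, the kubelet starts but warns that --container-runtime-endpoint, --pod-infra-container-image, and --volume-plugin-dir are deprecated flags that belong in the KubeletConfiguration file. A minimal sketch of what the corresponding /var/lib/kubelet/config.yaml stanza might look like is below; the exact field set is an assumption pieced together from this boot (containerd socket, the flexvolume directory and static pod path logged further down), not a dump of the real file, and on a real node kubeadm writes this file:

```go
// Sketch: the flag-to-config-file migration the deprecation warnings above
// point at. Field values are assumptions inferred from this boot log; a
// kubeadm-managed node generates this file itself rather than via Go.
package main

import (
	"log"
	"os"
)

const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
cgroupDriver: systemd
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
}
```

(--pod-infra-container-image has no config-file equivalent yet, which is why the warning says the image garbage collector will take sandbox image information from CRI instead.)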
Jul 6 23:59:55.627327 kubelet[2138]: I0706 23:59:55.627067 2138 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:59:56.712609 kubelet[2138]: I0706 23:59:56.712337 2138 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:59:56.712609 kubelet[2138]: I0706 23:59:56.712379 2138 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:59:56.713526 kubelet[2138]: I0706 23:59:56.713489 2138 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:59:56.735215 kubelet[2138]: I0706 23:59:56.735000 2138 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:59:56.735465 kubelet[2138]: E0706 23:59:56.735416 2138 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 6 23:59:56.753780 kubelet[2138]: E0706 23:59:56.753706 2138 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:59:56.753780 kubelet[2138]: I0706 23:59:56.753779 2138 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:59:56.759441 kubelet[2138]: I0706 23:59:56.759417 2138 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:59:56.759720 kubelet[2138]: I0706 23:59:56.759686 2138 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:59:56.759935 kubelet[2138]: I0706 23:59:56.759712 2138 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:59:56.760057 kubelet[2138]: I0706 23:59:56.759968 2138 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:59:56.760057 kubelet[2138]: I0706 23:59:56.759981 2138 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:59:56.760789 kubelet[2138]: I0706 23:59:56.760760 2138 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:59:56.762894 kubelet[2138]: I0706 23:59:56.762849 2138 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:59:56.762894 kubelet[2138]: I0706 23:59:56.762885 2138 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:59:56.763067 kubelet[2138]: I0706 23:59:56.762927 2138 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:59:56.765021 kubelet[2138]: I0706 23:59:56.764866 2138 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:59:56.780866 kubelet[2138]: E0706 23:59:56.780796 2138 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:59:56.783495 kubelet[2138]: E0706 23:59:56.783425 2138 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 23:59:56.788439 
kubelet[2138]: I0706 23:59:56.788400 2138 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:59:56.789832 kubelet[2138]: I0706 23:59:56.789791 2138 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:59:56.790472 kubelet[2138]: W0706 23:59:56.790444 2138 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:59:56.794320 kubelet[2138]: I0706 23:59:56.794296 2138 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:59:56.794368 kubelet[2138]: I0706 23:59:56.794360 2138 server.go:1289] "Started kubelet" Jul 6 23:59:56.794971 kubelet[2138]: I0706 23:59:56.794455 2138 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:59:56.795169 kubelet[2138]: I0706 23:59:56.795085 2138 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:59:56.795256 kubelet[2138]: I0706 23:59:56.795241 2138 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:59:56.797086 kubelet[2138]: I0706 23:59:56.796182 2138 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:59:56.797086 kubelet[2138]: I0706 23:59:56.796213 2138 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:59:56.797969 kubelet[2138]: I0706 23:59:56.797501 2138 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:59:56.800105 kubelet[2138]: E0706 23:59:56.799620 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:59:56.800105 kubelet[2138]: I0706 23:59:56.799688 2138 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:59:56.800105 kubelet[2138]: I0706 23:59:56.799901 2138 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:59:56.800105 kubelet[2138]: I0706 23:59:56.800086 2138 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:59:56.800288 kubelet[2138]: E0706 23:59:56.798365 2138 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.146:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.146:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcf0425f93ec6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:59:56.79432263 +0000 UTC m=+1.207864338,LastTimestamp:2025-07-06 23:59:56.79432263 +0000 UTC m=+1.207864338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:59:56.800556 kubelet[2138]: E0706 23:59:56.800497 2138 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 
23:59:56.800556 kubelet[2138]: E0706 23:59:56.800546 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="200ms" Jul 6 23:59:56.800985 kubelet[2138]: I0706 23:59:56.800967 2138 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:59:56.801358 kubelet[2138]: I0706 23:59:56.801336 2138 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:59:56.801485 kubelet[2138]: E0706 23:59:56.801369 2138 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:59:56.802654 kubelet[2138]: I0706 23:59:56.802623 2138 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:59:56.804562 kubelet[2138]: I0706 23:59:56.804388 2138 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:59:56.816791 kubelet[2138]: I0706 23:59:56.816763 2138 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:59:56.816791 kubelet[2138]: I0706 23:59:56.816784 2138 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:59:56.816908 kubelet[2138]: I0706 23:59:56.816811 2138 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:59:56.821970 kubelet[2138]: I0706 23:59:56.821886 2138 policy_none.go:49] "None policy: Start" Jul 6 23:59:56.821970 kubelet[2138]: I0706 23:59:56.821921 2138 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:59:56.821970 kubelet[2138]: I0706 23:59:56.821969 2138 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:59:56.824411 kubelet[2138]: I0706 23:59:56.824388 2138 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:59:56.824512 kubelet[2138]: I0706 23:59:56.824491 2138 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:59:56.824584 kubelet[2138]: I0706 23:59:56.824573 2138 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:59:56.824849 kubelet[2138]: I0706 23:59:56.824636 2138 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:59:56.824849 kubelet[2138]: E0706 23:59:56.824681 2138 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:59:56.825293 kubelet[2138]: E0706 23:59:56.825270 2138 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:59:56.829442 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:59:56.846759 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:59:56.853020 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
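The NodeConfig dump earlier in this stretch set the hard eviction thresholds the eviction manager (started just below) will enforce: memory.available<100Mi plus percentage thresholds on nodefs.available, nodefs.inodesFree, imagefs.available, and imagefs.inodesFree. A rough sketch of the memory signal follows; note the kubelet actually derives memory.available from cgroup memory stats, so reading MemAvailable from /proc/meminfo is only a same-order approximation for illustration:

```go
// Rough sketch of the memory.available<100Mi hard-eviction signal from the
// HardEvictionThresholds list above. The kubelet computes the real signal
// from cgroup stats; /proc/meminfo's MemAvailable is an approximation.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

func main() {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	const thresholdKiB = 100 * 1024 // 100Mi, per the threshold entry above
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "MemAvailable:" {
			kib, _ := strconv.Atoi(fields[1]) // value is reported in kB
			fmt.Printf("memory.available ≈ %d KiB; under pressure: %v\n", kib, kib < thresholdKiB)
			return
		}
	}
	log.Fatal("MemAvailable not found")
}
```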
Jul 6 23:59:56.863920 kubelet[2138]: E0706 23:59:56.863879 2138 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:59:56.864171 kubelet[2138]: I0706 23:59:56.864145 2138 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:59:56.864202 kubelet[2138]: I0706 23:59:56.864173 2138 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:59:56.865472 kubelet[2138]: I0706 23:59:56.864987 2138 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:59:56.865756 kubelet[2138]: E0706 23:59:56.865726 2138 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:59:56.865870 kubelet[2138]: E0706 23:59:56.865856 2138 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 6 23:59:56.936305 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 6 23:59:56.960711 kubelet[2138]: E0706 23:59:56.960674 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:59:56.964740 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 6 23:59:56.965559 kubelet[2138]: I0706 23:59:56.965523 2138 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:59:56.966377 kubelet[2138]: E0706 23:59:56.966148 2138 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Jul 6 23:59:56.967523 kubelet[2138]: E0706 23:59:56.967498 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:59:56.969324 systemd[1]: Created slice kubepods-burstable-pod7199498d9c6c2af142f4c9722c909c7a.slice - libcontainer container kubepods-burstable-pod7199498d9c6c2af142f4c9722c909c7a.slice. 
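All the reflector, lease, and node-registration errors in this stretch share one cause: "dial tcp 10.0.0.146:6443: connect: connection refused". The API server cannot answer yet because the kubelet itself must first launch the kube-apiserver static pod from /etc/kubernetes/manifests; the pod slices created above and the RunPodSandbox entries below are exactly that bootstrap, after which the dial succeeds and the errors stop. A sketch of the same reachability probe:

```go
// Sketch: the probe behind every "connect: connection refused" error in
// this stretch. 10.0.0.146:6443 only starts accepting connections once the
// kubelet has brought up the kube-apiserver static pod.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.146:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not up yet:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```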
Jul 6 23:59:56.970988 kubelet[2138]: E0706 23:59:56.970958 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:59:57.001299 kubelet[2138]: I0706 23:59:57.001210 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:59:57.001299 kubelet[2138]: I0706 23:59:57.001307 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:59:57.001299 kubelet[2138]: I0706 23:59:57.001333 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:59:57.001574 kubelet[2138]: I0706 23:59:57.001355 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:59:57.001574 kubelet[2138]: I0706 23:59:57.001380 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7199498d9c6c2af142f4c9722c909c7a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7199498d9c6c2af142f4c9722c909c7a\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:59:57.001574 kubelet[2138]: I0706 23:59:57.001400 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7199498d9c6c2af142f4c9722c909c7a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7199498d9c6c2af142f4c9722c909c7a\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:59:57.001574 kubelet[2138]: I0706 23:59:57.001415 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:59:57.001574 kubelet[2138]: I0706 23:59:57.001432 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:59:57.001750 kubelet[2138]: E0706 23:59:57.001570 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="400ms" Jul 6 23:59:57.102168 kubelet[2138]: I0706 23:59:57.102126 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7199498d9c6c2af142f4c9722c909c7a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7199498d9c6c2af142f4c9722c909c7a\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:59:57.168544 kubelet[2138]: I0706 23:59:57.168496 2138 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:59:57.169027 kubelet[2138]: E0706 23:59:57.168980 2138 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Jul 6 23:59:57.262126 kubelet[2138]: E0706 23:59:57.261930 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:59:57.262828 containerd[1456]: time="2025-07-06T23:59:57.262776658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 6 23:59:57.268015 kubelet[2138]: E0706 23:59:57.267987 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:59:57.268432 containerd[1456]: time="2025-07-06T23:59:57.268398231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 6 23:59:57.271662 kubelet[2138]: E0706 23:59:57.271640 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:59:57.272106 containerd[1456]: time="2025-07-06T23:59:57.272068955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7199498d9c6c2af142f4c9722c909c7a,Namespace:kube-system,Attempt:0,}" Jul 6 23:59:57.402742 kubelet[2138]: E0706 23:59:57.402686 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="800ms" Jul 6 23:59:57.570653 kubelet[2138]: I0706 23:59:57.570541 2138 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:59:57.570962 kubelet[2138]: E0706 23:59:57.570921 2138 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Jul 6 23:59:57.718420 kubelet[2138]: E0706 23:59:57.718369 2138 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:59:57.730060 kubelet[2138]: 
E0706 23:59:57.730023 2138 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:59:57.760019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551694419.mount: Deactivated successfully. Jul 6 23:59:57.768110 containerd[1456]: time="2025-07-06T23:59:57.768060336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:57.769207 containerd[1456]: time="2025-07-06T23:59:57.769161401Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:57.770092 containerd[1456]: time="2025-07-06T23:59:57.770065917Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:57.770870 containerd[1456]: time="2025-07-06T23:59:57.770812818Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:59:57.771743 containerd[1456]: time="2025-07-06T23:59:57.771693038Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:59:57.772725 containerd[1456]: time="2025-07-06T23:59:57.772688335Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:59:57.773587 containerd[1456]: time="2025-07-06T23:59:57.773556994Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:57.777271 containerd[1456]: time="2025-07-06T23:59:57.777231535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:59:57.778057 containerd[1456]: time="2025-07-06T23:59:57.778026476Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 505.877511ms" Jul 6 23:59:57.779210 containerd[1456]: time="2025-07-06T23:59:57.779178727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.705565ms" Jul 6 23:59:57.780345 containerd[1456]: time="2025-07-06T23:59:57.780305981Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 517.40015ms" Jul 6 23:59:57.984580 kubelet[2138]: E0706 23:59:57.984437 2138 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:59:58.015167 containerd[1456]: time="2025-07-06T23:59:58.015006448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:58.015167 containerd[1456]: time="2025-07-06T23:59:58.015110954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:58.015167 containerd[1456]: time="2025-07-06T23:59:58.015148695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:58.016026 containerd[1456]: time="2025-07-06T23:59:58.015351214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:58.017166 containerd[1456]: time="2025-07-06T23:59:58.017006819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:58.017166 containerd[1456]: time="2025-07-06T23:59:58.017089364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:58.017371 containerd[1456]: time="2025-07-06T23:59:58.017105634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:58.017593 containerd[1456]: time="2025-07-06T23:59:58.017531854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:58.020382 containerd[1456]: time="2025-07-06T23:59:58.020141748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:59:58.020382 containerd[1456]: time="2025-07-06T23:59:58.020190890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:59:58.020382 containerd[1456]: time="2025-07-06T23:59:58.020205478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:58.020382 containerd[1456]: time="2025-07-06T23:59:58.020269408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:59:58.087088 systemd[1]: Started cri-containerd-fe08b557d31ea4811519e16b130d8e88d37c75775607c3a8ffba5ee5231aa1f3.scope - libcontainer container fe08b557d31ea4811519e16b130d8e88d37c75775607c3a8ffba5ee5231aa1f3. Jul 6 23:59:58.091482 systemd[1]: Started cri-containerd-d8928997d8d931c3b16831934a40abf7330eea856a552a406a675d2f41d253a2.scope - libcontainer container d8928997d8d931c3b16831934a40abf7330eea856a552a406a675d2f41d253a2. 
Jul 6 23:59:58.092935 systemd[1]: Started cri-containerd-eaa9d6bc311d56b9ef527f2ef3da1f0fcdd00dec3d82fb45ae7eb7fe802674eb.scope - libcontainer container eaa9d6bc311d56b9ef527f2ef3da1f0fcdd00dec3d82fb45ae7eb7fe802674eb. Jul 6 23:59:58.188851 containerd[1456]: time="2025-07-06T23:59:58.188800373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe08b557d31ea4811519e16b130d8e88d37c75775607c3a8ffba5ee5231aa1f3\"" Jul 6 23:59:58.191026 containerd[1456]: time="2025-07-06T23:59:58.190998736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7199498d9c6c2af142f4c9722c909c7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaa9d6bc311d56b9ef527f2ef3da1f0fcdd00dec3d82fb45ae7eb7fe802674eb\"" Jul 6 23:59:58.191170 kubelet[2138]: E0706 23:59:58.191133 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:59:58.192226 kubelet[2138]: E0706 23:59:58.192181 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:59:58.194632 containerd[1456]: time="2025-07-06T23:59:58.194593377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8928997d8d931c3b16831934a40abf7330eea856a552a406a675d2f41d253a2\"" Jul 6 23:59:58.195097 kubelet[2138]: E0706 23:59:58.195078 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:59:58.197723 containerd[1456]: time="2025-07-06T23:59:58.197688742Z" level=info msg="CreateContainer within sandbox \"fe08b557d31ea4811519e16b130d8e88d37c75775607c3a8ffba5ee5231aa1f3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:59:58.200813 containerd[1456]: time="2025-07-06T23:59:58.200766404Z" level=info msg="CreateContainer within sandbox \"eaa9d6bc311d56b9ef527f2ef3da1f0fcdd00dec3d82fb45ae7eb7fe802674eb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:59:58.203376 containerd[1456]: time="2025-07-06T23:59:58.203356000Z" level=info msg="CreateContainer within sandbox \"d8928997d8d931c3b16831934a40abf7330eea856a552a406a675d2f41d253a2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:59:58.203445 kubelet[2138]: E0706 23:59:58.203408 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="1.6s" Jul 6 23:59:58.216869 containerd[1456]: time="2025-07-06T23:59:58.216765388Z" level=info msg="CreateContainer within sandbox \"fe08b557d31ea4811519e16b130d8e88d37c75775607c3a8ffba5ee5231aa1f3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0a44e76a69eb8d85bff213972fd14fa8ad51bdb8f7c1514d3cbb014917ca3e8a\"" Jul 6 23:59:58.217229 containerd[1456]: time="2025-07-06T23:59:58.217203549Z" level=info msg="StartContainer for \"0a44e76a69eb8d85bff213972fd14fa8ad51bdb8f7c1514d3cbb014917ca3e8a\"" Jul 6 
23:59:58.226662 containerd[1456]: time="2025-07-06T23:59:58.226638313Z" level=info msg="CreateContainer within sandbox \"eaa9d6bc311d56b9ef527f2ef3da1f0fcdd00dec3d82fb45ae7eb7fe802674eb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4508957c734863b289a1ba1cd85290f22f3f0fd11f02dd22219be4313e67538b\"" Jul 6 23:59:58.227924 containerd[1456]: time="2025-07-06T23:59:58.227872277Z" level=info msg="StartContainer for \"4508957c734863b289a1ba1cd85290f22f3f0fd11f02dd22219be4313e67538b\"" Jul 6 23:59:58.232143 containerd[1456]: time="2025-07-06T23:59:58.232086420Z" level=info msg="CreateContainer within sandbox \"d8928997d8d931c3b16831934a40abf7330eea856a552a406a675d2f41d253a2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fa20249ac9480c875e89967d78e0334bb5efc09f946ad7c2946f67ced12ccfad\"" Jul 6 23:59:58.232773 containerd[1456]: time="2025-07-06T23:59:58.232743863Z" level=info msg="StartContainer for \"fa20249ac9480c875e89967d78e0334bb5efc09f946ad7c2946f67ced12ccfad\"" Jul 6 23:59:58.248612 systemd[1]: Started cri-containerd-0a44e76a69eb8d85bff213972fd14fa8ad51bdb8f7c1514d3cbb014917ca3e8a.scope - libcontainer container 0a44e76a69eb8d85bff213972fd14fa8ad51bdb8f7c1514d3cbb014917ca3e8a. Jul 6 23:59:58.257191 systemd[1]: Started cri-containerd-4508957c734863b289a1ba1cd85290f22f3f0fd11f02dd22219be4313e67538b.scope - libcontainer container 4508957c734863b289a1ba1cd85290f22f3f0fd11f02dd22219be4313e67538b. Jul 6 23:59:58.269148 systemd[1]: Started cri-containerd-fa20249ac9480c875e89967d78e0334bb5efc09f946ad7c2946f67ced12ccfad.scope - libcontainer container fa20249ac9480c875e89967d78e0334bb5efc09f946ad7c2946f67ced12ccfad. Jul 6 23:59:58.301264 containerd[1456]: time="2025-07-06T23:59:58.301179745Z" level=info msg="StartContainer for \"0a44e76a69eb8d85bff213972fd14fa8ad51bdb8f7c1514d3cbb014917ca3e8a\" returns successfully" Jul 6 23:59:58.308154 containerd[1456]: time="2025-07-06T23:59:58.308099462Z" level=info msg="StartContainer for \"4508957c734863b289a1ba1cd85290f22f3f0fd11f02dd22219be4313e67538b\" returns successfully" Jul 6 23:59:58.324996 kubelet[2138]: E0706 23:59:58.324882 2138 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 23:59:58.325805 containerd[1456]: time="2025-07-06T23:59:58.325282316Z" level=info msg="StartContainer for \"fa20249ac9480c875e89967d78e0334bb5efc09f946ad7c2946f67ced12ccfad\" returns successfully" Jul 6 23:59:58.372877 kubelet[2138]: I0706 23:59:58.372836 2138 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:59:58.833690 kubelet[2138]: E0706 23:59:58.833640 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:59:58.834161 kubelet[2138]: E0706 23:59:58.833788 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:59:58.855453 kubelet[2138]: E0706 23:59:58.855418 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:59:58.855587 
kubelet[2138]: E0706 23:59:58.855562 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:59:58.858674 kubelet[2138]: E0706 23:59:58.858649 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:59:58.858779 kubelet[2138]: E0706 23:59:58.858756 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:59:59.860806 kubelet[2138]: E0706 23:59:59.860769 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:59:59.861285 kubelet[2138]: E0706 23:59:59.860904 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:59:59.861285 kubelet[2138]: E0706 23:59:59.860997 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:59:59.861285 kubelet[2138]: E0706 23:59:59.861163 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:00.391860 kubelet[2138]: E0707 00:00:00.391826 2138 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 00:00:00.527151 kubelet[2138]: E0707 00:00:00.527032 2138 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.184fcf0425f93ec6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:59:56.79432263 +0000 UTC m=+1.207864338,LastTimestamp:2025-07-06 23:59:56.79432263 +0000 UTC m=+1.207864338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 00:00:00.687804 kubelet[2138]: I0707 00:00:00.686919 2138 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 00:00:00.700918 kubelet[2138]: I0707 00:00:00.700884 2138 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:00.707239 kubelet[2138]: E0707 00:00:00.707199 2138 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:00.707239 kubelet[2138]: I0707 00:00:00.707224 2138 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:00:00.708709 kubelet[2138]: E0707 00:00:00.708668 2138 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-localhost" Jul 7 00:00:00.708709 kubelet[2138]: I0707 00:00:00.708697 2138 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:00.710305 kubelet[2138]: E0707 00:00:00.710268 2138 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:00.768066 kubelet[2138]: I0707 00:00:00.768029 2138 apiserver.go:52] "Watching apiserver" Jul 7 00:00:00.800090 kubelet[2138]: I0707 00:00:00.800057 2138 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:00:02.525324 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Jul 7 00:00:02.527211 systemd[1]: Starting mdadm.service - Initiates a check run of an MD array's redundancy information.... Jul 7 00:00:02.533893 systemd[1]: logrotate.service: Deactivated successfully. Jul 7 00:00:02.541690 systemd[1]: mdadm.service: Deactivated successfully. Jul 7 00:00:02.542001 systemd[1]: Finished mdadm.service - Initiates a check run of an MD array's redundancy information.. Jul 7 00:00:02.544933 systemd[1]: Reloading requested from client PID 2430 ('systemctl') (unit session-7.scope)... Jul 7 00:00:02.544972 systemd[1]: Reloading... Jul 7 00:00:02.626007 zram_generator::config[2470]: No configuration found. Jul 7 00:00:02.735827 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:00:02.827010 systemd[1]: Reloading finished in 281 ms. Jul 7 00:00:02.869678 kubelet[2138]: I0707 00:00:02.869634 2138 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:00:02.869710 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:00:02.886463 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:00:02.886739 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:00:02.886786 systemd[1]: kubelet.service: Consumed 1.349s CPU time, 136.0M memory peak, 0B memory swap peak. Jul 7 00:00:02.895148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:00:03.070042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:00:03.076062 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:00:03.117399 kubelet[2515]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:00:03.117399 kubelet[2515]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:00:03.117399 kubelet[2515]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 00:00:03.117764 kubelet[2515]: I0707 00:00:03.117378 2515 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:00:03.125708 kubelet[2515]: I0707 00:00:03.125666 2515 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 00:00:03.125708 kubelet[2515]: I0707 00:00:03.125693 2515 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:00:03.125915 kubelet[2515]: I0707 00:00:03.125897 2515 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 00:00:03.127168 kubelet[2515]: I0707 00:00:03.127144 2515 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 7 00:00:03.131076 kubelet[2515]: I0707 00:00:03.131059 2515 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:00:03.150302 kubelet[2515]: E0707 00:00:03.150261 2515 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 00:00:03.150302 kubelet[2515]: I0707 00:00:03.150299 2515 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 00:00:03.155316 kubelet[2515]: I0707 00:00:03.155291 2515 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 00:00:03.155519 kubelet[2515]: I0707 00:00:03.155487 2515 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:00:03.155672 kubelet[2515]: I0707 00:00:03.155511 2515 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:00:03.155760 kubelet[2515]: I0707 00:00:03.155675 2515 topology_manager.go:138] "Creating topology manager with 
none policy" Jul 7 00:00:03.155760 kubelet[2515]: I0707 00:00:03.155684 2515 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 00:00:03.155760 kubelet[2515]: I0707 00:00:03.155727 2515 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:00:03.155914 kubelet[2515]: I0707 00:00:03.155903 2515 kubelet.go:480] "Attempting to sync node with API server" Jul 7 00:00:03.155961 kubelet[2515]: I0707 00:00:03.155919 2515 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:00:03.155961 kubelet[2515]: I0707 00:00:03.155953 2515 kubelet.go:386] "Adding apiserver pod source" Jul 7 00:00:03.156005 kubelet[2515]: I0707 00:00:03.155969 2515 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:00:03.156844 kubelet[2515]: I0707 00:00:03.156775 2515 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 00:00:03.157917 kubelet[2515]: I0707 00:00:03.157490 2515 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 00:00:03.161107 kubelet[2515]: I0707 00:00:03.161084 2515 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:00:03.161164 kubelet[2515]: I0707 00:00:03.161137 2515 server.go:1289] "Started kubelet" Jul 7 00:00:03.166981 kubelet[2515]: I0707 00:00:03.164817 2515 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:00:03.167891 kubelet[2515]: E0707 00:00:03.167858 2515 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:00:03.168482 kubelet[2515]: I0707 00:00:03.168440 2515 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:00:03.169566 kubelet[2515]: I0707 00:00:03.169482 2515 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:00:03.169793 kubelet[2515]: I0707 00:00:03.169769 2515 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:00:03.169922 kubelet[2515]: I0707 00:00:03.169900 2515 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:00:03.170022 kubelet[2515]: I0707 00:00:03.169992 2515 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:00:03.170071 kubelet[2515]: I0707 00:00:03.170060 2515 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:00:03.170226 kubelet[2515]: I0707 00:00:03.170209 2515 server.go:317] "Adding debug handlers to kubelet server" Jul 7 00:00:03.170977 kubelet[2515]: I0707 00:00:03.170938 2515 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:00:03.172029 kubelet[2515]: I0707 00:00:03.172009 2515 factory.go:223] Registration of the systemd container factory successfully Jul 7 00:00:03.172201 kubelet[2515]: I0707 00:00:03.172178 2515 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:00:03.174275 kubelet[2515]: I0707 00:00:03.174230 2515 factory.go:223] Registration of the containerd container factory successfully Jul 7 00:00:03.183328 kubelet[2515]: I0707 00:00:03.183278 2515 kubelet_network_linux.go:49] 
"Initialized iptables rules." protocol="IPv4" Jul 7 00:00:03.184421 kubelet[2515]: I0707 00:00:03.184405 2515 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 00:00:03.184421 kubelet[2515]: I0707 00:00:03.184423 2515 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 00:00:03.184525 kubelet[2515]: I0707 00:00:03.184511 2515 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 00:00:03.184525 kubelet[2515]: I0707 00:00:03.184524 2515 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 00:00:03.184587 kubelet[2515]: E0707 00:00:03.184567 2515 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:00:03.207495 kubelet[2515]: I0707 00:00:03.207470 2515 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:00:03.207495 kubelet[2515]: I0707 00:00:03.207485 2515 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:00:03.207495 kubelet[2515]: I0707 00:00:03.207503 2515 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:00:03.207674 kubelet[2515]: I0707 00:00:03.207636 2515 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:00:03.207674 kubelet[2515]: I0707 00:00:03.207648 2515 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:00:03.207674 kubelet[2515]: I0707 00:00:03.207663 2515 policy_none.go:49] "None policy: Start" Jul 7 00:00:03.207674 kubelet[2515]: I0707 00:00:03.207673 2515 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:00:03.207754 kubelet[2515]: I0707 00:00:03.207682 2515 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:00:03.207777 kubelet[2515]: I0707 00:00:03.207771 2515 state_mem.go:75] "Updated machine memory state" Jul 7 00:00:03.211272 kubelet[2515]: E0707 00:00:03.211251 2515 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 00:00:03.211418 kubelet[2515]: I0707 00:00:03.211405 2515 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:00:03.211459 kubelet[2515]: I0707 00:00:03.211420 2515 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:00:03.211987 kubelet[2515]: I0707 00:00:03.211635 2515 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:00:03.214016 kubelet[2515]: E0707 00:00:03.212225 2515 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 00:00:03.286743 kubelet[2515]: I0707 00:00:03.286687 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:03.286869 kubelet[2515]: I0707 00:00:03.286826 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:03.286869 kubelet[2515]: I0707 00:00:03.286841 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:00:03.317508 kubelet[2515]: I0707 00:00:03.317474 2515 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:00:03.322223 kubelet[2515]: I0707 00:00:03.322202 2515 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 7 00:00:03.322309 kubelet[2515]: I0707 00:00:03.322260 2515 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 00:00:03.370337 kubelet[2515]: I0707 00:00:03.370214 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7199498d9c6c2af142f4c9722c909c7a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7199498d9c6c2af142f4c9722c909c7a\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:03.370337 kubelet[2515]: I0707 00:00:03.370249 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7199498d9c6c2af142f4c9722c909c7a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7199498d9c6c2af142f4c9722c909c7a\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:03.370337 kubelet[2515]: I0707 00:00:03.370271 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:03.370337 kubelet[2515]: I0707 00:00:03.370287 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:03.370541 kubelet[2515]: I0707 00:00:03.370362 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:03.370541 kubelet[2515]: I0707 00:00:03.370403 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 7 00:00:03.370541 kubelet[2515]: I0707 00:00:03.370425 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7199498d9c6c2af142f4c9722c909c7a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7199498d9c6c2af142f4c9722c909c7a\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:03.370541 kubelet[2515]: I0707 00:00:03.370441 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:03.370541 kubelet[2515]: I0707 00:00:03.370454 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:03.591593 kubelet[2515]: E0707 00:00:03.591559 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:03.591766 kubelet[2515]: E0707 00:00:03.591724 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:03.592085 kubelet[2515]: E0707 00:00:03.592044 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:04.156937 kubelet[2515]: I0707 00:00:04.156900 2515 apiserver.go:52] "Watching apiserver" Jul 7 00:00:04.171010 kubelet[2515]: I0707 00:00:04.170982 2515 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:00:04.193746 kubelet[2515]: I0707 00:00:04.193706 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:04.194478 kubelet[2515]: I0707 00:00:04.194308 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:00:04.194478 kubelet[2515]: I0707 00:00:04.194346 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:04.199373 kubelet[2515]: E0707 00:00:04.199334 2515 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:04.199625 kubelet[2515]: E0707 00:00:04.199503 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:04.200352 kubelet[2515]: E0707 00:00:04.200322 2515 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 7 00:00:04.200402 kubelet[2515]: E0707 00:00:04.200385 2515 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:04.200450 kubelet[2515]: E0707 00:00:04.200424 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:04.200586 kubelet[2515]: E0707 00:00:04.200521 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:04.214902 kubelet[2515]: I0707 00:00:04.214820 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.214792677 podStartE2EDuration="1.214792677s" podCreationTimestamp="2025-07-07 00:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:00:04.214572355 +0000 UTC m=+1.131305250" watchObservedRunningTime="2025-07-07 00:00:04.214792677 +0000 UTC m=+1.131525562" Jul 7 00:00:04.222106 kubelet[2515]: I0707 00:00:04.222042 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.222022598 podStartE2EDuration="1.222022598s" podCreationTimestamp="2025-07-07 00:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:00:04.221802616 +0000 UTC m=+1.138535511" watchObservedRunningTime="2025-07-07 00:00:04.222022598 +0000 UTC m=+1.138755493" Jul 7 00:00:04.235172 kubelet[2515]: I0707 00:00:04.235097 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.235073219 podStartE2EDuration="1.235073219s" podCreationTimestamp="2025-07-07 00:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:00:04.228173081 +0000 UTC m=+1.144905976" watchObservedRunningTime="2025-07-07 00:00:04.235073219 +0000 UTC m=+1.151806115" Jul 7 00:00:05.194966 kubelet[2515]: E0707 00:00:05.194912 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:05.195438 kubelet[2515]: E0707 00:00:05.195413 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:05.195619 kubelet[2515]: E0707 00:00:05.195592 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:06.196886 kubelet[2515]: E0707 00:00:06.196844 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:07.929845 kubelet[2515]: I0707 00:00:07.929807 2515 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:00:07.930318 kubelet[2515]: I0707 00:00:07.930218 2515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:00:07.930358 containerd[1456]: time="2025-07-07T00:00:07.930087767Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
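The pod_startup_latency_tracker entries above are plain timestamp arithmetic: podStartE2EDuration is the watch-observed running time minus the pod creation timestamp. A short check reproducing the kube-apiserver-localhost numbers from the log; the layout string matches Go's default time formatting, which is what these log fields use:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	// Values copied from the kube-apiserver-localhost entry above.
	created, _ := time.Parse(layout, "2025-07-07 00:00:03 +0000 UTC")
	running, _ := time.Parse(layout, "2025-07-07 00:00:04.214792677 +0000 UTC")
	// Prints 1.214792677s, matching podStartE2EDuration in the log.
	fmt.Println(running.Sub(created))
}
```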
Jul 7 00:00:08.204750 systemd[1]: Created slice kubepods-besteffort-pod4a3b46de_5a11_41bd_8f20_a06402405223.slice - libcontainer container kubepods-besteffort-pod4a3b46de_5a11_41bd_8f20_a06402405223.slice. Jul 7 00:00:08.300886 kubelet[2515]: I0707 00:00:08.300826 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a3b46de-5a11-41bd-8f20-a06402405223-kube-proxy\") pod \"kube-proxy-h6lst\" (UID: \"4a3b46de-5a11-41bd-8f20-a06402405223\") " pod="kube-system/kube-proxy-h6lst" Jul 7 00:00:08.300886 kubelet[2515]: I0707 00:00:08.300867 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56c8z\" (UniqueName: \"kubernetes.io/projected/4a3b46de-5a11-41bd-8f20-a06402405223-kube-api-access-56c8z\") pod \"kube-proxy-h6lst\" (UID: \"4a3b46de-5a11-41bd-8f20-a06402405223\") " pod="kube-system/kube-proxy-h6lst" Jul 7 00:00:08.300886 kubelet[2515]: I0707 00:00:08.300887 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a3b46de-5a11-41bd-8f20-a06402405223-xtables-lock\") pod \"kube-proxy-h6lst\" (UID: \"4a3b46de-5a11-41bd-8f20-a06402405223\") " pod="kube-system/kube-proxy-h6lst" Jul 7 00:00:08.301114 kubelet[2515]: I0707 00:00:08.300902 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a3b46de-5a11-41bd-8f20-a06402405223-lib-modules\") pod \"kube-proxy-h6lst\" (UID: \"4a3b46de-5a11-41bd-8f20-a06402405223\") " pod="kube-system/kube-proxy-h6lst" Jul 7 00:00:08.390261 kubelet[2515]: E0707 00:00:08.390218 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:08.408544 kubelet[2515]: E0707 00:00:08.408510 2515 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 7 00:00:08.408876 kubelet[2515]: E0707 00:00:08.408751 2515 projected.go:194] Error preparing data for projected volume kube-api-access-56c8z for pod kube-system/kube-proxy-h6lst: configmap "kube-root-ca.crt" not found Jul 7 00:00:08.408876 kubelet[2515]: E0707 00:00:08.408852 2515 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4a3b46de-5a11-41bd-8f20-a06402405223-kube-api-access-56c8z podName:4a3b46de-5a11-41bd-8f20-a06402405223 nodeName:}" failed. No retries permitted until 2025-07-07 00:00:08.908824033 +0000 UTC m=+5.825556929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-56c8z" (UniqueName: "kubernetes.io/projected/4a3b46de-5a11-41bd-8f20-a06402405223-kube-api-access-56c8z") pod "kube-proxy-h6lst" (UID: "4a3b46de-5a11-41bd-8f20-a06402405223") : configmap "kube-root-ca.crt" not found Jul 7 00:00:08.964382 systemd[1]: Created slice kubepods-besteffort-pod2e73e43b_ca38_48b7_8b7e_9362ec0ca62f.slice - libcontainer container kubepods-besteffort-pod2e73e43b_ca38_48b7_8b7e_9362ec0ca62f.slice. 
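The slice systemd just created for kube-proxy shows the pod cgroup naming convention used throughout this log: kubepods-&lt;qos&gt;-pod&lt;uid&gt;.slice, with the pod UID's hyphens mapped to underscores because "-" is the path separator in systemd slice names (compare the earlier burstable slices, whose UIDs contain no hyphens). A tiny sketch of that mapping, covering only the burstable and besteffort forms seen here:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming visible in the log: hyphens in the pod
// UID become underscores so they are not parsed as slice separators.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "4a3b46de-5a11-41bd-8f20-a06402405223"))
	// kubepods-besteffort-pod4a3b46de_5a11_41bd_8f20_a06402405223.slice
}
```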
Jul 7 00:00:09.006999 kubelet[2515]: I0707 00:00:09.006921 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2e73e43b-ca38-48b7-8b7e-9362ec0ca62f-var-lib-calico\") pod \"tigera-operator-747864d56d-flgqj\" (UID: \"2e73e43b-ca38-48b7-8b7e-9362ec0ca62f\") " pod="tigera-operator/tigera-operator-747864d56d-flgqj" Jul 7 00:00:09.007394 kubelet[2515]: I0707 00:00:09.007011 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj5rp\" (UniqueName: \"kubernetes.io/projected/2e73e43b-ca38-48b7-8b7e-9362ec0ca62f-kube-api-access-cj5rp\") pod \"tigera-operator-747864d56d-flgqj\" (UID: \"2e73e43b-ca38-48b7-8b7e-9362ec0ca62f\") " pod="tigera-operator/tigera-operator-747864d56d-flgqj" Jul 7 00:00:09.111905 kubelet[2515]: E0707 00:00:09.111841 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:09.112688 containerd[1456]: time="2025-07-07T00:00:09.112535718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h6lst,Uid:4a3b46de-5a11-41bd-8f20-a06402405223,Namespace:kube-system,Attempt:0,}" Jul 7 00:00:09.139365 containerd[1456]: time="2025-07-07T00:00:09.139259608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:09.139365 containerd[1456]: time="2025-07-07T00:00:09.139332025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:09.139365 containerd[1456]: time="2025-07-07T00:00:09.139348486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:09.139653 containerd[1456]: time="2025-07-07T00:00:09.139438478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:09.161098 systemd[1]: Started cri-containerd-7a6bc24d91585738f8f9b9b403f41f66da437359786cbd62a3f4702422174c00.scope - libcontainer container 7a6bc24d91585738f8f9b9b403f41f66da437359786cbd62a3f4702422174c00. 
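The recurring dns.go "Nameserver limits exceeded" warning comes from the kubelet truncating the node's resolv.conf: the resolver supports only a small fixed number of nameservers (three, the glibc MAXNS limit), so only the first three are applied, which is why every such entry shows exactly "1.1.1.1 1.0.0.1 8.8.8.8". A small sketch of that check; the limit of 3 is assumed to match what the kubelet enforces:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const maxNS = 3 // assumed resolver limit (glibc MAXNS)
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNS {
		// Mirrors the log: extra nameservers are omitted, the first
		// three become the applied nameserver line.
		fmt.Println("Nameserver limits exceeded, applying:", strings.Join(servers[:maxNS], " "))
	} else {
		fmt.Println("within limits:", strings.Join(servers, " "))
	}
}
```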
Jul 7 00:00:09.182337 containerd[1456]: time="2025-07-07T00:00:09.182295114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h6lst,Uid:4a3b46de-5a11-41bd-8f20-a06402405223,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a6bc24d91585738f8f9b9b403f41f66da437359786cbd62a3f4702422174c00\"" Jul 7 00:00:09.183061 kubelet[2515]: E0707 00:00:09.183040 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:09.187506 containerd[1456]: time="2025-07-07T00:00:09.187469769Z" level=info msg="CreateContainer within sandbox \"7a6bc24d91585738f8f9b9b403f41f66da437359786cbd62a3f4702422174c00\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:00:09.203027 containerd[1456]: time="2025-07-07T00:00:09.202643269Z" level=info msg="CreateContainer within sandbox \"7a6bc24d91585738f8f9b9b403f41f66da437359786cbd62a3f4702422174c00\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7a54adcb0dc68b25ad065d242428722fbef21341e45bdfb53efd82cbd0b86c9d\"" Jul 7 00:00:09.203441 kubelet[2515]: E0707 00:00:09.203378 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:09.203571 containerd[1456]: time="2025-07-07T00:00:09.203515871Z" level=info msg="StartContainer for \"7a54adcb0dc68b25ad065d242428722fbef21341e45bdfb53efd82cbd0b86c9d\"" Jul 7 00:00:09.230090 systemd[1]: Started cri-containerd-7a54adcb0dc68b25ad065d242428722fbef21341e45bdfb53efd82cbd0b86c9d.scope - libcontainer container 7a54adcb0dc68b25ad065d242428722fbef21341e45bdfb53efd82cbd0b86c9d. Jul 7 00:00:09.267770 containerd[1456]: time="2025-07-07T00:00:09.267733642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-flgqj,Uid:2e73e43b-ca38-48b7-8b7e-9362ec0ca62f,Namespace:tigera-operator,Attempt:0,}" Jul 7 00:00:09.290511 containerd[1456]: time="2025-07-07T00:00:09.290462743Z" level=info msg="StartContainer for \"7a54adcb0dc68b25ad065d242428722fbef21341e45bdfb53efd82cbd0b86c9d\" returns successfully" Jul 7 00:00:09.313200 containerd[1456]: time="2025-07-07T00:00:09.313051666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:09.313200 containerd[1456]: time="2025-07-07T00:00:09.313103185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:09.313200 containerd[1456]: time="2025-07-07T00:00:09.313117281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:09.313365 containerd[1456]: time="2025-07-07T00:00:09.313186092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:09.333200 systemd[1]: Started cri-containerd-bdb50ad4e3a75a279ac2a4a1286d35c2dee3aec75e6e473c083c9bcb35b37639.scope - libcontainer container bdb50ad4e3a75a279ac2a4a1286d35c2dee3aec75e6e473c083c9bcb35b37639. 
Jul 7 00:00:09.372201 containerd[1456]: time="2025-07-07T00:00:09.372168883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-flgqj,Uid:2e73e43b-ca38-48b7-8b7e-9362ec0ca62f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bdb50ad4e3a75a279ac2a4a1286d35c2dee3aec75e6e473c083c9bcb35b37639\""
Jul 7 00:00:09.374203 containerd[1456]: time="2025-07-07T00:00:09.374174062Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 7 00:00:10.206439 kubelet[2515]: E0707 00:00:10.206405 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:00:10.214639 kubelet[2515]: I0707 00:00:10.214577 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h6lst" podStartSLOduration=2.214555976 podStartE2EDuration="2.214555976s" podCreationTimestamp="2025-07-07 00:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:00:10.213895408 +0000 UTC m=+7.130628303" watchObservedRunningTime="2025-07-07 00:00:10.214555976 +0000 UTC m=+7.131288871"
Jul 7 00:00:10.926224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598980038.mount: Deactivated successfully.
Jul 7 00:00:11.209438 kubelet[2515]: E0707 00:00:11.208987 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:00:11.590431 containerd[1456]: time="2025-07-07T00:00:11.590271614Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:11.591262 containerd[1456]: time="2025-07-07T00:00:11.591227261Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 7 00:00:11.592518 containerd[1456]: time="2025-07-07T00:00:11.592465475Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:11.594501 containerd[1456]: time="2025-07-07T00:00:11.594468984Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:11.595148 containerd[1456]: time="2025-07-07T00:00:11.595111254Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.220906434s"
Jul 7 00:00:11.595148 containerd[1456]: time="2025-07-07T00:00:11.595140250Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 7 00:00:11.599584 containerd[1456]: time="2025-07-07T00:00:11.599552408Z" level=info msg="CreateContainer within sandbox \"bdb50ad4e3a75a279ac2a4a1286d35c2dee3aec75e6e473c083c9bcb35b37639\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 7 00:00:11.610519 containerd[1456]: time="2025-07-07T00:00:11.610477417Z" level=info msg="CreateContainer within sandbox \"bdb50ad4e3a75a279ac2a4a1286d35c2dee3aec75e6e473c083c9bcb35b37639\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8391fa1fc33cc8c121dde2a849efea40ca25696d036bf69a6446aca64a25b961\""
Jul 7 00:00:11.610901 containerd[1456]: time="2025-07-07T00:00:11.610859203Z" level=info msg="StartContainer for \"8391fa1fc33cc8c121dde2a849efea40ca25696d036bf69a6446aca64a25b961\""
Jul 7 00:00:11.643113 systemd[1]: Started cri-containerd-8391fa1fc33cc8c121dde2a849efea40ca25696d036bf69a6446aca64a25b961.scope - libcontainer container 8391fa1fc33cc8c121dde2a849efea40ca25696d036bf69a6446aca64a25b961.
Jul 7 00:00:11.821605 containerd[1456]: time="2025-07-07T00:00:11.821560803Z" level=info msg="StartContainer for \"8391fa1fc33cc8c121dde2a849efea40ca25696d036bf69a6446aca64a25b961\" returns successfully"
Jul 7 00:00:11.886721 kubelet[2515]: E0707 00:00:11.886565 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:00:12.212005 kubelet[2515]: E0707 00:00:12.211845 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:00:12.312641 kubelet[2515]: I0707 00:00:12.312579 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-flgqj" podStartSLOduration=2.089823187 podStartE2EDuration="4.312554583s" podCreationTimestamp="2025-07-07 00:00:08 +0000 UTC" firstStartedPulling="2025-07-07 00:00:09.373434774 +0000 UTC m=+6.290167669" lastFinishedPulling="2025-07-07 00:00:11.59616617 +0000 UTC m=+8.512899065" observedRunningTime="2025-07-07 00:00:12.311700781 +0000 UTC m=+9.228433676" watchObservedRunningTime="2025-07-07 00:00:12.312554583 +0000 UTC m=+9.229287508"
Jul 7 00:00:12.864561 kubelet[2515]: E0707 00:00:12.864515 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:00:13.214478 kubelet[2515]: E0707 00:00:13.214270 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:00:14.216475 kubelet[2515]: E0707 00:00:14.216345 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:00:14.330568 update_engine[1444]: I20250707 00:00:14.330387 1444 update_attempter.cc:509] Updating boot flags...
Jul 7 00:00:14.362268 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2909)
Jul 7 00:00:14.412686 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2907)
Jul 7 00:00:14.452981 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2907)
Jul 7 00:00:16.821066 sudo[1636]: pam_unix(sudo:session): session closed for user root
Jul 7 00:00:16.824236 sshd[1633]: pam_unix(sshd:session): session closed for user core
Jul 7 00:00:16.828655 systemd[1]: sshd@6-10.0.0.146:22-10.0.0.1:33480.service: Deactivated successfully.
Jul 7 00:00:16.830911 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 00:00:16.831147 systemd[1]: session-7.scope: Consumed 6.247s CPU time, 159.7M memory peak, 0B memory swap peak.
Jul 7 00:00:16.833454 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit.
Jul 7 00:00:16.834562 systemd-logind[1440]: Removed session 7.
Jul 7 00:00:19.165270 systemd[1]: Created slice kubepods-besteffort-pod2c2d9e67_3baa_4408_a499_9b6a3e288310.slice - libcontainer container kubepods-besteffort-pod2c2d9e67_3baa_4408_a499_9b6a3e288310.slice.
Jul 7 00:00:19.172293 kubelet[2515]: I0707 00:00:19.172205 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwfhn\" (UniqueName: \"kubernetes.io/projected/2c2d9e67-3baa-4408-a499-9b6a3e288310-kube-api-access-kwfhn\") pod \"calico-typha-6dc8cc4956-rpmtb\" (UID: \"2c2d9e67-3baa-4408-a499-9b6a3e288310\") " pod="calico-system/calico-typha-6dc8cc4956-rpmtb"
Jul 7 00:00:19.172883 kubelet[2515]: I0707 00:00:19.172536 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c2d9e67-3baa-4408-a499-9b6a3e288310-tigera-ca-bundle\") pod \"calico-typha-6dc8cc4956-rpmtb\" (UID: \"2c2d9e67-3baa-4408-a499-9b6a3e288310\") " pod="calico-system/calico-typha-6dc8cc4956-rpmtb"
Jul 7 00:00:19.172883 kubelet[2515]: I0707 00:00:19.172560 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2c2d9e67-3baa-4408-a499-9b6a3e288310-typha-certs\") pod \"calico-typha-6dc8cc4956-rpmtb\" (UID: \"2c2d9e67-3baa-4408-a499-9b6a3e288310\") " pod="calico-system/calico-typha-6dc8cc4956-rpmtb"
Jul 7 00:00:19.468139 kubelet[2515]: E0707 00:00:19.468021 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:00:19.469028 containerd[1456]: time="2025-07-07T00:00:19.468978967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dc8cc4956-rpmtb,Uid:2c2d9e67-3baa-4408-a499-9b6a3e288310,Namespace:calico-system,Attempt:0,}"
Jul 7 00:00:19.471807 systemd[1]: Created slice kubepods-besteffort-pod6d809fab_fefe_43d6_aa09_58ecf0d0a835.slice - libcontainer container kubepods-besteffort-pod6d809fab_fefe_43d6_aa09_58ecf0d0a835.slice.
Jul 7 00:00:19.474771 kubelet[2515]: I0707 00:00:19.474650 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d809fab-fefe-43d6-aa09-58ecf0d0a835-lib-modules\") pod \"calico-node-t5f6f\" (UID: \"6d809fab-fefe-43d6-aa09-58ecf0d0a835\") " pod="calico-system/calico-node-t5f6f"
Jul 7 00:00:19.474771 kubelet[2515]: I0707 00:00:19.474692 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg786\" (UniqueName: \"kubernetes.io/projected/6d809fab-fefe-43d6-aa09-58ecf0d0a835-kube-api-access-dg786\") pod \"calico-node-t5f6f\" (UID: \"6d809fab-fefe-43d6-aa09-58ecf0d0a835\") " pod="calico-system/calico-node-t5f6f"
Jul 7 00:00:19.474771 kubelet[2515]: I0707 00:00:19.474712 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6d809fab-fefe-43d6-aa09-58ecf0d0a835-cni-bin-dir\") pod \"calico-node-t5f6f\" (UID: \"6d809fab-fefe-43d6-aa09-58ecf0d0a835\") " pod="calico-system/calico-node-t5f6f"
Jul 7 00:00:19.474771 kubelet[2515]: I0707 00:00:19.474730 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6d809fab-fefe-43d6-aa09-58ecf0d0a835-flexvol-driver-host\") pod \"calico-node-t5f6f\" (UID: \"6d809fab-fefe-43d6-aa09-58ecf0d0a835\") " pod="calico-system/calico-node-t5f6f"
Jul 7 00:00:19.474771 kubelet[2515]: I0707 00:00:19.474772 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6d809fab-fefe-43d6-aa09-58ecf0d0a835-cni-net-dir\") pod \"calico-node-t5f6f\" (UID: \"6d809fab-fefe-43d6-aa09-58ecf0d0a835\") " pod="calico-system/calico-node-t5f6f"
Jul 7 00:00:19.474981 kubelet[2515]: I0707 00:00:19.474792 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6d809fab-fefe-43d6-aa09-58ecf0d0a835-var-lib-calico\") pod \"calico-node-t5f6f\" (UID: \"6d809fab-fefe-43d6-aa09-58ecf0d0a835\") " pod="calico-system/calico-node-t5f6f"
Jul 7 00:00:19.474981 kubelet[2515]: I0707 00:00:19.474808 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6d809fab-fefe-43d6-aa09-58ecf0d0a835-node-certs\") pod \"calico-node-t5f6f\" (UID: \"6d809fab-fefe-43d6-aa09-58ecf0d0a835\") " pod="calico-system/calico-node-t5f6f"
Jul 7 00:00:19.474981 kubelet[2515]: I0707 00:00:19.474822 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6d809fab-fefe-43d6-aa09-58ecf0d0a835-policysync\") pod \"calico-node-t5f6f\" (UID: \"6d809fab-fefe-43d6-aa09-58ecf0d0a835\") " pod="calico-system/calico-node-t5f6f"
Jul 7 00:00:19.474981 kubelet[2515]: I0707 00:00:19.474877 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d809fab-fefe-43d6-aa09-58ecf0d0a835-tigera-ca-bundle\") pod \"calico-node-t5f6f\" (UID: \"6d809fab-fefe-43d6-aa09-58ecf0d0a835\") " pod="calico-system/calico-node-t5f6f"
Jul 7 00:00:19.474981 kubelet[2515]: I0707 00:00:19.474893 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d809fab-fefe-43d6-aa09-58ecf0d0a835-xtables-lock\") pod \"calico-node-t5f6f\" (UID: \"6d809fab-fefe-43d6-aa09-58ecf0d0a835\") " pod="calico-system/calico-node-t5f6f"
Jul 7 00:00:19.475117 kubelet[2515]: I0707 00:00:19.474909 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6d809fab-fefe-43d6-aa09-58ecf0d0a835-var-run-calico\") pod \"calico-node-t5f6f\" (UID: \"6d809fab-fefe-43d6-aa09-58ecf0d0a835\") " pod="calico-system/calico-node-t5f6f"
Jul 7 00:00:19.475117 kubelet[2515]: I0707 00:00:19.474928 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6d809fab-fefe-43d6-aa09-58ecf0d0a835-cni-log-dir\") pod \"calico-node-t5f6f\" (UID: \"6d809fab-fefe-43d6-aa09-58ecf0d0a835\") " pod="calico-system/calico-node-t5f6f"
Jul 7 00:00:19.496622 containerd[1456]: time="2025-07-07T00:00:19.495707162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 00:00:19.496622 containerd[1456]: time="2025-07-07T00:00:19.495782915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 00:00:19.496622 containerd[1456]: time="2025-07-07T00:00:19.495798615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:00:19.496622 containerd[1456]: time="2025-07-07T00:00:19.496410051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:00:19.522087 systemd[1]: Started cri-containerd-c6a4a04c8878b935b82f833ebe0ff0b5d31bee0c7481f326be0fab33138645c1.scope - libcontainer container c6a4a04c8878b935b82f833ebe0ff0b5d31bee0c7481f326be0fab33138645c1.
Jul 7 00:00:19.561476 containerd[1456]: time="2025-07-07T00:00:19.561412131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dc8cc4956-rpmtb,Uid:2c2d9e67-3baa-4408-a499-9b6a3e288310,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6a4a04c8878b935b82f833ebe0ff0b5d31bee0c7481f326be0fab33138645c1\""
Jul 7 00:00:19.564826 kubelet[2515]: E0707 00:00:19.564791 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:00:19.568698 containerd[1456]: time="2025-07-07T00:00:19.568658662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 7 00:00:19.578654 kubelet[2515]: E0707 00:00:19.578611 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.578654 kubelet[2515]: W0707 00:00:19.578635 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.578772 kubelet[2515]: E0707 00:00:19.578661 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.580158 kubelet[2515]: E0707 00:00:19.580112 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.580158 kubelet[2515]: W0707 00:00:19.580136 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.580158 kubelet[2515]: E0707 00:00:19.580157 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.584146 kubelet[2515]: E0707 00:00:19.584120 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.584146 kubelet[2515]: W0707 00:00:19.584133 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.584146 kubelet[2515]: E0707 00:00:19.584143 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.758440 kubelet[2515]: E0707 00:00:19.758306 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnpjz" podUID="febfaed7-7f45-4497-b75d-f0ee8f991481"
Jul 7 00:00:19.773052 kubelet[2515]: E0707 00:00:19.773018 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.773052 kubelet[2515]: W0707 00:00:19.773041 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.773204 kubelet[2515]: E0707 00:00:19.773064 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.773345 kubelet[2515]: E0707 00:00:19.773322 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.773345 kubelet[2515]: W0707 00:00:19.773333 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.773345 kubelet[2515]: E0707 00:00:19.773342 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.773547 kubelet[2515]: E0707 00:00:19.773532 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.773547 kubelet[2515]: W0707 00:00:19.773542 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.773612 kubelet[2515]: E0707 00:00:19.773554 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.773846 kubelet[2515]: E0707 00:00:19.773831 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.773846 kubelet[2515]: W0707 00:00:19.773841 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.773896 kubelet[2515]: E0707 00:00:19.773851 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.774072 kubelet[2515]: E0707 00:00:19.774057 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.774072 kubelet[2515]: W0707 00:00:19.774067 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.774125 kubelet[2515]: E0707 00:00:19.774075 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.774269 kubelet[2515]: E0707 00:00:19.774255 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.774269 kubelet[2515]: W0707 00:00:19.774264 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.774316 kubelet[2515]: E0707 00:00:19.774272 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.774459 kubelet[2515]: E0707 00:00:19.774445 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.774459 kubelet[2515]: W0707 00:00:19.774454 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.774504 kubelet[2515]: E0707 00:00:19.774462 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.774689 kubelet[2515]: E0707 00:00:19.774663 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.774689 kubelet[2515]: W0707 00:00:19.774685 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.774740 kubelet[2515]: E0707 00:00:19.774693 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.774888 kubelet[2515]: E0707 00:00:19.774874 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.774888 kubelet[2515]: W0707 00:00:19.774884 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.774929 kubelet[2515]: E0707 00:00:19.774892 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.775154 kubelet[2515]: E0707 00:00:19.775129 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.775154 kubelet[2515]: W0707 00:00:19.775143 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.775154 kubelet[2515]: E0707 00:00:19.775152 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.775399 kubelet[2515]: E0707 00:00:19.775377 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.775423 kubelet[2515]: W0707 00:00:19.775398 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.775423 kubelet[2515]: E0707 00:00:19.775418 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.775682 kubelet[2515]: E0707 00:00:19.775655 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.775682 kubelet[2515]: W0707 00:00:19.775667 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.775738 kubelet[2515]: E0707 00:00:19.775689 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.775965 kubelet[2515]: E0707 00:00:19.775937 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.775965 kubelet[2515]: W0707 00:00:19.775959 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.776130 kubelet[2515]: E0707 00:00:19.775967 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.776190 kubelet[2515]: E0707 00:00:19.776176 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.776190 kubelet[2515]: W0707 00:00:19.776185 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.776231 kubelet[2515]: E0707 00:00:19.776193 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.776383 kubelet[2515]: E0707 00:00:19.776370 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.776383 kubelet[2515]: W0707 00:00:19.776379 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.776431 kubelet[2515]: E0707 00:00:19.776387 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.776584 kubelet[2515]: E0707 00:00:19.776568 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.776584 kubelet[2515]: W0707 00:00:19.776580 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.776633 kubelet[2515]: E0707 00:00:19.776589 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.776837 kubelet[2515]: E0707 00:00:19.776822 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.776837 kubelet[2515]: W0707 00:00:19.776833 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.776889 kubelet[2515]: E0707 00:00:19.776841 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.777058 kubelet[2515]: E0707 00:00:19.777044 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.777058 kubelet[2515]: W0707 00:00:19.777054 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.777106 kubelet[2515]: E0707 00:00:19.777064 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.777323 kubelet[2515]: E0707 00:00:19.777309 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.777323 kubelet[2515]: W0707 00:00:19.777320 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.777367 kubelet[2515]: E0707 00:00:19.777328 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.777533 kubelet[2515]: E0707 00:00:19.777520 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.777533 kubelet[2515]: W0707 00:00:19.777530 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.777574 kubelet[2515]: E0707 00:00:19.777537 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.777923 kubelet[2515]: E0707 00:00:19.777890 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.777923 kubelet[2515]: W0707 00:00:19.777913 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.777990 kubelet[2515]: E0707 00:00:19.777938 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.777990 kubelet[2515]: I0707 00:00:19.777979 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/febfaed7-7f45-4497-b75d-f0ee8f991481-registration-dir\") pod \"csi-node-driver-qnpjz\" (UID: \"febfaed7-7f45-4497-b75d-f0ee8f991481\") " pod="calico-system/csi-node-driver-qnpjz"
Jul 7 00:00:19.778228 kubelet[2515]: E0707 00:00:19.778211 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.778228 kubelet[2515]: W0707 00:00:19.778223 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.778287 kubelet[2515]: E0707 00:00:19.778234 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.778287 kubelet[2515]: I0707 00:00:19.778255 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q7tz\" (UniqueName: \"kubernetes.io/projected/febfaed7-7f45-4497-b75d-f0ee8f991481-kube-api-access-4q7tz\") pod \"csi-node-driver-qnpjz\" (UID: \"febfaed7-7f45-4497-b75d-f0ee8f991481\") " pod="calico-system/csi-node-driver-qnpjz"
Jul 7 00:00:19.778339 containerd[1456]: time="2025-07-07T00:00:19.778220332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t5f6f,Uid:6d809fab-fefe-43d6-aa09-58ecf0d0a835,Namespace:calico-system,Attempt:0,}"
Jul 7 00:00:19.778476 kubelet[2515]: E0707 00:00:19.778460 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.778476 kubelet[2515]: W0707 00:00:19.778472 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.778530 kubelet[2515]: E0707 00:00:19.778481 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.778530 kubelet[2515]: I0707 00:00:19.778501 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/febfaed7-7f45-4497-b75d-f0ee8f991481-kubelet-dir\") pod \"csi-node-driver-qnpjz\" (UID: \"febfaed7-7f45-4497-b75d-f0ee8f991481\") " pod="calico-system/csi-node-driver-qnpjz"
Jul 7 00:00:19.778778 kubelet[2515]: E0707 00:00:19.778748 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.778778 kubelet[2515]: W0707 00:00:19.778762 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.778778 kubelet[2515]: E0707 00:00:19.778772 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.778853 kubelet[2515]: I0707 00:00:19.778794 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/febfaed7-7f45-4497-b75d-f0ee8f991481-socket-dir\") pod \"csi-node-driver-qnpjz\" (UID: \"febfaed7-7f45-4497-b75d-f0ee8f991481\") " pod="calico-system/csi-node-driver-qnpjz"
Jul 7 00:00:19.779104 kubelet[2515]: E0707 00:00:19.779067 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.779104 kubelet[2515]: W0707 00:00:19.779092 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.779104 kubelet[2515]: E0707 00:00:19.779104 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.779336 kubelet[2515]: E0707 00:00:19.779308 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.779336 kubelet[2515]: W0707 00:00:19.779320 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.779336 kubelet[2515]: E0707 00:00:19.779329 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.779659 kubelet[2515]: E0707 00:00:19.779624 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.779659 kubelet[2515]: W0707 00:00:19.779661 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.779827 kubelet[2515]: E0707 00:00:19.779699 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.780024 kubelet[2515]: E0707 00:00:19.779972 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.780024 kubelet[2515]: W0707 00:00:19.780019 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.780115 kubelet[2515]: E0707 00:00:19.780031 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.780305 kubelet[2515]: E0707 00:00:19.780285 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.780305 kubelet[2515]: W0707 00:00:19.780298 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.780390 kubelet[2515]: E0707 00:00:19.780310 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.780550 kubelet[2515]: E0707 00:00:19.780531 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.780550 kubelet[2515]: W0707 00:00:19.780546 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.780640 kubelet[2515]: E0707 00:00:19.780558 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.780640 kubelet[2515]: I0707 00:00:19.780612 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/febfaed7-7f45-4497-b75d-f0ee8f991481-varrun\") pod \"csi-node-driver-qnpjz\" (UID: \"febfaed7-7f45-4497-b75d-f0ee8f991481\") " pod="calico-system/csi-node-driver-qnpjz"
Jul 7 00:00:19.780819 kubelet[2515]: E0707 00:00:19.780803 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.780819 kubelet[2515]: W0707 00:00:19.780815 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.780864 kubelet[2515]: E0707 00:00:19.780824 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.781062 kubelet[2515]: E0707 00:00:19.781044 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.781062 kubelet[2515]: W0707 00:00:19.781056 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.781062 kubelet[2515]: E0707 00:00:19.781064 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.781381 kubelet[2515]: E0707 00:00:19.781364 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.781381 kubelet[2515]: W0707 00:00:19.781378 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.781433 kubelet[2515]: E0707 00:00:19.781389 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.781787 kubelet[2515]: E0707 00:00:19.781634 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.781787 kubelet[2515]: W0707 00:00:19.781651 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.781787 kubelet[2515]: E0707 00:00:19.781664 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.781982 kubelet[2515]: E0707 00:00:19.781918 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:19.781982 kubelet[2515]: W0707 00:00:19.781929 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:19.782146 kubelet[2515]: E0707 00:00:19.781939 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 00:00:19.803801 containerd[1456]: time="2025-07-07T00:00:19.803113939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 00:00:19.803801 containerd[1456]: time="2025-07-07T00:00:19.803766523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 00:00:19.803801 containerd[1456]: time="2025-07-07T00:00:19.803783795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:00:19.804316 containerd[1456]: time="2025-07-07T00:00:19.804146652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 00:00:19.824156 systemd[1]: Started cri-containerd-28d147800873504c8f672346817311af1614b0ab9b2a7b7fccbe35479143759b.scope - libcontainer container 28d147800873504c8f672346817311af1614b0ab9b2a7b7fccbe35479143759b.
Jul 7 00:00:19.857805 containerd[1456]: time="2025-07-07T00:00:19.857738137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t5f6f,Uid:6d809fab-fefe-43d6-aa09-58ecf0d0a835,Namespace:calico-system,Attempt:0,} returns sandbox id \"28d147800873504c8f672346817311af1614b0ab9b2a7b7fccbe35479143759b\"" Jul 7 00:00:19.882551 kubelet[2515]: E0707 00:00:19.882492 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.882551 kubelet[2515]: W0707 00:00:19.882519 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.882551 kubelet[2515]: E0707 00:00:19.882540 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.882897 kubelet[2515]: E0707 00:00:19.882858 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.882897 kubelet[2515]: W0707 00:00:19.882880 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.883082 kubelet[2515]: E0707 00:00:19.882905 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.883213 kubelet[2515]: E0707 00:00:19.883190 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.883213 kubelet[2515]: W0707 00:00:19.883210 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.883300 kubelet[2515]: E0707 00:00:19.883220 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.883536 kubelet[2515]: E0707 00:00:19.883503 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.883536 kubelet[2515]: W0707 00:00:19.883522 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.883536 kubelet[2515]: E0707 00:00:19.883532 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:19.883858 kubelet[2515]: E0707 00:00:19.883831 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.884062 kubelet[2515]: W0707 00:00:19.883905 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.884062 kubelet[2515]: E0707 00:00:19.883935 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.884413 kubelet[2515]: E0707 00:00:19.884385 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.884447 kubelet[2515]: W0707 00:00:19.884412 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.884447 kubelet[2515]: E0707 00:00:19.884439 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.884812 kubelet[2515]: E0707 00:00:19.884790 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.884812 kubelet[2515]: W0707 00:00:19.884806 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.884885 kubelet[2515]: E0707 00:00:19.884818 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.885242 kubelet[2515]: E0707 00:00:19.885226 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.885242 kubelet[2515]: W0707 00:00:19.885238 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.885318 kubelet[2515]: E0707 00:00:19.885248 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.885502 kubelet[2515]: E0707 00:00:19.885487 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.885502 kubelet[2515]: W0707 00:00:19.885498 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.885565 kubelet[2515]: E0707 00:00:19.885507 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:19.885770 kubelet[2515]: E0707 00:00:19.885755 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.885770 kubelet[2515]: W0707 00:00:19.885766 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.885892 kubelet[2515]: E0707 00:00:19.885775 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.886025 kubelet[2515]: E0707 00:00:19.886010 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.886025 kubelet[2515]: W0707 00:00:19.886021 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.886072 kubelet[2515]: E0707 00:00:19.886032 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.886286 kubelet[2515]: E0707 00:00:19.886270 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.886286 kubelet[2515]: W0707 00:00:19.886283 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.886355 kubelet[2515]: E0707 00:00:19.886294 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.886534 kubelet[2515]: E0707 00:00:19.886519 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.886534 kubelet[2515]: W0707 00:00:19.886529 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.886582 kubelet[2515]: E0707 00:00:19.886538 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.886773 kubelet[2515]: E0707 00:00:19.886757 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.886773 kubelet[2515]: W0707 00:00:19.886767 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.886839 kubelet[2515]: E0707 00:00:19.886776 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:19.887088 kubelet[2515]: E0707 00:00:19.887071 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.887088 kubelet[2515]: W0707 00:00:19.887085 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.887136 kubelet[2515]: E0707 00:00:19.887095 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.887337 kubelet[2515]: E0707 00:00:19.887320 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.887337 kubelet[2515]: W0707 00:00:19.887334 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.887392 kubelet[2515]: E0707 00:00:19.887345 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.887557 kubelet[2515]: E0707 00:00:19.887542 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.887557 kubelet[2515]: W0707 00:00:19.887553 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.887610 kubelet[2515]: E0707 00:00:19.887561 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.887789 kubelet[2515]: E0707 00:00:19.887774 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.887789 kubelet[2515]: W0707 00:00:19.887784 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.887857 kubelet[2515]: E0707 00:00:19.887792 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.888036 kubelet[2515]: E0707 00:00:19.888022 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.888036 kubelet[2515]: W0707 00:00:19.888032 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.888085 kubelet[2515]: E0707 00:00:19.888040 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:19.888249 kubelet[2515]: E0707 00:00:19.888234 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.888249 kubelet[2515]: W0707 00:00:19.888244 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.888307 kubelet[2515]: E0707 00:00:19.888252 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.888463 kubelet[2515]: E0707 00:00:19.888449 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.888463 kubelet[2515]: W0707 00:00:19.888459 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.888508 kubelet[2515]: E0707 00:00:19.888466 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.888774 kubelet[2515]: E0707 00:00:19.888757 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.888774 kubelet[2515]: W0707 00:00:19.888771 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.888834 kubelet[2515]: E0707 00:00:19.888782 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.889165 kubelet[2515]: E0707 00:00:19.889146 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.889165 kubelet[2515]: W0707 00:00:19.889158 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.889225 kubelet[2515]: E0707 00:00:19.889170 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.889437 kubelet[2515]: E0707 00:00:19.889421 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.889437 kubelet[2515]: W0707 00:00:19.889432 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.889492 kubelet[2515]: E0707 00:00:19.889440 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:00:19.889708 kubelet[2515]: E0707 00:00:19.889693 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.889708 kubelet[2515]: W0707 00:00:19.889704 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.889759 kubelet[2515]: E0707 00:00:19.889713 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:19.896369 kubelet[2515]: E0707 00:00:19.896342 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:00:19.896369 kubelet[2515]: W0707 00:00:19.896354 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:00:19.896369 kubelet[2515]: E0707 00:00:19.896365 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:00:21.186795 kubelet[2515]: E0707 00:00:21.186728 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnpjz" podUID="febfaed7-7f45-4497-b75d-f0ee8f991481" Jul 7 00:00:22.641906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3279842860.mount: Deactivated successfully. 
Jul 7 00:00:23.185211 kubelet[2515]: E0707 00:00:23.185159 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnpjz" podUID="febfaed7-7f45-4497-b75d-f0ee8f991481"
Jul 7 00:00:24.308005 containerd[1456]: time="2025-07-07T00:00:24.307930468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:24.308808 containerd[1456]: time="2025-07-07T00:00:24.308764322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 7 00:00:24.309888 containerd[1456]: time="2025-07-07T00:00:24.309844380Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:24.311761 containerd[1456]: time="2025-07-07T00:00:24.311733032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:24.312347 containerd[1456]: time="2025-07-07T00:00:24.312315160Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 4.74353574s"
Jul 7 00:00:24.312380 containerd[1456]: time="2025-07-07T00:00:24.312344757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 7 00:00:24.313342 containerd[1456]: time="2025-07-07T00:00:24.313219085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 7 00:00:24.327737 containerd[1456]: time="2025-07-07T00:00:24.327702739Z" level=info msg="CreateContainer within sandbox \"c6a4a04c8878b935b82f833ebe0ff0b5d31bee0c7481f326be0fab33138645c1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 7 00:00:24.341380 containerd[1456]: time="2025-07-07T00:00:24.341337491Z" level=info msg="CreateContainer within sandbox \"c6a4a04c8878b935b82f833ebe0ff0b5d31bee0c7481f326be0fab33138645c1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"179e35b46e75727b7b01ccb586c1e1bf7a1c07f2b08f244c45c76bc96f1585ad\""
Jul 7 00:00:24.341868 containerd[1456]: time="2025-07-07T00:00:24.341847652Z" level=info msg="StartContainer for \"179e35b46e75727b7b01ccb586c1e1bf7a1c07f2b08f244c45c76bc96f1585ad\""
Jul 7 00:00:24.373053 systemd[1]: Started cri-containerd-179e35b46e75727b7b01ccb586c1e1bf7a1c07f2b08f244c45c76bc96f1585ad.scope - libcontainer container 179e35b46e75727b7b01ccb586c1e1bf7a1c07f2b08f244c45c76bc96f1585ad.
Jul 7 00:00:24.414192 containerd[1456]: time="2025-07-07T00:00:24.414148096Z" level=info msg="StartContainer for \"179e35b46e75727b7b01ccb586c1e1bf7a1c07f2b08f244c45c76bc96f1585ad\" returns successfully"
Jul 7 00:00:25.185513 kubelet[2515]: E0707 00:00:25.185467 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnpjz" podUID="febfaed7-7f45-4497-b75d-f0ee8f991481"
Jul 7 00:00:25.236967 kubelet[2515]: E0707 00:00:25.236908 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:00:25.253971 kubelet[2515]: I0707 00:00:25.250704 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6dc8cc4956-rpmtb" podStartSLOduration=1.503124371 podStartE2EDuration="6.250685458s" podCreationTimestamp="2025-07-07 00:00:19 +0000 UTC" firstStartedPulling="2025-07-07 00:00:19.565543883 +0000 UTC m=+16.482276778" lastFinishedPulling="2025-07-07 00:00:24.31310497 +0000 UTC m=+21.229837865" observedRunningTime="2025-07-07 00:00:25.246621925 +0000 UTC m=+22.163354820" watchObservedRunningTime="2025-07-07 00:00:25.250685458 +0000 UTC m=+22.167418353"
Jul 7 00:00:25.314249 kubelet[2515]: E0707 00:00:25.314214 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:25.314249 kubelet[2515]: W0707 00:00:25.314238 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:25.314414 kubelet[2515]: E0707 00:00:25.314266 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three kubelet messages repeat as the plugin probe retries, through Jul 7 00:00:25.330676]
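The startup-latency figures in the pod_startup_latency_tracker entry above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), matching the kubelet's pod-startup SLI, which excludes pull time. A small check in Go using the seconds fields from that entry (the decomposition is inferred from the numbers, which agree exactly):

```go
package main

import "fmt"

func main() {
	// Seconds past 00:00 for the timestamps in the tracker entry above.
	created := 19.0           // podCreationTimestamp 00:00:19
	running := 25.250685458   // watchObservedRunningTime
	pullStart := 19.565543883 // firstStartedPulling
	pullEnd := 24.31310497    // lastFinishedPulling

	e2e := running - created           // 6.250685458s == podStartE2EDuration
	slo := e2e - (pullEnd - pullStart) // 1.503124371s == podStartSLOduration
	fmt.Println(e2e, slo)              // up to float rounding
}
```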
Jul 7 00:00:26.238696 kubelet[2515]: E0707 00:00:26.238651 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:00:26.323294 kubelet[2515]: E0707 00:00:26.323262 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:00:26.323294 kubelet[2515]: W0707 00:00:26.323282 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:00:26.323294 kubelet[2515]: E0707 00:00:26.323300 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three kubelet messages repeat as the plugin probe retries, through Jul 7 00:00:26.340965]
Jul 7 00:00:26.549729 containerd[1456]: time="2025-07-07T00:00:26.549580790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:26.550659 containerd[1456]: time="2025-07-07T00:00:26.550576988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 7 00:00:26.552601 containerd[1456]: time="2025-07-07T00:00:26.552559185Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:26.554751 containerd[1456]: time="2025-07-07T00:00:26.554676466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:00:26.555344 containerd[1456]: time="2025-07-07T00:00:26.555306074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.242059765s"
Jul 7 00:00:26.555396 containerd[1456]: time="2025-07-07T00:00:26.555348994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 7 00:00:26.560110 containerd[1456]: time="2025-07-07T00:00:26.560078229Z" level=info msg="CreateContainer within sandbox \"28d147800873504c8f672346817311af1614b0ab9b2a7b7fccbe35479143759b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 7 00:00:26.576145 containerd[1456]: time="2025-07-07T00:00:26.576087469Z" level=info msg="CreateContainer within sandbox \"28d147800873504c8f672346817311af1614b0ab9b2a7b7fccbe35479143759b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9c6d65011f5cf7c56d0d313e91e8372a49502de19089de57527c11b68110725f\""
Jul 7 00:00:26.576730 containerd[1456]: time="2025-07-07T00:00:26.576656352Z" level=info msg="StartContainer for \"9c6d65011f5cf7c56d0d313e91e8372a49502de19089de57527c11b68110725f\""
Jul 7 00:00:26.622161 systemd[1]: Started cri-containerd-9c6d65011f5cf7c56d0d313e91e8372a49502de19089de57527c11b68110725f.scope - libcontainer container 9c6d65011f5cf7c56d0d313e91e8372a49502de19089de57527c11b68110725f.
Jul 7 00:00:26.659723 containerd[1456]: time="2025-07-07T00:00:26.659654643Z" level=info msg="StartContainer for \"9c6d65011f5cf7c56d0d313e91e8372a49502de19089de57527c11b68110725f\" returns successfully"
Jul 7 00:00:26.670334 systemd[1]: cri-containerd-9c6d65011f5cf7c56d0d313e91e8372a49502de19089de57527c11b68110725f.scope: Deactivated successfully.
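The flexvol-driver container started here runs Calico's pod2daemon-flexvol image, which installs the uds driver binary into the nodeagent~uds plugin directory probed above; that is consistent with the FlexVolume errors no longer recurring after this point in the log. For reference, a minimal sketch of the init handshake such a driver must answer, assuming the FlexVolume v1 call convention (subcommand in os.Args[1], JSON status on stdout); the real uds driver does considerably more than this:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON reply shape FlexVolume drivers print.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Any well-formed reply here ends the "unexpected end of
		// JSON input" probe errors seen earlier in this log.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
		os.Exit(1)
	}
}
```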
Jul 7 00:00:27.020935 containerd[1456]: time="2025-07-07T00:00:27.018652680Z" level=info msg="shim disconnected" id=9c6d65011f5cf7c56d0d313e91e8372a49502de19089de57527c11b68110725f namespace=k8s.io Jul 7 00:00:27.020935 containerd[1456]: time="2025-07-07T00:00:27.020918320Z" level=warning msg="cleaning up after shim disconnected" id=9c6d65011f5cf7c56d0d313e91e8372a49502de19089de57527c11b68110725f namespace=k8s.io Jul 7 00:00:27.020935 containerd[1456]: time="2025-07-07T00:00:27.020928950Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:00:27.185643 kubelet[2515]: E0707 00:00:27.185301 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnpjz" podUID="febfaed7-7f45-4497-b75d-f0ee8f991481" Jul 7 00:00:27.241436 kubelet[2515]: E0707 00:00:27.241390 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:27.242971 containerd[1456]: time="2025-07-07T00:00:27.242900269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 00:00:27.571194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c6d65011f5cf7c56d0d313e91e8372a49502de19089de57527c11b68110725f-rootfs.mount: Deactivated successfully. Jul 7 00:00:29.185462 kubelet[2515]: E0707 00:00:29.185402 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnpjz" podUID="febfaed7-7f45-4497-b75d-f0ee8f991481" Jul 7 00:00:31.068882 containerd[1456]: time="2025-07-07T00:00:31.068816318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:31.069569 containerd[1456]: time="2025-07-07T00:00:31.069520774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 00:00:31.070691 containerd[1456]: time="2025-07-07T00:00:31.070656792Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:31.073014 containerd[1456]: time="2025-07-07T00:00:31.072939230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:31.073554 containerd[1456]: time="2025-07-07T00:00:31.073502380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.830563207s" Jul 7 00:00:31.073554 containerd[1456]: time="2025-07-07T00:00:31.073536824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 00:00:31.077871 containerd[1456]: 
time="2025-07-07T00:00:31.077834123Z" level=info msg="CreateContainer within sandbox \"28d147800873504c8f672346817311af1614b0ab9b2a7b7fccbe35479143759b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 00:00:31.093498 containerd[1456]: time="2025-07-07T00:00:31.093453917Z" level=info msg="CreateContainer within sandbox \"28d147800873504c8f672346817311af1614b0ab9b2a7b7fccbe35479143759b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"54e3824c7134001097a243fe0ea6ada027f292a4f1138faa3912cb85d387f7d6\"" Jul 7 00:00:31.093985 containerd[1456]: time="2025-07-07T00:00:31.093939821Z" level=info msg="StartContainer for \"54e3824c7134001097a243fe0ea6ada027f292a4f1138faa3912cb85d387f7d6\"" Jul 7 00:00:31.128081 systemd[1]: Started cri-containerd-54e3824c7134001097a243fe0ea6ada027f292a4f1138faa3912cb85d387f7d6.scope - libcontainer container 54e3824c7134001097a243fe0ea6ada027f292a4f1138faa3912cb85d387f7d6. Jul 7 00:00:31.176267 containerd[1456]: time="2025-07-07T00:00:31.176222774Z" level=info msg="StartContainer for \"54e3824c7134001097a243fe0ea6ada027f292a4f1138faa3912cb85d387f7d6\" returns successfully" Jul 7 00:00:31.185158 kubelet[2515]: E0707 00:00:31.185116 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnpjz" podUID="febfaed7-7f45-4497-b75d-f0ee8f991481" Jul 7 00:00:32.440028 systemd[1]: cri-containerd-54e3824c7134001097a243fe0ea6ada027f292a4f1138faa3912cb85d387f7d6.scope: Deactivated successfully. Jul 7 00:00:32.461381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54e3824c7134001097a243fe0ea6ada027f292a4f1138faa3912cb85d387f7d6-rootfs.mount: Deactivated successfully. Jul 7 00:00:32.518635 kubelet[2515]: I0707 00:00:32.518594 2515 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 00:00:32.741189 containerd[1456]: time="2025-07-07T00:00:32.740733013Z" level=info msg="shim disconnected" id=54e3824c7134001097a243fe0ea6ada027f292a4f1138faa3912cb85d387f7d6 namespace=k8s.io Jul 7 00:00:32.741189 containerd[1456]: time="2025-07-07T00:00:32.740797925Z" level=warning msg="cleaning up after shim disconnected" id=54e3824c7134001097a243fe0ea6ada027f292a4f1138faa3912cb85d387f7d6 namespace=k8s.io Jul 7 00:00:32.741189 containerd[1456]: time="2025-07-07T00:00:32.740808716Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:00:32.771216 systemd[1]: Created slice kubepods-besteffort-pod321b5fcc_dee8_4254_8be2_33315ade1aed.slice - libcontainer container kubepods-besteffort-pod321b5fcc_dee8_4254_8be2_33315ade1aed.slice. 
Jul 7 00:00:32.774147 kubelet[2515]: I0707 00:00:32.774108 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e866b654-c1cc-4469-bdb5-43648247fb5f-config-volume\") pod \"coredns-674b8bbfcf-cd8nn\" (UID: \"e866b654-c1cc-4469-bdb5-43648247fb5f\") " pod="kube-system/coredns-674b8bbfcf-cd8nn" Jul 7 00:00:32.774332 kubelet[2515]: I0707 00:00:32.774307 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f9c9236a-e34c-48ed-9633-47af4ea04d91-goldmane-key-pair\") pod \"goldmane-768f4c5c69-6c7vz\" (UID: \"f9c9236a-e34c-48ed-9633-47af4ea04d91\") " pod="calico-system/goldmane-768f4c5c69-6c7vz" Jul 7 00:00:32.774431 kubelet[2515]: I0707 00:00:32.774413 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/321b5fcc-dee8-4254-8be2-33315ade1aed-whisker-backend-key-pair\") pod \"whisker-65d9dcd5c4-rvpmp\" (UID: \"321b5fcc-dee8-4254-8be2-33315ade1aed\") " pod="calico-system/whisker-65d9dcd5c4-rvpmp" Jul 7 00:00:32.775613 kubelet[2515]: I0707 00:00:32.774521 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/321b5fcc-dee8-4254-8be2-33315ade1aed-whisker-ca-bundle\") pod \"whisker-65d9dcd5c4-rvpmp\" (UID: \"321b5fcc-dee8-4254-8be2-33315ade1aed\") " pod="calico-system/whisker-65d9dcd5c4-rvpmp" Jul 7 00:00:32.775613 kubelet[2515]: I0707 00:00:32.774555 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3661e791-5a5a-4263-b5c4-3e2adfeb5eb7-tigera-ca-bundle\") pod \"calico-kube-controllers-dbc45fd6f-nj8xd\" (UID: \"3661e791-5a5a-4263-b5c4-3e2adfeb5eb7\") " pod="calico-system/calico-kube-controllers-dbc45fd6f-nj8xd" Jul 7 00:00:32.775613 kubelet[2515]: I0707 00:00:32.774577 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj67b\" (UniqueName: \"kubernetes.io/projected/3661e791-5a5a-4263-b5c4-3e2adfeb5eb7-kube-api-access-zj67b\") pod \"calico-kube-controllers-dbc45fd6f-nj8xd\" (UID: \"3661e791-5a5a-4263-b5c4-3e2adfeb5eb7\") " pod="calico-system/calico-kube-controllers-dbc45fd6f-nj8xd" Jul 7 00:00:32.775613 kubelet[2515]: I0707 00:00:32.774599 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n562t\" (UniqueName: \"kubernetes.io/projected/321b5fcc-dee8-4254-8be2-33315ade1aed-kube-api-access-n562t\") pod \"whisker-65d9dcd5c4-rvpmp\" (UID: \"321b5fcc-dee8-4254-8be2-33315ade1aed\") " pod="calico-system/whisker-65d9dcd5c4-rvpmp" Jul 7 00:00:32.775613 kubelet[2515]: I0707 00:00:32.774621 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c9236a-e34c-48ed-9633-47af4ea04d91-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-6c7vz\" (UID: \"f9c9236a-e34c-48ed-9633-47af4ea04d91\") " pod="calico-system/goldmane-768f4c5c69-6c7vz" Jul 7 00:00:32.777125 kubelet[2515]: I0707 00:00:32.774641 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fgmm\" (UniqueName: 
\"kubernetes.io/projected/d92e234f-52f0-4d16-a111-2199fe918193-kube-api-access-8fgmm\") pod \"coredns-674b8bbfcf-gx8x8\" (UID: \"d92e234f-52f0-4d16-a111-2199fe918193\") " pod="kube-system/coredns-674b8bbfcf-gx8x8" Jul 7 00:00:32.777125 kubelet[2515]: I0707 00:00:32.774666 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c9236a-e34c-48ed-9633-47af4ea04d91-config\") pod \"goldmane-768f4c5c69-6c7vz\" (UID: \"f9c9236a-e34c-48ed-9633-47af4ea04d91\") " pod="calico-system/goldmane-768f4c5c69-6c7vz" Jul 7 00:00:32.777125 kubelet[2515]: I0707 00:00:32.774686 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hm7f\" (UniqueName: \"kubernetes.io/projected/f9c9236a-e34c-48ed-9633-47af4ea04d91-kube-api-access-8hm7f\") pod \"goldmane-768f4c5c69-6c7vz\" (UID: \"f9c9236a-e34c-48ed-9633-47af4ea04d91\") " pod="calico-system/goldmane-768f4c5c69-6c7vz" Jul 7 00:00:32.777125 kubelet[2515]: I0707 00:00:32.774709 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwsv2\" (UniqueName: \"kubernetes.io/projected/e866b654-c1cc-4469-bdb5-43648247fb5f-kube-api-access-vwsv2\") pod \"coredns-674b8bbfcf-cd8nn\" (UID: \"e866b654-c1cc-4469-bdb5-43648247fb5f\") " pod="kube-system/coredns-674b8bbfcf-cd8nn" Jul 7 00:00:32.777125 kubelet[2515]: I0707 00:00:32.774731 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d92e234f-52f0-4d16-a111-2199fe918193-config-volume\") pod \"coredns-674b8bbfcf-gx8x8\" (UID: \"d92e234f-52f0-4d16-a111-2199fe918193\") " pod="kube-system/coredns-674b8bbfcf-gx8x8" Jul 7 00:00:32.782715 systemd[1]: Created slice kubepods-burstable-pode866b654_c1cc_4469_bdb5_43648247fb5f.slice - libcontainer container kubepods-burstable-pode866b654_c1cc_4469_bdb5_43648247fb5f.slice. Jul 7 00:00:32.788290 systemd[1]: Created slice kubepods-besteffort-podf9c9236a_e34c_48ed_9633_47af4ea04d91.slice - libcontainer container kubepods-besteffort-podf9c9236a_e34c_48ed_9633_47af4ea04d91.slice. Jul 7 00:00:32.796275 systemd[1]: Created slice kubepods-besteffort-pod3661e791_5a5a_4263_b5c4_3e2adfeb5eb7.slice - libcontainer container kubepods-besteffort-pod3661e791_5a5a_4263_b5c4_3e2adfeb5eb7.slice. Jul 7 00:00:32.801995 systemd[1]: Created slice kubepods-besteffort-pod573569b9_db50_4d0d_a4f3_ce9219aa836d.slice - libcontainer container kubepods-besteffort-pod573569b9_db50_4d0d_a4f3_ce9219aa836d.slice. Jul 7 00:00:32.806692 systemd[1]: Created slice kubepods-besteffort-pod6154d981_1514_483e_9a6f_b65a800f05e0.slice - libcontainer container kubepods-besteffort-pod6154d981_1514_483e_9a6f_b65a800f05e0.slice. Jul 7 00:00:32.812756 systemd[1]: Created slice kubepods-burstable-podd92e234f_52f0_4d16_a111_2199fe918193.slice - libcontainer container kubepods-burstable-podd92e234f_52f0_4d16_a111_2199fe918193.slice. 
Jul 7 00:00:32.876069 kubelet[2515]: I0707 00:00:32.876021 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/573569b9-db50-4d0d-a4f3-ce9219aa836d-calico-apiserver-certs\") pod \"calico-apiserver-5ffb9474b-prlmm\" (UID: \"573569b9-db50-4d0d-a4f3-ce9219aa836d\") " pod="calico-apiserver/calico-apiserver-5ffb9474b-prlmm" Jul 7 00:00:32.876383 kubelet[2515]: I0707 00:00:32.876250 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6154d981-1514-483e-9a6f-b65a800f05e0-calico-apiserver-certs\") pod \"calico-apiserver-5ffb9474b-k66n7\" (UID: \"6154d981-1514-483e-9a6f-b65a800f05e0\") " pod="calico-apiserver/calico-apiserver-5ffb9474b-k66n7" Jul 7 00:00:32.876459 kubelet[2515]: I0707 00:00:32.876446 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgk5p\" (UniqueName: \"kubernetes.io/projected/573569b9-db50-4d0d-a4f3-ce9219aa836d-kube-api-access-fgk5p\") pod \"calico-apiserver-5ffb9474b-prlmm\" (UID: \"573569b9-db50-4d0d-a4f3-ce9219aa836d\") " pod="calico-apiserver/calico-apiserver-5ffb9474b-prlmm" Jul 7 00:00:32.876573 kubelet[2515]: I0707 00:00:32.876555 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jp67\" (UniqueName: \"kubernetes.io/projected/6154d981-1514-483e-9a6f-b65a800f05e0-kube-api-access-4jp67\") pod \"calico-apiserver-5ffb9474b-k66n7\" (UID: \"6154d981-1514-483e-9a6f-b65a800f05e0\") " pod="calico-apiserver/calico-apiserver-5ffb9474b-k66n7" Jul 7 00:00:33.080819 containerd[1456]: time="2025-07-07T00:00:33.080695006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65d9dcd5c4-rvpmp,Uid:321b5fcc-dee8-4254-8be2-33315ade1aed,Namespace:calico-system,Attempt:0,}" Jul 7 00:00:33.087055 kubelet[2515]: E0707 00:00:33.087017 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:33.087658 containerd[1456]: time="2025-07-07T00:00:33.087614886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cd8nn,Uid:e866b654-c1cc-4469-bdb5-43648247fb5f,Namespace:kube-system,Attempt:0,}" Jul 7 00:00:33.094343 containerd[1456]: time="2025-07-07T00:00:33.094293322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6c7vz,Uid:f9c9236a-e34c-48ed-9633-47af4ea04d91,Namespace:calico-system,Attempt:0,}" Jul 7 00:00:33.100043 containerd[1456]: time="2025-07-07T00:00:33.099999919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dbc45fd6f-nj8xd,Uid:3661e791-5a5a-4263-b5c4-3e2adfeb5eb7,Namespace:calico-system,Attempt:0,}" Jul 7 00:00:33.105198 containerd[1456]: time="2025-07-07T00:00:33.105148506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ffb9474b-prlmm,Uid:573569b9-db50-4d0d-a4f3-ce9219aa836d,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:00:33.110392 containerd[1456]: time="2025-07-07T00:00:33.110244004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ffb9474b-k66n7,Uid:6154d981-1514-483e-9a6f-b65a800f05e0,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:00:33.115869 kubelet[2515]: E0707 00:00:33.115638 2515 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:33.120100 containerd[1456]: time="2025-07-07T00:00:33.120057539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gx8x8,Uid:d92e234f-52f0-4d16-a111-2199fe918193,Namespace:kube-system,Attempt:0,}" Jul 7 00:00:33.237532 systemd[1]: Created slice kubepods-besteffort-podfebfaed7_7f45_4497_b75d_f0ee8f991481.slice - libcontainer container kubepods-besteffort-podfebfaed7_7f45_4497_b75d_f0ee8f991481.slice. Jul 7 00:00:33.243462 containerd[1456]: time="2025-07-07T00:00:33.243430134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qnpjz,Uid:febfaed7-7f45-4497-b75d-f0ee8f991481,Namespace:calico-system,Attempt:0,}" Jul 7 00:00:33.257970 containerd[1456]: time="2025-07-07T00:00:33.257832133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 00:00:33.318935 containerd[1456]: time="2025-07-07T00:00:33.318787885Z" level=error msg="Failed to destroy network for sandbox \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.331934 containerd[1456]: time="2025-07-07T00:00:33.331511306Z" level=error msg="encountered an error cleaning up failed sandbox \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.333246 containerd[1456]: time="2025-07-07T00:00:33.333215702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6c7vz,Uid:f9c9236a-e34c-48ed-9633-47af4ea04d91,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.339267 kubelet[2515]: E0707 00:00:33.339225 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.339684 containerd[1456]: time="2025-07-07T00:00:33.339451525Z" level=error msg="Failed to destroy network for sandbox \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.339814 kubelet[2515]: E0707 00:00:33.339664 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-6c7vz" Jul 7 00:00:33.339814 kubelet[2515]: E0707 00:00:33.339771 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-6c7vz" Jul 7 00:00:33.340350 kubelet[2515]: E0707 00:00:33.339930 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-6c7vz_calico-system(f9c9236a-e34c-48ed-9633-47af4ea04d91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-6c7vz_calico-system(f9c9236a-e34c-48ed-9633-47af4ea04d91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-6c7vz" podUID="f9c9236a-e34c-48ed-9633-47af4ea04d91" Jul 7 00:00:33.340517 containerd[1456]: time="2025-07-07T00:00:33.340238757Z" level=error msg="encountered an error cleaning up failed sandbox \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.340884 containerd[1456]: time="2025-07-07T00:00:33.340823517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65d9dcd5c4-rvpmp,Uid:321b5fcc-dee8-4254-8be2-33315ade1aed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.341745 kubelet[2515]: E0707 00:00:33.341700 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.341811 kubelet[2515]: E0707 00:00:33.341773 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65d9dcd5c4-rvpmp" Jul 7 00:00:33.341811 kubelet[2515]: E0707 00:00:33.341797 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65d9dcd5c4-rvpmp" Jul 7 00:00:33.341891 kubelet[2515]: E0707 00:00:33.341846 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65d9dcd5c4-rvpmp_calico-system(321b5fcc-dee8-4254-8be2-33315ade1aed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65d9dcd5c4-rvpmp_calico-system(321b5fcc-dee8-4254-8be2-33315ade1aed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65d9dcd5c4-rvpmp" podUID="321b5fcc-dee8-4254-8be2-33315ade1aed" Jul 7 00:00:33.359017 containerd[1456]: time="2025-07-07T00:00:33.358965934Z" level=error msg="Failed to destroy network for sandbox \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.359649 containerd[1456]: time="2025-07-07T00:00:33.359618752Z" level=error msg="encountered an error cleaning up failed sandbox \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.359765 containerd[1456]: time="2025-07-07T00:00:33.359744008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gx8x8,Uid:d92e234f-52f0-4d16-a111-2199fe918193,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.360150 kubelet[2515]: E0707 00:00:33.360067 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.360376 kubelet[2515]: E0707 00:00:33.360262 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gx8x8" Jul 7 00:00:33.360376 kubelet[2515]: E0707 00:00:33.360304 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gx8x8" Jul 7 00:00:33.360525 kubelet[2515]: E0707 00:00:33.360463 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gx8x8_kube-system(d92e234f-52f0-4d16-a111-2199fe918193)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gx8x8_kube-system(d92e234f-52f0-4d16-a111-2199fe918193)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gx8x8" podUID="d92e234f-52f0-4d16-a111-2199fe918193" Jul 7 00:00:33.364108 containerd[1456]: time="2025-07-07T00:00:33.363872175Z" level=error msg="Failed to destroy network for sandbox \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.364822 containerd[1456]: time="2025-07-07T00:00:33.364797796Z" level=error msg="encountered an error cleaning up failed sandbox \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.364924 containerd[1456]: time="2025-07-07T00:00:33.364904918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ffb9474b-prlmm,Uid:573569b9-db50-4d0d-a4f3-ce9219aa836d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.365259 kubelet[2515]: E0707 00:00:33.365220 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.365505 kubelet[2515]: E0707 00:00:33.365433 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ffb9474b-prlmm" Jul 7 00:00:33.365505 kubelet[2515]: E0707 00:00:33.365463 2515 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ffb9474b-prlmm" Jul 7 00:00:33.366679 kubelet[2515]: E0707 00:00:33.365604 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ffb9474b-prlmm_calico-apiserver(573569b9-db50-4d0d-a4f3-ce9219aa836d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ffb9474b-prlmm_calico-apiserver(573569b9-db50-4d0d-a4f3-ce9219aa836d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ffb9474b-prlmm" podUID="573569b9-db50-4d0d-a4f3-ce9219aa836d" Jul 7 00:00:33.371754 containerd[1456]: time="2025-07-07T00:00:33.371694804Z" level=error msg="Failed to destroy network for sandbox \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.372434 containerd[1456]: time="2025-07-07T00:00:33.372412955Z" level=error msg="encountered an error cleaning up failed sandbox \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.372572 containerd[1456]: time="2025-07-07T00:00:33.372550533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cd8nn,Uid:e866b654-c1cc-4469-bdb5-43648247fb5f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.372818 kubelet[2515]: E0707 00:00:33.372790 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.373045 kubelet[2515]: E0707 00:00:33.373017 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-cd8nn" Jul 7 00:00:33.373154 kubelet[2515]: E0707 00:00:33.373140 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cd8nn" Jul 7 00:00:33.373486 kubelet[2515]: E0707 00:00:33.373267 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cd8nn_kube-system(e866b654-c1cc-4469-bdb5-43648247fb5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cd8nn_kube-system(e866b654-c1cc-4469-bdb5-43648247fb5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cd8nn" podUID="e866b654-c1cc-4469-bdb5-43648247fb5f" Jul 7 00:00:33.373981 containerd[1456]: time="2025-07-07T00:00:33.373954736Z" level=error msg="Failed to destroy network for sandbox \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.376524 containerd[1456]: time="2025-07-07T00:00:33.376481140Z" level=error msg="encountered an error cleaning up failed sandbox \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.376630 containerd[1456]: time="2025-07-07T00:00:33.376589073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ffb9474b-k66n7,Uid:6154d981-1514-483e-9a6f-b65a800f05e0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.377131 kubelet[2515]: E0707 00:00:33.376812 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.377131 kubelet[2515]: E0707 00:00:33.376851 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ffb9474b-k66n7" Jul 7 00:00:33.377131 kubelet[2515]: E0707 00:00:33.376867 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ffb9474b-k66n7" Jul 7 00:00:33.377241 kubelet[2515]: E0707 00:00:33.376908 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ffb9474b-k66n7_calico-apiserver(6154d981-1514-483e-9a6f-b65a800f05e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ffb9474b-k66n7_calico-apiserver(6154d981-1514-483e-9a6f-b65a800f05e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ffb9474b-k66n7" podUID="6154d981-1514-483e-9a6f-b65a800f05e0" Jul 7 00:00:33.382049 containerd[1456]: time="2025-07-07T00:00:33.381991528Z" level=error msg="Failed to destroy network for sandbox \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.382414 containerd[1456]: time="2025-07-07T00:00:33.382389787Z" level=error msg="encountered an error cleaning up failed sandbox \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.382460 containerd[1456]: time="2025-07-07T00:00:33.382436174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dbc45fd6f-nj8xd,Uid:3661e791-5a5a-4263-b5c4-3e2adfeb5eb7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.382660 kubelet[2515]: E0707 00:00:33.382630 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.382730 kubelet[2515]: E0707 00:00:33.382712 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dbc45fd6f-nj8xd" Jul 7 00:00:33.382758 kubelet[2515]: E0707 00:00:33.382736 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dbc45fd6f-nj8xd" Jul 7 00:00:33.382830 kubelet[2515]: E0707 00:00:33.382807 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dbc45fd6f-nj8xd_calico-system(3661e791-5a5a-4263-b5c4-3e2adfeb5eb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dbc45fd6f-nj8xd_calico-system(3661e791-5a5a-4263-b5c4-3e2adfeb5eb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dbc45fd6f-nj8xd" podUID="3661e791-5a5a-4263-b5c4-3e2adfeb5eb7" Jul 7 00:00:33.389142 containerd[1456]: time="2025-07-07T00:00:33.389077931Z" level=error msg="Failed to destroy network for sandbox \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.389457 containerd[1456]: time="2025-07-07T00:00:33.389417982Z" level=error msg="encountered an error cleaning up failed sandbox \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.389554 containerd[1456]: time="2025-07-07T00:00:33.389464759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qnpjz,Uid:febfaed7-7f45-4497-b75d-f0ee8f991481,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.389682 kubelet[2515]: E0707 00:00:33.389645 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:33.389720 kubelet[2515]: E0707 00:00:33.389704 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qnpjz" Jul 7 00:00:33.389745 kubelet[2515]: E0707 00:00:33.389725 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qnpjz" Jul 7 00:00:33.389819 kubelet[2515]: E0707 00:00:33.389791 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qnpjz_calico-system(febfaed7-7f45-4497-b75d-f0ee8f991481)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qnpjz_calico-system(febfaed7-7f45-4497-b75d-f0ee8f991481)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qnpjz" podUID="febfaed7-7f45-4497-b75d-f0ee8f991481" Jul 7 00:00:34.258442 kubelet[2515]: I0707 00:00:34.258392 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Jul 7 00:00:34.259441 kubelet[2515]: I0707 00:00:34.259408 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Jul 7 00:00:34.260916 kubelet[2515]: I0707 00:00:34.260879 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Jul 7 00:00:34.264939 kubelet[2515]: I0707 00:00:34.264913 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Jul 7 00:00:34.267676 containerd[1456]: time="2025-07-07T00:00:34.267421729Z" level=info msg="StopPodSandbox for \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\"" Jul 7 00:00:34.267676 containerd[1456]: time="2025-07-07T00:00:34.267471072Z" level=info msg="StopPodSandbox for \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\"" Jul 7 00:00:34.268034 containerd[1456]: time="2025-07-07T00:00:34.267426278Z" level=info msg="StopPodSandbox for \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\"" Jul 7 00:00:34.268990 containerd[1456]: time="2025-07-07T00:00:34.267445995Z" level=info msg="StopPodSandbox for \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\"" Jul 7 00:00:34.269128 kubelet[2515]: I0707 00:00:34.268248 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Jul 7 00:00:34.269170 containerd[1456]: time="2025-07-07T00:00:34.269018442Z" level=info msg="Ensure that sandbox 
7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a in task-service has been cleanup successfully" Jul 7 00:00:34.269170 containerd[1456]: time="2025-07-07T00:00:34.269036437Z" level=info msg="Ensure that sandbox 00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690 in task-service has been cleanup successfully" Jul 7 00:00:34.269292 containerd[1456]: time="2025-07-07T00:00:34.269260869Z" level=info msg="Ensure that sandbox f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958 in task-service has been cleanup successfully" Jul 7 00:00:34.269470 containerd[1456]: time="2025-07-07T00:00:34.269432732Z" level=info msg="Ensure that sandbox 26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a in task-service has been cleanup successfully" Jul 7 00:00:34.275027 containerd[1456]: time="2025-07-07T00:00:34.274990617Z" level=info msg="StopPodSandbox for \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\"" Jul 7 00:00:34.275248 containerd[1456]: time="2025-07-07T00:00:34.275217655Z" level=info msg="Ensure that sandbox b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa in task-service has been cleanup successfully" Jul 7 00:00:34.277340 kubelet[2515]: I0707 00:00:34.276824 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Jul 7 00:00:34.277533 containerd[1456]: time="2025-07-07T00:00:34.277508384Z" level=info msg="StopPodSandbox for \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\"" Jul 7 00:00:34.278600 containerd[1456]: time="2025-07-07T00:00:34.278580781Z" level=info msg="Ensure that sandbox 5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb in task-service has been cleanup successfully" Jul 7 00:00:34.281001 kubelet[2515]: I0707 00:00:34.280984 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Jul 7 00:00:34.284569 containerd[1456]: time="2025-07-07T00:00:34.284538529Z" level=info msg="StopPodSandbox for \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\"" Jul 7 00:00:34.284874 containerd[1456]: time="2025-07-07T00:00:34.284762650Z" level=info msg="Ensure that sandbox 939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac in task-service has been cleanup successfully" Jul 7 00:00:34.287760 kubelet[2515]: I0707 00:00:34.287738 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Jul 7 00:00:34.289373 containerd[1456]: time="2025-07-07T00:00:34.288538724Z" level=info msg="StopPodSandbox for \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\"" Jul 7 00:00:34.289373 containerd[1456]: time="2025-07-07T00:00:34.288786210Z" level=info msg="Ensure that sandbox d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97 in task-service has been cleanup successfully" Jul 7 00:00:34.342141 containerd[1456]: time="2025-07-07T00:00:34.341988373Z" level=error msg="StopPodSandbox for \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\" failed" error="failed to destroy network for sandbox \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
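Note: every RunPodSandbox and StopPodSandbox failure in this stretch reduces to the same missing file. The Calico CNI plugin resolves the node name from /var/lib/calico/nodename, which only a running calico/node container writes, so both network add and delete fail until that pod is up — hence the kubelet's retry and cleanup loop above. A simplified Go stand-in for that lookup, not the actual plugin code:

    // nodename_gate.go: the stat that fails in every sandbox error above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func nodenameFromFile(path string) (string, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            // Same remediation hint the plugin emits in the log.
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := nodenameFromFile("/var/lib/calico/nodename")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("calico node name:", name)
    }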
Jul 7 00:00:34.342349 kubelet[2515]: E0707 00:00:34.342273 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Jul 7 00:00:34.342392 kubelet[2515]: E0707 00:00:34.342331 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb"} Jul 7 00:00:34.342419 kubelet[2515]: E0707 00:00:34.342395 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"573569b9-db50-4d0d-a4f3-ce9219aa836d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:34.342493 kubelet[2515]: E0707 00:00:34.342424 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"573569b9-db50-4d0d-a4f3-ce9219aa836d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ffb9474b-prlmm" podUID="573569b9-db50-4d0d-a4f3-ce9219aa836d" Jul 7 00:00:34.342745 containerd[1456]: time="2025-07-07T00:00:34.342702246Z" level=error msg="StopPodSandbox for \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\" failed" error="failed to destroy network for sandbox \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.343097 kubelet[2515]: E0707 00:00:34.343027 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Jul 7 00:00:34.343097 kubelet[2515]: E0707 00:00:34.343059 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a"} Jul 7 00:00:34.343097 kubelet[2515]: E0707 00:00:34.343081 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"321b5fcc-dee8-4254-8be2-33315ade1aed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:34.343203 kubelet[2515]: E0707 00:00:34.343102 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"321b5fcc-dee8-4254-8be2-33315ade1aed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65d9dcd5c4-rvpmp" podUID="321b5fcc-dee8-4254-8be2-33315ade1aed" Jul 7 00:00:34.345490 containerd[1456]: time="2025-07-07T00:00:34.345452279Z" level=error msg="StopPodSandbox for \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\" failed" error="failed to destroy network for sandbox \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.345936 kubelet[2515]: E0707 00:00:34.345771 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Jul 7 00:00:34.345936 kubelet[2515]: E0707 00:00:34.345836 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a"} Jul 7 00:00:34.345936 kubelet[2515]: E0707 00:00:34.345875 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"febfaed7-7f45-4497-b75d-f0ee8f991481\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:34.345936 kubelet[2515]: E0707 00:00:34.345901 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"febfaed7-7f45-4497-b75d-f0ee8f991481\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qnpjz" podUID="febfaed7-7f45-4497-b75d-f0ee8f991481" Jul 7 00:00:34.349506 containerd[1456]: time="2025-07-07T00:00:34.349110912Z" level=error msg="StopPodSandbox for \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\" failed" error="failed to 
destroy network for sandbox \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.349648 kubelet[2515]: E0707 00:00:34.349362 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Jul 7 00:00:34.349648 kubelet[2515]: E0707 00:00:34.349394 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa"} Jul 7 00:00:34.349648 kubelet[2515]: E0707 00:00:34.349416 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e866b654-c1cc-4469-bdb5-43648247fb5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:34.349648 kubelet[2515]: E0707 00:00:34.349442 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e866b654-c1cc-4469-bdb5-43648247fb5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cd8nn" podUID="e866b654-c1cc-4469-bdb5-43648247fb5f" Jul 7 00:00:34.350861 containerd[1456]: time="2025-07-07T00:00:34.350837490Z" level=error msg="StopPodSandbox for \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\" failed" error="failed to destroy network for sandbox \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.351276 kubelet[2515]: E0707 00:00:34.351169 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Jul 7 00:00:34.351276 kubelet[2515]: E0707 00:00:34.351212 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958"} Jul 7 00:00:34.351276 kubelet[2515]: E0707 00:00:34.351236 2515 
kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d92e234f-52f0-4d16-a111-2199fe918193\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:34.351276 kubelet[2515]: E0707 00:00:34.351254 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d92e234f-52f0-4d16-a111-2199fe918193\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gx8x8" podUID="d92e234f-52f0-4d16-a111-2199fe918193" Jul 7 00:00:34.354385 containerd[1456]: time="2025-07-07T00:00:34.354327405Z" level=error msg="StopPodSandbox for \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\" failed" error="failed to destroy network for sandbox \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.354884 kubelet[2515]: E0707 00:00:34.354836 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Jul 7 00:00:34.355074 kubelet[2515]: E0707 00:00:34.355049 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690"} Jul 7 00:00:34.355186 kubelet[2515]: E0707 00:00:34.355169 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6154d981-1514-483e-9a6f-b65a800f05e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:34.355476 kubelet[2515]: E0707 00:00:34.355378 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6154d981-1514-483e-9a6f-b65a800f05e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ffb9474b-k66n7" 
podUID="6154d981-1514-483e-9a6f-b65a800f05e0" Jul 7 00:00:34.357407 containerd[1456]: time="2025-07-07T00:00:34.357357846Z" level=error msg="StopPodSandbox for \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\" failed" error="failed to destroy network for sandbox \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.357716 kubelet[2515]: E0707 00:00:34.357684 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Jul 7 00:00:34.357776 kubelet[2515]: E0707 00:00:34.357729 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac"} Jul 7 00:00:34.357776 kubelet[2515]: E0707 00:00:34.357756 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f9c9236a-e34c-48ed-9633-47af4ea04d91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:34.357863 kubelet[2515]: E0707 00:00:34.357778 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f9c9236a-e34c-48ed-9633-47af4ea04d91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-6c7vz" podUID="f9c9236a-e34c-48ed-9633-47af4ea04d91" Jul 7 00:00:34.362965 containerd[1456]: time="2025-07-07T00:00:34.362904410Z" level=error msg="StopPodSandbox for \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\" failed" error="failed to destroy network for sandbox \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:00:34.363154 kubelet[2515]: E0707 00:00:34.363113 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Jul 7 00:00:34.363317 kubelet[2515]: E0707 
00:00:34.363167 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97"} Jul 7 00:00:34.363317 kubelet[2515]: E0707 00:00:34.363206 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3661e791-5a5a-4263-b5c4-3e2adfeb5eb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:00:34.363317 kubelet[2515]: E0707 00:00:34.363236 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3661e791-5a5a-4263-b5c4-3e2adfeb5eb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dbc45fd6f-nj8xd" podUID="3661e791-5a5a-4263-b5c4-3e2adfeb5eb7" Jul 7 00:00:37.065334 systemd[1]: Started sshd@7-10.0.0.146:22-10.0.0.1:43044.service - OpenSSH per-connection server daemon (10.0.0.1:43044). Jul 7 00:00:37.109910 sshd[3796]: Accepted publickey for core from 10.0.0.1 port 43044 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:00:37.111705 sshd[3796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:37.116256 systemd-logind[1440]: New session 8 of user core. Jul 7 00:00:37.124055 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:00:37.241414 sshd[3796]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:37.245689 systemd[1]: sshd@7-10.0.0.146:22-10.0.0.1:43044.service: Deactivated successfully. Jul 7 00:00:37.247631 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:00:37.248474 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:00:37.249436 systemd-logind[1440]: Removed session 8. Jul 7 00:00:41.706108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3939841607.mount: Deactivated successfully. Jul 7 00:00:42.252748 systemd[1]: Started sshd@8-10.0.0.146:22-10.0.0.1:53338.service - OpenSSH per-connection server daemon (10.0.0.1:53338). 
Jul 7 00:00:42.698432 containerd[1456]: time="2025-07-07T00:00:42.698367679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:42.702177 containerd[1456]: time="2025-07-07T00:00:42.701587399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 00:00:42.714705 containerd[1456]: time="2025-07-07T00:00:42.714659539Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:42.718533 containerd[1456]: time="2025-07-07T00:00:42.718485126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:42.719147 containerd[1456]: time="2025-07-07T00:00:42.719120380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 9.461249826s" Jul 7 00:00:42.719206 containerd[1456]: time="2025-07-07T00:00:42.719152441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 00:00:42.731014 containerd[1456]: time="2025-07-07T00:00:42.730970645Z" level=info msg="CreateContainer within sandbox \"28d147800873504c8f672346817311af1614b0ab9b2a7b7fccbe35479143759b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 00:00:42.743577 sshd[3819]: Accepted publickey for core from 10.0.0.1 port 53338 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:00:42.746097 sshd[3819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:42.753178 systemd-logind[1440]: New session 9 of user core. Jul 7 00:00:42.757185 containerd[1456]: time="2025-07-07T00:00:42.757149781Z" level=info msg="CreateContainer within sandbox \"28d147800873504c8f672346817311af1614b0ab9b2a7b7fccbe35479143759b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"43b25d99143c82ca44a74dfc1f2d5663af1712949296b9dd1de8df11f56c5202\"" Jul 7 00:00:42.757749 containerd[1456]: time="2025-07-07T00:00:42.757690517Z" level=info msg="StartContainer for \"43b25d99143c82ca44a74dfc1f2d5663af1712949296b9dd1de8df11f56c5202\"" Jul 7 00:00:42.761104 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:00:42.827204 systemd[1]: Started cri-containerd-43b25d99143c82ca44a74dfc1f2d5663af1712949296b9dd1de8df11f56c5202.scope - libcontainer container 43b25d99143c82ca44a74dfc1f2d5663af1712949296b9dd1de8df11f56c5202. Jul 7 00:00:43.071164 containerd[1456]: time="2025-07-07T00:00:43.071033682Z" level=info msg="StartContainer for \"43b25d99143c82ca44a74dfc1f2d5663af1712949296b9dd1de8df11f56c5202\" returns successfully" Jul 7 00:00:43.087313 sshd[3819]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:43.091880 systemd[1]: sshd@8-10.0.0.146:22-10.0.0.1:53338.service: Deactivated successfully. Jul 7 00:00:43.094136 systemd[1]: session-9.scope: Deactivated successfully. 
Jul 7 00:00:43.094776 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:00:43.095771 systemd-logind[1440]: Removed session 9. Jul 7 00:00:43.101593 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 00:00:43.101674 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 7 00:00:43.241514 containerd[1456]: time="2025-07-07T00:00:43.241468995Z" level=info msg="StopPodSandbox for \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\"" Jul 7 00:00:43.338807 kubelet[2515]: I0707 00:00:43.338226 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t5f6f" podStartSLOduration=1.477525649 podStartE2EDuration="24.338211036s" podCreationTimestamp="2025-07-07 00:00:19 +0000 UTC" firstStartedPulling="2025-07-07 00:00:19.859207835 +0000 UTC m=+16.775940730" lastFinishedPulling="2025-07-07 00:00:42.719893222 +0000 UTC m=+39.636626117" observedRunningTime="2025-07-07 00:00:43.335123055 +0000 UTC m=+40.251855951" watchObservedRunningTime="2025-07-07 00:00:43.338211036 +0000 UTC m=+40.254943932" Jul 7 00:00:43.482571 containerd[1456]: 2025-07-07 00:00:43.326 [INFO][3908] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Jul 7 00:00:43.482571 containerd[1456]: 2025-07-07 00:00:43.327 [INFO][3908] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" iface="eth0" netns="/var/run/netns/cni-81bc717e-7acf-04fc-cf87-52ffdc84cdf4" Jul 7 00:00:43.482571 containerd[1456]: 2025-07-07 00:00:43.328 [INFO][3908] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" iface="eth0" netns="/var/run/netns/cni-81bc717e-7acf-04fc-cf87-52ffdc84cdf4" Jul 7 00:00:43.482571 containerd[1456]: 2025-07-07 00:00:43.328 [INFO][3908] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" iface="eth0" netns="/var/run/netns/cni-81bc717e-7acf-04fc-cf87-52ffdc84cdf4" Jul 7 00:00:43.482571 containerd[1456]: 2025-07-07 00:00:43.328 [INFO][3908] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Jul 7 00:00:43.482571 containerd[1456]: 2025-07-07 00:00:43.328 [INFO][3908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Jul 7 00:00:43.482571 containerd[1456]: 2025-07-07 00:00:43.459 [INFO][3923] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" HandleID="k8s-pod-network.7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Workload="localhost-k8s-whisker--65d9dcd5c4--rvpmp-eth0" Jul 7 00:00:43.482571 containerd[1456]: 2025-07-07 00:00:43.460 [INFO][3923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:43.482571 containerd[1456]: 2025-07-07 00:00:43.460 [INFO][3923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:43.482571 containerd[1456]: 2025-07-07 00:00:43.470 [WARNING][3923] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" HandleID="k8s-pod-network.7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Workload="localhost-k8s-whisker--65d9dcd5c4--rvpmp-eth0" Jul 7 00:00:43.482571 containerd[1456]: 2025-07-07 00:00:43.470 [INFO][3923] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" HandleID="k8s-pod-network.7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Workload="localhost-k8s-whisker--65d9dcd5c4--rvpmp-eth0" Jul 7 00:00:43.482571 containerd[1456]: 2025-07-07 00:00:43.474 [INFO][3923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:43.482571 containerd[1456]: 2025-07-07 00:00:43.478 [INFO][3908] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Jul 7 00:00:43.483553 containerd[1456]: time="2025-07-07T00:00:43.483081644Z" level=info msg="TearDown network for sandbox \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\" successfully" Jul 7 00:00:43.483553 containerd[1456]: time="2025-07-07T00:00:43.483109987Z" level=info msg="StopPodSandbox for \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\" returns successfully" Jul 7 00:00:43.542861 kubelet[2515]: I0707 00:00:43.542808 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/321b5fcc-dee8-4254-8be2-33315ade1aed-whisker-ca-bundle\") pod \"321b5fcc-dee8-4254-8be2-33315ade1aed\" (UID: \"321b5fcc-dee8-4254-8be2-33315ade1aed\") " Jul 7 00:00:43.542861 kubelet[2515]: I0707 00:00:43.542864 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/321b5fcc-dee8-4254-8be2-33315ade1aed-whisker-backend-key-pair\") pod \"321b5fcc-dee8-4254-8be2-33315ade1aed\" (UID: \"321b5fcc-dee8-4254-8be2-33315ade1aed\") " Jul 7 00:00:43.543092 kubelet[2515]: I0707 00:00:43.542902 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n562t\" (UniqueName: \"kubernetes.io/projected/321b5fcc-dee8-4254-8be2-33315ade1aed-kube-api-access-n562t\") pod \"321b5fcc-dee8-4254-8be2-33315ade1aed\" (UID: \"321b5fcc-dee8-4254-8be2-33315ade1aed\") " Jul 7 00:00:43.545194 kubelet[2515]: I0707 00:00:43.545165 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/321b5fcc-dee8-4254-8be2-33315ade1aed-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "321b5fcc-dee8-4254-8be2-33315ade1aed" (UID: "321b5fcc-dee8-4254-8be2-33315ade1aed"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:00:43.551108 kubelet[2515]: I0707 00:00:43.551062 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321b5fcc-dee8-4254-8be2-33315ade1aed-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "321b5fcc-dee8-4254-8be2-33315ade1aed" (UID: "321b5fcc-dee8-4254-8be2-33315ade1aed"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:00:43.551235 kubelet[2515]: I0707 00:00:43.551204 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/321b5fcc-dee8-4254-8be2-33315ade1aed-kube-api-access-n562t" (OuterVolumeSpecName: "kube-api-access-n562t") pod "321b5fcc-dee8-4254-8be2-33315ade1aed" (UID: "321b5fcc-dee8-4254-8be2-33315ade1aed"). InnerVolumeSpecName "kube-api-access-n562t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:00:43.643921 kubelet[2515]: I0707 00:00:43.643870 2515 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/321b5fcc-dee8-4254-8be2-33315ade1aed-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 7 00:00:43.643921 kubelet[2515]: I0707 00:00:43.643905 2515 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/321b5fcc-dee8-4254-8be2-33315ade1aed-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 7 00:00:43.643921 kubelet[2515]: I0707 00:00:43.643913 2515 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n562t\" (UniqueName: \"kubernetes.io/projected/321b5fcc-dee8-4254-8be2-33315ade1aed-kube-api-access-n562t\") on node \"localhost\" DevicePath \"\"" Jul 7 00:00:43.725301 systemd[1]: run-containerd-runc-k8s.io-43b25d99143c82ca44a74dfc1f2d5663af1712949296b9dd1de8df11f56c5202-runc.POwq0M.mount: Deactivated successfully. Jul 7 00:00:43.725424 systemd[1]: run-netns-cni\x2d81bc717e\x2d7acf\x2d04fc\x2dcf87\x2d52ffdc84cdf4.mount: Deactivated successfully. Jul 7 00:00:43.725503 systemd[1]: var-lib-kubelet-pods-321b5fcc\x2ddee8\x2d4254\x2d8be2\x2d33315ade1aed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn562t.mount: Deactivated successfully. Jul 7 00:00:43.725583 systemd[1]: var-lib-kubelet-pods-321b5fcc\x2ddee8\x2d4254\x2d8be2\x2d33315ade1aed-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 00:00:44.325124 systemd[1]: Removed slice kubepods-besteffort-pod321b5fcc_dee8_4254_8be2_33315ade1aed.slice - libcontainer container kubepods-besteffort-pod321b5fcc_dee8_4254_8be2_33315ade1aed.slice. Jul 7 00:00:44.335377 systemd[1]: run-containerd-runc-k8s.io-43b25d99143c82ca44a74dfc1f2d5663af1712949296b9dd1de8df11f56c5202-runc.VTffXK.mount: Deactivated successfully. Jul 7 00:00:44.383744 systemd[1]: Created slice kubepods-besteffort-pode5e7a28a_6998_4881_81d9_5385e9f7751e.slice - libcontainer container kubepods-besteffort-pode5e7a28a_6998_4881_81d9_5385e9f7751e.slice. 
Jul 7 00:00:44.448131 kubelet[2515]: I0707 00:00:44.448085 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6bxb\" (UniqueName: \"kubernetes.io/projected/e5e7a28a-6998-4881-81d9-5385e9f7751e-kube-api-access-l6bxb\") pod \"whisker-bb85c498b-2j25r\" (UID: \"e5e7a28a-6998-4881-81d9-5385e9f7751e\") " pod="calico-system/whisker-bb85c498b-2j25r" Jul 7 00:00:44.448131 kubelet[2515]: I0707 00:00:44.448125 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e5e7a28a-6998-4881-81d9-5385e9f7751e-whisker-backend-key-pair\") pod \"whisker-bb85c498b-2j25r\" (UID: \"e5e7a28a-6998-4881-81d9-5385e9f7751e\") " pod="calico-system/whisker-bb85c498b-2j25r" Jul 7 00:00:44.448562 kubelet[2515]: I0707 00:00:44.448143 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5e7a28a-6998-4881-81d9-5385e9f7751e-whisker-ca-bundle\") pod \"whisker-bb85c498b-2j25r\" (UID: \"e5e7a28a-6998-4881-81d9-5385e9f7751e\") " pod="calico-system/whisker-bb85c498b-2j25r" Jul 7 00:00:44.689545 containerd[1456]: time="2025-07-07T00:00:44.689488882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bb85c498b-2j25r,Uid:e5e7a28a-6998-4881-81d9-5385e9f7751e,Namespace:calico-system,Attempt:0,}" Jul 7 00:00:44.700969 kernel: bpftool[4124]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 00:00:44.815437 systemd-networkd[1387]: calib255beac43c: Link UP Jul 7 00:00:44.815690 systemd-networkd[1387]: calib255beac43c: Gained carrier Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.743 [INFO][4126] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--bb85c498b--2j25r-eth0 whisker-bb85c498b- calico-system e5e7a28a-6998-4881-81d9-5385e9f7751e 1034 0 2025-07-07 00:00:44 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bb85c498b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-bb85c498b-2j25r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib255beac43c [] [] <nil>}} ContainerID="8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" Namespace="calico-system" Pod="whisker-bb85c498b-2j25r" WorkloadEndpoint="localhost-k8s-whisker--bb85c498b--2j25r-" Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.743 [INFO][4126] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" Namespace="calico-system" Pod="whisker-bb85c498b-2j25r" WorkloadEndpoint="localhost-k8s-whisker--bb85c498b--2j25r-eth0" Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.772 [INFO][4140] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" HandleID="k8s-pod-network.8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" Workload="localhost-k8s-whisker--bb85c498b--2j25r-eth0" Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.772 [INFO][4140] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406"
HandleID="k8s-pod-network.8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" Workload="localhost-k8s-whisker--bb85c498b--2j25r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f050), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-bb85c498b-2j25r", "timestamp":"2025-07-07 00:00:44.771991084 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.772 [INFO][4140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.772 [INFO][4140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.772 [INFO][4140] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.778 [INFO][4140] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" host="localhost" Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.784 [INFO][4140] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.788 [INFO][4140] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.790 [INFO][4140] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.791 [INFO][4140] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.791 [INFO][4140] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" host="localhost" Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.793 [INFO][4140] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406 Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.798 [INFO][4140] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" host="localhost" Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.802 [INFO][4140] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" host="localhost" Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.802 [INFO][4140] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" host="localhost" Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.802 [INFO][4140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:00:44.833338 containerd[1456]: 2025-07-07 00:00:44.802 [INFO][4140] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" HandleID="k8s-pod-network.8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" Workload="localhost-k8s-whisker--bb85c498b--2j25r-eth0" Jul 7 00:00:44.835389 containerd[1456]: 2025-07-07 00:00:44.806 [INFO][4126] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" Namespace="calico-system" Pod="whisker-bb85c498b-2j25r" WorkloadEndpoint="localhost-k8s-whisker--bb85c498b--2j25r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bb85c498b--2j25r-eth0", GenerateName:"whisker-bb85c498b-", Namespace:"calico-system", SelfLink:"", UID:"e5e7a28a-6998-4881-81d9-5385e9f7751e", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bb85c498b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-bb85c498b-2j25r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib255beac43c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:44.835389 containerd[1456]: 2025-07-07 00:00:44.806 [INFO][4126] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" Namespace="calico-system" Pod="whisker-bb85c498b-2j25r" WorkloadEndpoint="localhost-k8s-whisker--bb85c498b--2j25r-eth0" Jul 7 00:00:44.835389 containerd[1456]: 2025-07-07 00:00:44.806 [INFO][4126] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib255beac43c ContainerID="8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" Namespace="calico-system" Pod="whisker-bb85c498b-2j25r" WorkloadEndpoint="localhost-k8s-whisker--bb85c498b--2j25r-eth0" Jul 7 00:00:44.835389 containerd[1456]: 2025-07-07 00:00:44.813 [INFO][4126] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" Namespace="calico-system" Pod="whisker-bb85c498b-2j25r" WorkloadEndpoint="localhost-k8s-whisker--bb85c498b--2j25r-eth0" Jul 7 00:00:44.835389 containerd[1456]: 2025-07-07 00:00:44.814 [INFO][4126] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" Namespace="calico-system" Pod="whisker-bb85c498b-2j25r" WorkloadEndpoint="localhost-k8s-whisker--bb85c498b--2j25r-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bb85c498b--2j25r-eth0", GenerateName:"whisker-bb85c498b-", Namespace:"calico-system", SelfLink:"", UID:"e5e7a28a-6998-4881-81d9-5385e9f7751e", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bb85c498b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406", Pod:"whisker-bb85c498b-2j25r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib255beac43c", MAC:"62:b9:f8:8d:66:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:44.835389 containerd[1456]: 2025-07-07 00:00:44.829 [INFO][4126] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406" Namespace="calico-system" Pod="whisker-bb85c498b-2j25r" WorkloadEndpoint="localhost-k8s-whisker--bb85c498b--2j25r-eth0" Jul 7 00:00:44.867683 containerd[1456]: time="2025-07-07T00:00:44.867529155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:44.867931 containerd[1456]: time="2025-07-07T00:00:44.867595249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:44.868679 containerd[1456]: time="2025-07-07T00:00:44.867859796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:44.868880 containerd[1456]: time="2025-07-07T00:00:44.868774264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:44.893082 systemd[1]: Started cri-containerd-8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406.scope - libcontainer container 8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406.
Jul 7 00:00:44.908707 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:00:44.948424 containerd[1456]: time="2025-07-07T00:00:44.948294526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bb85c498b-2j25r,Uid:e5e7a28a-6998-4881-81d9-5385e9f7751e,Namespace:calico-system,Attempt:0,} returns sandbox id \"8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406\"" Jul 7 00:00:44.951228 containerd[1456]: time="2025-07-07T00:00:44.951185255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 00:00:44.959210 systemd-networkd[1387]: vxlan.calico: Link UP Jul 7 00:00:44.959218 systemd-networkd[1387]: vxlan.calico: Gained carrier Jul 7 00:00:45.186408 containerd[1456]: time="2025-07-07T00:00:45.186145143Z" level=info msg="StopPodSandbox for \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\"" Jul 7 00:00:45.188306 kubelet[2515]: I0707 00:00:45.188252 2515 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="321b5fcc-dee8-4254-8be2-33315ade1aed" path="/var/lib/kubelet/pods/321b5fcc-dee8-4254-8be2-33315ade1aed/volumes" Jul 7 00:00:45.391250 containerd[1456]: 2025-07-07 00:00:45.357 [INFO][4251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Jul 7 00:00:45.391250 containerd[1456]: 2025-07-07 00:00:45.357 [INFO][4251] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" iface="eth0" netns="/var/run/netns/cni-89a46af7-6133-0f21-33bf-11111e27592e" Jul 7 00:00:45.391250 containerd[1456]: 2025-07-07 00:00:45.357 [INFO][4251] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" iface="eth0" netns="/var/run/netns/cni-89a46af7-6133-0f21-33bf-11111e27592e" Jul 7 00:00:45.391250 containerd[1456]: 2025-07-07 00:00:45.358 [INFO][4251] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" iface="eth0" netns="/var/run/netns/cni-89a46af7-6133-0f21-33bf-11111e27592e" Jul 7 00:00:45.391250 containerd[1456]: 2025-07-07 00:00:45.358 [INFO][4251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Jul 7 00:00:45.391250 containerd[1456]: 2025-07-07 00:00:45.358 [INFO][4251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Jul 7 00:00:45.391250 containerd[1456]: 2025-07-07 00:00:45.378 [INFO][4284] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" HandleID="k8s-pod-network.f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Workload="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:00:45.391250 containerd[1456]: 2025-07-07 00:00:45.378 [INFO][4284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:45.391250 containerd[1456]: 2025-07-07 00:00:45.378 [INFO][4284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:45.391250 containerd[1456]: 2025-07-07 00:00:45.384 [WARNING][4284] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" HandleID="k8s-pod-network.f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Workload="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:00:45.391250 containerd[1456]: 2025-07-07 00:00:45.384 [INFO][4284] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" HandleID="k8s-pod-network.f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Workload="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:00:45.391250 containerd[1456]: 2025-07-07 00:00:45.385 [INFO][4284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:45.391250 containerd[1456]: 2025-07-07 00:00:45.388 [INFO][4251] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Jul 7 00:00:45.391651 containerd[1456]: time="2025-07-07T00:00:45.391422630Z" level=info msg="TearDown network for sandbox \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\" successfully" Jul 7 00:00:45.391651 containerd[1456]: time="2025-07-07T00:00:45.391455251Z" level=info msg="StopPodSandbox for \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\" returns successfully" Jul 7 00:00:45.391853 kubelet[2515]: E0707 00:00:45.391817 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:45.392310 containerd[1456]: time="2025-07-07T00:00:45.392224135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gx8x8,Uid:d92e234f-52f0-4d16-a111-2199fe918193,Namespace:kube-system,Attempt:1,}" Jul 7 00:00:45.393890 systemd[1]: run-netns-cni\x2d89a46af7\x2d6133\x2d0f21\x2d33bf\x2d11111e27592e.mount: Deactivated successfully. 
Jul 7 00:00:45.512586 systemd-networkd[1387]: calia5be83ab918: Link UP Jul 7 00:00:45.512812 systemd-networkd[1387]: calia5be83ab918: Gained carrier Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.454 [INFO][4292] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0 coredns-674b8bbfcf- kube-system d92e234f-52f0-4d16-a111-2199fe918193 1046 0 2025-07-07 00:00:08 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-gx8x8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia5be83ab918 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] <nil>}} ContainerID="82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gx8x8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gx8x8-" Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.454 [INFO][4292] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gx8x8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.477 [INFO][4306] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" HandleID="k8s-pod-network.82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" Workload="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.477 [INFO][4306] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" HandleID="k8s-pod-network.82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" Workload="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5db0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-gx8x8", "timestamp":"2025-07-07 00:00:45.477719465 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.477 [INFO][4306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.477 [INFO][4306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.478 [INFO][4306] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.484 [INFO][4306] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" host="localhost" Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.488 [INFO][4306] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.493 [INFO][4306] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.494 [INFO][4306] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.496 [INFO][4306] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.496 [INFO][4306] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" host="localhost" Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.497 [INFO][4306] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.503 [INFO][4306] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" host="localhost" Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.507 [INFO][4306] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" host="localhost" Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.507 [INFO][4306] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" host="localhost" Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.507 [INFO][4306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:00:45.526459 containerd[1456]: 2025-07-07 00:00:45.507 [INFO][4306] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" HandleID="k8s-pod-network.82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" Workload="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:00:45.527331 containerd[1456]: 2025-07-07 00:00:45.510 [INFO][4292] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gx8x8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d92e234f-52f0-4d16-a111-2199fe918193", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-gx8x8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5be83ab918", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:45.527331 containerd[1456]: 2025-07-07 00:00:45.510 [INFO][4292] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gx8x8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:00:45.527331 containerd[1456]: 2025-07-07 00:00:45.510 [INFO][4292] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5be83ab918 ContainerID="82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gx8x8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:00:45.527331 containerd[1456]: 2025-07-07 00:00:45.512 [INFO][4292] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gx8x8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:00:45.527331 containerd[1456]:
2025-07-07 00:00:45.513 [INFO][4292] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gx8x8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d92e234f-52f0-4d16-a111-2199fe918193", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e", Pod:"coredns-674b8bbfcf-gx8x8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5be83ab918", MAC:"66:c8:ef:68:c3:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:45.527331 containerd[1456]: 2025-07-07 00:00:45.522 [INFO][4292] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gx8x8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:00:45.546329 containerd[1456]: time="2025-07-07T00:00:45.546171122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:45.546329 containerd[1456]: time="2025-07-07T00:00:45.546252595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:45.546424 containerd[1456]: time="2025-07-07T00:00:45.546275638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:45.547293 containerd[1456]: time="2025-07-07T00:00:45.547200305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:45.566077 systemd[1]: Started cri-containerd-82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e.scope - libcontainer container 82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e.
Jul 7 00:00:45.578309 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:00:45.601637 containerd[1456]: time="2025-07-07T00:00:45.601600861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gx8x8,Uid:d92e234f-52f0-4d16-a111-2199fe918193,Namespace:kube-system,Attempt:1,} returns sandbox id \"82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e\"" Jul 7 00:00:45.602303 kubelet[2515]: E0707 00:00:45.602280 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:45.608468 containerd[1456]: time="2025-07-07T00:00:45.608429918Z" level=info msg="CreateContainer within sandbox \"82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:00:45.624174 containerd[1456]: time="2025-07-07T00:00:45.624130167Z" level=info msg="CreateContainer within sandbox \"82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3db1349af223321d10f130cb604a592ac01c5675455bae5adae0e4b242d8eee0\"" Jul 7 00:00:45.624699 containerd[1456]: time="2025-07-07T00:00:45.624643351Z" level=info msg="StartContainer for \"3db1349af223321d10f130cb604a592ac01c5675455bae5adae0e4b242d8eee0\"" Jul 7 00:00:45.653087 systemd[1]: Started cri-containerd-3db1349af223321d10f130cb604a592ac01c5675455bae5adae0e4b242d8eee0.scope - libcontainer container 3db1349af223321d10f130cb604a592ac01c5675455bae5adae0e4b242d8eee0. Jul 7 00:00:45.681242 containerd[1456]: time="2025-07-07T00:00:45.681190008Z" level=info msg="StartContainer for \"3db1349af223321d10f130cb604a592ac01c5675455bae5adae0e4b242d8eee0\" returns successfully" Jul 7 00:00:46.147074 systemd-networkd[1387]: calib255beac43c: Gained IPv6LL Jul 7 00:00:46.186506 containerd[1456]: time="2025-07-07T00:00:46.186430269Z" level=info msg="StopPodSandbox for \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\"" Jul 7 00:00:46.266917 containerd[1456]: 2025-07-07 00:00:46.232 [INFO][4416] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Jul 7 00:00:46.266917 containerd[1456]: 2025-07-07 00:00:46.232 [INFO][4416] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" iface="eth0" netns="/var/run/netns/cni-b01d7383-567c-b561-dc7b-5add9b2e2f85" Jul 7 00:00:46.266917 containerd[1456]: 2025-07-07 00:00:46.233 [INFO][4416] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" iface="eth0" netns="/var/run/netns/cni-b01d7383-567c-b561-dc7b-5add9b2e2f85" Jul 7 00:00:46.266917 containerd[1456]: 2025-07-07 00:00:46.233 [INFO][4416] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" iface="eth0" netns="/var/run/netns/cni-b01d7383-567c-b561-dc7b-5add9b2e2f85" Jul 7 00:00:46.266917 containerd[1456]: 2025-07-07 00:00:46.233 [INFO][4416] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Jul 7 00:00:46.266917 containerd[1456]: 2025-07-07 00:00:46.233 [INFO][4416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Jul 7 00:00:46.266917 containerd[1456]: 2025-07-07 00:00:46.253 [INFO][4424] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" HandleID="k8s-pod-network.b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Workload="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:00:46.266917 containerd[1456]: 2025-07-07 00:00:46.254 [INFO][4424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:46.266917 containerd[1456]: 2025-07-07 00:00:46.254 [INFO][4424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:46.266917 containerd[1456]: 2025-07-07 00:00:46.259 [WARNING][4424] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" HandleID="k8s-pod-network.b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Workload="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:00:46.266917 containerd[1456]: 2025-07-07 00:00:46.259 [INFO][4424] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" HandleID="k8s-pod-network.b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Workload="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:00:46.266917 containerd[1456]: 2025-07-07 00:00:46.261 [INFO][4424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:46.266917 containerd[1456]: 2025-07-07 00:00:46.263 [INFO][4416] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Jul 7 00:00:46.267325 containerd[1456]: time="2025-07-07T00:00:46.267098197Z" level=info msg="TearDown network for sandbox \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\" successfully" Jul 7 00:00:46.267325 containerd[1456]: time="2025-07-07T00:00:46.267123454Z" level=info msg="StopPodSandbox for \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\" returns successfully" Jul 7 00:00:46.267408 kubelet[2515]: E0707 00:00:46.267387 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:46.268133 containerd[1456]: time="2025-07-07T00:00:46.267791970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cd8nn,Uid:e866b654-c1cc-4469-bdb5-43648247fb5f,Namespace:kube-system,Attempt:1,}" Jul 7 00:00:46.269986 systemd[1]: run-netns-cni\x2db01d7383\x2d567c\x2db561\x2ddc7b\x2d5add9b2e2f85.mount: Deactivated successfully. 
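The kubelet's recurring "Nameserver limits exceeded" error means the node's resolv.conf lists more nameservers than the classic three-entry limit, so kubelet truncates the list it writes into pod sandboxes; here the applied line keeps 1.1.1.1 1.0.0.1 8.8.8.8 and omits the rest. A minimal sketch of that truncation, assuming a plain resolv.conf parser rather than kubelet's actual dns.go code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // the resolv.conf limit kubelet enforces per pod

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            // kubelet logs the same condition and keeps only the first three
            fmt.Printf("omitting %d nameserver(s)\n", len(servers)-maxNameservers)
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }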
Jul 7 00:00:46.326436 kubelet[2515]: E0707 00:00:46.326407 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:46.336072 kubelet[2515]: I0707 00:00:46.335553 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gx8x8" podStartSLOduration=38.335534474 podStartE2EDuration="38.335534474s" podCreationTimestamp="2025-07-07 00:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:00:46.335180148 +0000 UTC m=+43.251913044" watchObservedRunningTime="2025-07-07 00:00:46.335534474 +0000 UTC m=+43.252267369" Jul 7 00:00:46.388046 systemd-networkd[1387]: calif918c5d716f: Link UP Jul 7 00:00:46.388577 systemd-networkd[1387]: calif918c5d716f: Gained carrier Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.309 [INFO][4432] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0 coredns-674b8bbfcf- kube-system e866b654-c1cc-4469-bdb5-43648247fb5f 1057 0 2025-07-07 00:00:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-cd8nn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif918c5d716f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" Namespace="kube-system" Pod="coredns-674b8bbfcf-cd8nn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cd8nn-" Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.310 [INFO][4432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" Namespace="kube-system" Pod="coredns-674b8bbfcf-cd8nn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.344 [INFO][4446] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" HandleID="k8s-pod-network.c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" Workload="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.345 [INFO][4446] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" HandleID="k8s-pod-network.c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" Workload="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-cd8nn", "timestamp":"2025-07-07 00:00:46.344965906 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.345 [INFO][4446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
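The pod_startup_latency_tracker record above is straightforward arithmetic: watchObservedRunningTime 00:00:46.335534474 minus podCreationTimestamp 00:00:08 gives the logged podStartSLOduration of 38.335534474s, and the zero-value pull timestamps (0001-01-01) mean no image pull contributed, which is why podStartE2EDuration is identical. A quick check in Go, with the timestamps copied from the record:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2025-07-07T00:00:08Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-07-07T00:00:46.335534474Z")
        fmt.Println(running.Sub(created)) // 38.335534474s, matching podStartSLOduration
    }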
Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.345 [INFO][4446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.345 [INFO][4446] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.356 [INFO][4446] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" host="localhost" Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.361 [INFO][4446] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.366 [INFO][4446] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.367 [INFO][4446] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.369 [INFO][4446] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.369 [INFO][4446] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" host="localhost" Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.370 [INFO][4446] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911 Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.376 [INFO][4446] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" host="localhost" Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.381 [INFO][4446] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" host="localhost" Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.381 [INFO][4446] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" host="localhost" Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.381 [INFO][4446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
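Every IPAM request in this log follows the same shape: acquire the host-wide IPAM lock, look up this host's block affinity (always 192.168.88.128/26 here), load the block, claim the next free address under a new handle, write the block back to claim the IP, and release the lock. A simplified sketch of that claim loop, assuming an in-memory block rather than Calico's datastore-backed one:

    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    // Toy stand-in for Calico's per-host block; the real one lives in the datastore.
    type block struct {
        cidr      *net.IPNet
        allocated map[string]string // ip -> handle, e.g. "k8s-pod-network.<containerID>"
    }

    var hostIPAMLock sync.Mutex // "About to acquire host-wide IPAM lock."

    func (b *block) assign(handle string) (net.IP, error) {
        hostIPAMLock.Lock()
        defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."
        ip := b.cidr.IP.Mask(b.cidr.Mask)
        for ; b.cidr.Contains(ip); ip = next(ip) {
            if _, taken := b.allocated[ip.String()]; !taken {
                b.allocated[ip.String()] = handle // "Writing block in order to claim IPs"
                return ip, nil
            }
        }
        return nil, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func next(ip net.IP) net.IP {
        out := make(net.IP, len(ip))
        copy(out, ip)
        for i := len(out) - 1; i >= 0; i-- {
            out[i]++
            if out[i] != 0 {
                break
            }
        }
        return out
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.88.128/26")
        b := &block{cidr: cidr, allocated: map[string]string{
            "192.168.88.128": "reserved",
            "192.168.88.129": "whisker",
            "192.168.88.130": "coredns-gx8x8",
        }}
        ip, _ := b.assign("k8s-pod-network.c036b51f7907bf5d")
        fmt.Println(ip) // 192.168.88.131, the next free address, as in the log
    }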
Jul 7 00:00:46.405840 containerd[1456]: 2025-07-07 00:00:46.382 [INFO][4446] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" HandleID="k8s-pod-network.c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" Workload="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:00:46.406992 containerd[1456]: 2025-07-07 00:00:46.385 [INFO][4432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" Namespace="kube-system" Pod="coredns-674b8bbfcf-cd8nn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e866b654-c1cc-4469-bdb5-43648247fb5f", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-cd8nn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif918c5d716f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:46.406992 containerd[1456]: 2025-07-07 00:00:46.385 [INFO][4432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" Namespace="kube-system" Pod="coredns-674b8bbfcf-cd8nn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:00:46.406992 containerd[1456]: 2025-07-07 00:00:46.385 [INFO][4432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif918c5d716f ContainerID="c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" Namespace="kube-system" Pod="coredns-674b8bbfcf-cd8nn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:00:46.406992 containerd[1456]: 2025-07-07 00:00:46.388 [INFO][4432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" Namespace="kube-system" Pod="coredns-674b8bbfcf-cd8nn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:00:46.406992 containerd[1456]: 
2025-07-07 00:00:46.389 [INFO][4432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" Namespace="kube-system" Pod="coredns-674b8bbfcf-cd8nn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e866b654-c1cc-4469-bdb5-43648247fb5f", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911", Pod:"coredns-674b8bbfcf-cd8nn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif918c5d716f", MAC:"32:a9:5c:90:d8:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:46.406992 containerd[1456]: 2025-07-07 00:00:46.400 [INFO][4432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911" Namespace="kube-system" Pod="coredns-674b8bbfcf-cd8nn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:00:46.432412 containerd[1456]: time="2025-07-07T00:00:46.432275117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:46.432546 containerd[1456]: time="2025-07-07T00:00:46.432442482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:46.433257 containerd[1456]: time="2025-07-07T00:00:46.433091630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:46.433257 containerd[1456]: time="2025-07-07T00:00:46.433183954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:46.458134 systemd[1]: Started cri-containerd-c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911.scope - libcontainer container c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911. 
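The four "loading plugin" lines that precede each sandbox start come from the runc v2 shim (runtime=io.containerd.runc.v2) bringing up its ttrpc services; systemd then reports a transient scope unit for the container because the runtime uses the systemd cgroup driver. The unit name seen in the "Started cri-containerd-....scope" lines is purely mechanical:

    package main

    import "fmt"

    // Scope naming as it appears in the journal lines above; the
    // "cri-containerd-" prefix and ".scope" suffix are the CRI defaults.
    func scopeUnit(containerID string) string {
        return fmt.Sprintf("cri-containerd-%s.scope", containerID)
    }

    func main() {
        fmt.Println(scopeUnit("c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911"))
    }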
Jul 7 00:00:46.471621 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:00:46.498317 containerd[1456]: time="2025-07-07T00:00:46.498281290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cd8nn,Uid:e866b654-c1cc-4469-bdb5-43648247fb5f,Namespace:kube-system,Attempt:1,} returns sandbox id \"c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911\"" Jul 7 00:00:46.499357 kubelet[2515]: E0707 00:00:46.499332 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:46.594105 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Jul 7 00:00:46.712086 containerd[1456]: time="2025-07-07T00:00:46.711926775Z" level=info msg="CreateContainer within sandbox \"c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:00:46.754765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount685334120.mount: Deactivated successfully. Jul 7 00:00:46.758843 containerd[1456]: time="2025-07-07T00:00:46.758810484Z" level=info msg="CreateContainer within sandbox \"c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4e920604f37b880c7c10ad7b315c9e5b582e553a0adf397152f2cb6d112761b\"" Jul 7 00:00:46.759650 containerd[1456]: time="2025-07-07T00:00:46.759604455Z" level=info msg="StartContainer for \"d4e920604f37b880c7c10ad7b315c9e5b582e553a0adf397152f2cb6d112761b\"" Jul 7 00:00:46.790424 systemd[1]: Started cri-containerd-d4e920604f37b880c7c10ad7b315c9e5b582e553a0adf397152f2cb6d112761b.scope - libcontainer container d4e920604f37b880c7c10ad7b315c9e5b582e553a0adf397152f2cb6d112761b. 
Jul 7 00:00:46.817876 containerd[1456]: time="2025-07-07T00:00:46.817825016Z" level=info msg="StartContainer for \"d4e920604f37b880c7c10ad7b315c9e5b582e553a0adf397152f2cb6d112761b\" returns successfully" Jul 7 00:00:46.827184 containerd[1456]: time="2025-07-07T00:00:46.827133849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:46.827987 containerd[1456]: time="2025-07-07T00:00:46.827840767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 00:00:46.829048 containerd[1456]: time="2025-07-07T00:00:46.829019470Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:46.832437 containerd[1456]: time="2025-07-07T00:00:46.832410438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:46.833839 containerd[1456]: time="2025-07-07T00:00:46.833481981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.882256209s" Jul 7 00:00:46.833839 containerd[1456]: time="2025-07-07T00:00:46.833510134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 00:00:46.838723 containerd[1456]: time="2025-07-07T00:00:46.838683780Z" level=info msg="CreateContainer within sandbox \"8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 00:00:46.855222 containerd[1456]: time="2025-07-07T00:00:46.855167047Z" level=info msg="CreateContainer within sandbox \"8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"26d58a2bce51d90d3fa2ab2dae09726cd389516b9190c50fc21ccf1b45f21b4d\"" Jul 7 00:00:46.855728 containerd[1456]: time="2025-07-07T00:00:46.855635326Z" level=info msg="StartContainer for \"26d58a2bce51d90d3fa2ab2dae09726cd389516b9190c50fc21ccf1b45f21b4d\"" Jul 7 00:00:46.884083 systemd[1]: Started cri-containerd-26d58a2bce51d90d3fa2ab2dae09726cd389516b9190c50fc21ccf1b45f21b4d.scope - libcontainer container 26d58a2bce51d90d3fa2ab2dae09726cd389516b9190c50fc21ccf1b45f21b4d. 
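The whisker pull above resolves a mutable tag to a content-addressed identity: the "Pulled image" record pins repo tag ghcr.io/flatcar/calico/whisker:v3.30.2 to repo digest sha256:31346d... and image id sha256:eb8f51..., fetching about 4.7 MB (bytes read=4661207) of the 6153902-byte image in roughly 1.88 s, i.e. around 2.5 MB/s for what actually crossed the network. A minimal pull by that digest with the containerd Go client, assuming the v1 client library, the default socket path, and the k8s.io namespace the CRI plugin uses:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pulling by the digest logged above stays reproducible even if the
        // v3.30.2 tag is later moved to point at different content.
        ref := "ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678"
        img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(img.Name(), img.Target().Digest)
    }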
Jul 7 00:00:46.922262 containerd[1456]: time="2025-07-07T00:00:46.922224533Z" level=info msg="StartContainer for \"26d58a2bce51d90d3fa2ab2dae09726cd389516b9190c50fc21ccf1b45f21b4d\" returns successfully" Jul 7 00:00:46.923779 containerd[1456]: time="2025-07-07T00:00:46.923579799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 00:00:47.186318 containerd[1456]: time="2025-07-07T00:00:47.185529641Z" level=info msg="StopPodSandbox for \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\"" Jul 7 00:00:47.262453 containerd[1456]: 2025-07-07 00:00:47.228 [INFO][4605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Jul 7 00:00:47.262453 containerd[1456]: 2025-07-07 00:00:47.228 [INFO][4605] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" iface="eth0" netns="/var/run/netns/cni-ff9c75e4-f459-43ad-2356-128733c85d1d" Jul 7 00:00:47.262453 containerd[1456]: 2025-07-07 00:00:47.229 [INFO][4605] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" iface="eth0" netns="/var/run/netns/cni-ff9c75e4-f459-43ad-2356-128733c85d1d" Jul 7 00:00:47.262453 containerd[1456]: 2025-07-07 00:00:47.229 [INFO][4605] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" iface="eth0" netns="/var/run/netns/cni-ff9c75e4-f459-43ad-2356-128733c85d1d" Jul 7 00:00:47.262453 containerd[1456]: 2025-07-07 00:00:47.229 [INFO][4605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Jul 7 00:00:47.262453 containerd[1456]: 2025-07-07 00:00:47.229 [INFO][4605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Jul 7 00:00:47.262453 containerd[1456]: 2025-07-07 00:00:47.248 [INFO][4614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" HandleID="k8s-pod-network.5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Workload="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:00:47.262453 containerd[1456]: 2025-07-07 00:00:47.248 [INFO][4614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:47.262453 containerd[1456]: 2025-07-07 00:00:47.248 [INFO][4614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:47.262453 containerd[1456]: 2025-07-07 00:00:47.255 [WARNING][4614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" HandleID="k8s-pod-network.5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Workload="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:00:47.262453 containerd[1456]: 2025-07-07 00:00:47.255 [INFO][4614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" HandleID="k8s-pod-network.5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Workload="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:00:47.262453 containerd[1456]: 2025-07-07 00:00:47.256 [INFO][4614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:47.262453 containerd[1456]: 2025-07-07 00:00:47.259 [INFO][4605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Jul 7 00:00:47.263206 containerd[1456]: time="2025-07-07T00:00:47.263151225Z" level=info msg="TearDown network for sandbox \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\" successfully" Jul 7 00:00:47.263206 containerd[1456]: time="2025-07-07T00:00:47.263196780Z" level=info msg="StopPodSandbox for \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\" returns successfully" Jul 7 00:00:47.263914 containerd[1456]: time="2025-07-07T00:00:47.263877479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ffb9474b-prlmm,Uid:573569b9-db50-4d0d-a4f3-ce9219aa836d,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:00:47.331654 kubelet[2515]: E0707 00:00:47.331564 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:47.334840 kubelet[2515]: E0707 00:00:47.334821 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:47.490132 systemd-networkd[1387]: calia5be83ab918: Gained IPv6LL Jul 7 00:00:47.532476 kubelet[2515]: I0707 00:00:47.532404 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cd8nn" podStartSLOduration=39.532385634 podStartE2EDuration="39.532385634s" podCreationTimestamp="2025-07-07 00:00:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:00:47.529581949 +0000 UTC m=+44.446314854" watchObservedRunningTime="2025-07-07 00:00:47.532385634 +0000 UTC m=+44.449118529" Jul 7 00:00:47.579300 systemd-networkd[1387]: cali37faeaa72db: Link UP Jul 7 00:00:47.581366 systemd-networkd[1387]: cali37faeaa72db: Gained carrier Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.316 [INFO][4623] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0 calico-apiserver-5ffb9474b- calico-apiserver 573569b9-db50-4d0d-a4f3-ce9219aa836d 1088 0 2025-07-07 00:00:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ffb9474b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5ffb9474b-prlmm eth0 
calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali37faeaa72db [] [] }} ContainerID="789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-prlmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-" Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.316 [INFO][4623] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-prlmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.342 [INFO][4636] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" HandleID="k8s-pod-network.789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" Workload="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.343 [INFO][4636] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" HandleID="k8s-pod-network.789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" Workload="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e7d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5ffb9474b-prlmm", "timestamp":"2025-07-07 00:00:47.342905754 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.343 [INFO][4636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.343 [INFO][4636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.343 [INFO][4636] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.533 [INFO][4636] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" host="localhost" Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.544 [INFO][4636] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.553 [INFO][4636] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.555 [INFO][4636] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.558 [INFO][4636] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.558 [INFO][4636] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" host="localhost" Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.559 [INFO][4636] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997 Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.562 [INFO][4636] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" host="localhost" Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.570 [INFO][4636] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" host="localhost" Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.570 [INFO][4636] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" host="localhost" Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.570 [INFO][4636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
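The calico-apiserver assignment above repeats the same affinity walk and lands on the next address, 192.168.88.132. Worth noting is the HandleID convention visible in every ipam_plugin.go line: the CNI network name plus the sandbox container ID, which ties the allocation's lifetime to the sandbox so the address can be found and released later even if the pod object is gone. A trivial sketch of that naming, assuming "k8s-pod-network" is the network name from the CNI config:

    package main

    import "fmt"

    // IPAM handle naming as seen in the HandleID=... fields above.
    func ipamHandle(network, containerID string) string {
        return fmt.Sprintf("%s.%s", network, containerID)
    }

    func main() {
        fmt.Println(ipamHandle("k8s-pod-network",
            "789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997"))
    }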
Jul 7 00:00:47.596739 containerd[1456]: 2025-07-07 00:00:47.570 [INFO][4636] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" HandleID="k8s-pod-network.789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" Workload="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:00:47.597508 containerd[1456]: 2025-07-07 00:00:47.574 [INFO][4623] cni-plugin/k8s.go 418: Populated endpoint ContainerID="789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-prlmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0", GenerateName:"calico-apiserver-5ffb9474b-", Namespace:"calico-apiserver", SelfLink:"", UID:"573569b9-db50-4d0d-a4f3-ce9219aa836d", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ffb9474b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5ffb9474b-prlmm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali37faeaa72db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:47.597508 containerd[1456]: 2025-07-07 00:00:47.574 [INFO][4623] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-prlmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:00:47.597508 containerd[1456]: 2025-07-07 00:00:47.574 [INFO][4623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37faeaa72db ContainerID="789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-prlmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:00:47.597508 containerd[1456]: 2025-07-07 00:00:47.582 [INFO][4623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-prlmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:00:47.597508 containerd[1456]: 2025-07-07 00:00:47.582 [INFO][4623] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-prlmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0", GenerateName:"calico-apiserver-5ffb9474b-", Namespace:"calico-apiserver", SelfLink:"", UID:"573569b9-db50-4d0d-a4f3-ce9219aa836d", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ffb9474b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997", Pod:"calico-apiserver-5ffb9474b-prlmm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali37faeaa72db", MAC:"ca:4c:e2:aa:9e:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:47.597508 containerd[1456]: 2025-07-07 00:00:47.592 [INFO][4623] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-prlmm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:00:47.618172 systemd-networkd[1387]: calif918c5d716f: Gained IPv6LL Jul 7 00:00:47.621029 containerd[1456]: time="2025-07-07T00:00:47.620833889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:47.621029 containerd[1456]: time="2025-07-07T00:00:47.620964384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:47.621029 containerd[1456]: time="2025-07-07T00:00:47.620987607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:47.621225 containerd[1456]: time="2025-07-07T00:00:47.621116981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:47.643070 systemd[1]: Started cri-containerd-789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997.scope - libcontainer container 789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997. 
Jul 7 00:00:47.659128 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:00:47.687719 containerd[1456]: time="2025-07-07T00:00:47.687673195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ffb9474b-prlmm,Uid:573569b9-db50-4d0d-a4f3-ce9219aa836d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997\"" Jul 7 00:00:47.729044 systemd[1]: run-netns-cni\x2dff9c75e4\x2df459\x2d43ad\x2d2356\x2d128733c85d1d.mount: Deactivated successfully. Jul 7 00:00:48.105449 systemd[1]: Started sshd@9-10.0.0.146:22-10.0.0.1:53344.service - OpenSSH per-connection server daemon (10.0.0.1:53344). Jul 7 00:00:48.152393 sshd[4697]: Accepted publickey for core from 10.0.0.1 port 53344 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:00:48.154327 sshd[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:48.159232 systemd-logind[1440]: New session 10 of user core. Jul 7 00:00:48.165097 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:00:48.185598 containerd[1456]: time="2025-07-07T00:00:48.185565116Z" level=info msg="StopPodSandbox for \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\"" Jul 7 00:00:48.273296 containerd[1456]: 2025-07-07 00:00:48.226 [INFO][4711] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Jul 7 00:00:48.273296 containerd[1456]: 2025-07-07 00:00:48.227 [INFO][4711] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" iface="eth0" netns="/var/run/netns/cni-ccac1d9b-1124-11da-24e1-ea19df311c96" Jul 7 00:00:48.273296 containerd[1456]: 2025-07-07 00:00:48.228 [INFO][4711] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" iface="eth0" netns="/var/run/netns/cni-ccac1d9b-1124-11da-24e1-ea19df311c96" Jul 7 00:00:48.273296 containerd[1456]: 2025-07-07 00:00:48.228 [INFO][4711] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" iface="eth0" netns="/var/run/netns/cni-ccac1d9b-1124-11da-24e1-ea19df311c96" Jul 7 00:00:48.273296 containerd[1456]: 2025-07-07 00:00:48.228 [INFO][4711] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Jul 7 00:00:48.273296 containerd[1456]: 2025-07-07 00:00:48.228 [INFO][4711] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Jul 7 00:00:48.273296 containerd[1456]: 2025-07-07 00:00:48.256 [INFO][4727] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" HandleID="k8s-pod-network.d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Workload="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:00:48.273296 containerd[1456]: 2025-07-07 00:00:48.257 [INFO][4727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:48.273296 containerd[1456]: 2025-07-07 00:00:48.257 [INFO][4727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:48.273296 containerd[1456]: 2025-07-07 00:00:48.265 [WARNING][4727] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" HandleID="k8s-pod-network.d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Workload="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:00:48.273296 containerd[1456]: 2025-07-07 00:00:48.265 [INFO][4727] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" HandleID="k8s-pod-network.d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Workload="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:00:48.273296 containerd[1456]: 2025-07-07 00:00:48.267 [INFO][4727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:48.273296 containerd[1456]: 2025-07-07 00:00:48.270 [INFO][4711] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Jul 7 00:00:48.276290 containerd[1456]: time="2025-07-07T00:00:48.276077853Z" level=info msg="TearDown network for sandbox \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\" successfully" Jul 7 00:00:48.276290 containerd[1456]: time="2025-07-07T00:00:48.276119141Z" level=info msg="StopPodSandbox for \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\" returns successfully" Jul 7 00:00:48.276700 systemd[1]: run-netns-cni\x2dccac1d9b\x2d1124\x2d11da\x2d24e1\x2dea19df311c96.mount: Deactivated successfully. Jul 7 00:00:48.277343 containerd[1456]: time="2025-07-07T00:00:48.277307492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dbc45fd6f-nj8xd,Uid:3661e791-5a5a-4263-b5c4-3e2adfeb5eb7,Namespace:calico-system,Attempt:1,}" Jul 7 00:00:48.338599 kubelet[2515]: E0707 00:00:48.338563 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:48.339120 kubelet[2515]: E0707 00:00:48.338975 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:48.480066 sshd[4697]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:48.487237 systemd[1]: sshd@9-10.0.0.146:22-10.0.0.1:53344.service: Deactivated successfully. Jul 7 00:00:48.489745 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:00:48.490568 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:00:48.491699 systemd-logind[1440]: Removed session 10. 
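The StopPodSandbox sequences above are the mirror image of ADD: clean up the netns, delete the workload's veth (already gone in each case here), then release the IP, first by handle ID and, when the handle no longer exists ("Asked to release address but it doesn't exist. Ignoring"), by workload ID as a fallback. A sketch of that two-step release, with stand-in lookup functions in place of ipam_plugin.go 412/429/440:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("address not found")

    // Stand-ins: the first lookup fails as in the WARNING lines above,
    // so release falls back to the workload ID.
    func releaseByHandle(handle string) error     { return errNotFound }
    func releaseByWorkload(workload string) error { return nil }

    func release(handle, workload string) error {
        if err := releaseByHandle(handle); err != nil {
            if !errors.Is(err, errNotFound) {
                return err
            }
            fmt.Println("asked to release address but it doesn't exist; ignoring")
            return releaseByWorkload(workload)
        }
        return nil
    }

    func main() {
        _ = release(
            "k8s-pod-network.d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97",
            "localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0",
        )
    }

The accompanying run-netns-cni\x2d... mount units are just systemd-escaped paths: each "-" in /var/run/netns/cni-... becomes \x2d in the unit name, since a literal "-" in a unit name encodes a path separator.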
Jul 7 00:00:48.694022 systemd-networkd[1387]: caliafaad1cce76: Link UP Jul 7 00:00:48.694820 systemd-networkd[1387]: caliafaad1cce76: Gained carrier Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.627 [INFO][4742] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0 calico-kube-controllers-dbc45fd6f- calico-system 3661e791-5a5a-4263-b5c4-3e2adfeb5eb7 1106 0 2025-07-07 00:00:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:dbc45fd6f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-dbc45fd6f-nj8xd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliafaad1cce76 [] [] }} ContainerID="89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" Namespace="calico-system" Pod="calico-kube-controllers-dbc45fd6f-nj8xd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-" Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.627 [INFO][4742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" Namespace="calico-system" Pod="calico-kube-controllers-dbc45fd6f-nj8xd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.654 [INFO][4756] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" HandleID="k8s-pod-network.89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" Workload="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.654 [INFO][4756] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" HandleID="k8s-pod-network.89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" Workload="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003af600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-dbc45fd6f-nj8xd", "timestamp":"2025-07-07 00:00:48.654556479 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.654 [INFO][4756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.654 [INFO][4756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.654 [INFO][4756] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.661 [INFO][4756] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" host="localhost" Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.668 [INFO][4756] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.673 [INFO][4756] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.674 [INFO][4756] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.676 [INFO][4756] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.676 [INFO][4756] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" host="localhost" Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.679 [INFO][4756] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5 Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.682 [INFO][4756] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" host="localhost" Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.688 [INFO][4756] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" host="localhost" Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.688 [INFO][4756] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" host="localhost" Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.688 [INFO][4756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:00:48.705815 containerd[1456]: 2025-07-07 00:00:48.688 [INFO][4756] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" HandleID="k8s-pod-network.89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" Workload="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:00:48.706401 containerd[1456]: 2025-07-07 00:00:48.691 [INFO][4742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" Namespace="calico-system" Pod="calico-kube-controllers-dbc45fd6f-nj8xd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0", GenerateName:"calico-kube-controllers-dbc45fd6f-", Namespace:"calico-system", SelfLink:"", UID:"3661e791-5a5a-4263-b5c4-3e2adfeb5eb7", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dbc45fd6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-dbc45fd6f-nj8xd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliafaad1cce76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:48.706401 containerd[1456]: 2025-07-07 00:00:48.691 [INFO][4742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" Namespace="calico-system" Pod="calico-kube-controllers-dbc45fd6f-nj8xd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:00:48.706401 containerd[1456]: 2025-07-07 00:00:48.691 [INFO][4742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliafaad1cce76 ContainerID="89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" Namespace="calico-system" Pod="calico-kube-controllers-dbc45fd6f-nj8xd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:00:48.706401 containerd[1456]: 2025-07-07 00:00:48.694 [INFO][4742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" Namespace="calico-system" Pod="calico-kube-controllers-dbc45fd6f-nj8xd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:00:48.706401 containerd[1456]: 2025-07-07 00:00:48.694 [INFO][4742] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" Namespace="calico-system" Pod="calico-kube-controllers-dbc45fd6f-nj8xd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0", GenerateName:"calico-kube-controllers-dbc45fd6f-", Namespace:"calico-system", SelfLink:"", UID:"3661e791-5a5a-4263-b5c4-3e2adfeb5eb7", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dbc45fd6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5", Pod:"calico-kube-controllers-dbc45fd6f-nj8xd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliafaad1cce76", MAC:"2e:9d:db:e8:e7:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:48.706401 containerd[1456]: 2025-07-07 00:00:48.702 [INFO][4742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5" Namespace="calico-system" Pod="calico-kube-controllers-dbc45fd6f-nj8xd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:00:48.727632 containerd[1456]: time="2025-07-07T00:00:48.727338294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:48.727632 containerd[1456]: time="2025-07-07T00:00:48.727509404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:48.727632 containerd[1456]: time="2025-07-07T00:00:48.727548107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:48.727769 containerd[1456]: time="2025-07-07T00:00:48.727683682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:48.751094 systemd[1]: Started cri-containerd-89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5.scope - libcontainer container 89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5. 
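The host-side interface name caliafaad1cce76 follows the usual Calico pattern: the fixed prefix "cali" plus eleven hex characters, which keeps the whole name within Linux's 15-character interface-name limit. Calico derives the suffix from a hash over the workload endpoint's identifiers; the exact hash input below (namespace/pod) is an assumption for illustration, so the printed name will not reproduce the one in this log.

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName sketches the "cali" + 11-hex-chars convention seen above.
// The choice of namespace/pod as the hash input is an assumption; real
// Calico versions differ in exactly what they hash.
func vethName(workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("calico-system/calico-kube-controllers-dbc45fd6f-nj8xd"))
}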
Jul 7 00:00:48.765079 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:00:48.791368 containerd[1456]: time="2025-07-07T00:00:48.791327998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dbc45fd6f-nj8xd,Uid:3661e791-5a5a-4263-b5c4-3e2adfeb5eb7,Namespace:calico-system,Attempt:1,} returns sandbox id \"89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5\"" Jul 7 00:00:48.962234 systemd-networkd[1387]: cali37faeaa72db: Gained IPv6LL Jul 7 00:00:49.186623 containerd[1456]: time="2025-07-07T00:00:49.186033407Z" level=info msg="StopPodSandbox for \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\"" Jul 7 00:00:49.186623 containerd[1456]: time="2025-07-07T00:00:49.186237229Z" level=info msg="StopPodSandbox for \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\"" Jul 7 00:00:49.187002 containerd[1456]: time="2025-07-07T00:00:49.186865599Z" level=info msg="StopPodSandbox for \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\"" Jul 7 00:00:49.266859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2633496561.mount: Deactivated successfully. Jul 7 00:00:49.345593 kubelet[2515]: E0707 00:00:49.345534 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:49.542817 containerd[1456]: 2025-07-07 00:00:49.497 [INFO][4853] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Jul 7 00:00:49.542817 containerd[1456]: 2025-07-07 00:00:49.498 [INFO][4853] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" iface="eth0" netns="/var/run/netns/cni-d365a48d-cfe0-04f7-2302-9029b7d68893" Jul 7 00:00:49.542817 containerd[1456]: 2025-07-07 00:00:49.499 [INFO][4853] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" iface="eth0" netns="/var/run/netns/cni-d365a48d-cfe0-04f7-2302-9029b7d68893" Jul 7 00:00:49.542817 containerd[1456]: 2025-07-07 00:00:49.499 [INFO][4853] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" iface="eth0" netns="/var/run/netns/cni-d365a48d-cfe0-04f7-2302-9029b7d68893" Jul 7 00:00:49.542817 containerd[1456]: 2025-07-07 00:00:49.499 [INFO][4853] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Jul 7 00:00:49.542817 containerd[1456]: 2025-07-07 00:00:49.499 [INFO][4853] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Jul 7 00:00:49.542817 containerd[1456]: 2025-07-07 00:00:49.531 [INFO][4875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" HandleID="k8s-pod-network.939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Workload="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:00:49.542817 containerd[1456]: 2025-07-07 00:00:49.531 [INFO][4875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 00:00:49.542817 containerd[1456]: 2025-07-07 00:00:49.531 [INFO][4875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:49.542817 containerd[1456]: 2025-07-07 00:00:49.536 [WARNING][4875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" HandleID="k8s-pod-network.939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Workload="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:00:49.542817 containerd[1456]: 2025-07-07 00:00:49.536 [INFO][4875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" HandleID="k8s-pod-network.939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Workload="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:00:49.542817 containerd[1456]: 2025-07-07 00:00:49.538 [INFO][4875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:49.542817 containerd[1456]: 2025-07-07 00:00:49.540 [INFO][4853] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Jul 7 00:00:49.545714 containerd[1456]: time="2025-07-07T00:00:49.545575272Z" level=info msg="TearDown network for sandbox \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\" successfully" Jul 7 00:00:49.545714 containerd[1456]: time="2025-07-07T00:00:49.545613144Z" level=info msg="StopPodSandbox for \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\" returns successfully" Jul 7 00:00:49.546819 systemd[1]: run-netns-cni\x2dd365a48d\x2dcfe0\x2d04f7\x2d2302\x2d9029b7d68893.mount: Deactivated successfully. Jul 7 00:00:49.548054 containerd[1456]: time="2025-07-07T00:00:49.547776636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6c7vz,Uid:f9c9236a-e34c-48ed-9633-47af4ea04d91,Namespace:calico-system,Attempt:1,}" Jul 7 00:00:49.551068 containerd[1456]: 2025-07-07 00:00:49.499 [INFO][4852] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Jul 7 00:00:49.551068 containerd[1456]: 2025-07-07 00:00:49.500 [INFO][4852] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" iface="eth0" netns="/var/run/netns/cni-cc7c5d35-bfbf-3452-6a95-cab1f1d5b8b0" Jul 7 00:00:49.551068 containerd[1456]: 2025-07-07 00:00:49.505 [INFO][4852] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" iface="eth0" netns="/var/run/netns/cni-cc7c5d35-bfbf-3452-6a95-cab1f1d5b8b0" Jul 7 00:00:49.551068 containerd[1456]: 2025-07-07 00:00:49.505 [INFO][4852] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" iface="eth0" netns="/var/run/netns/cni-cc7c5d35-bfbf-3452-6a95-cab1f1d5b8b0" Jul 7 00:00:49.551068 containerd[1456]: 2025-07-07 00:00:49.505 [INFO][4852] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Jul 7 00:00:49.551068 containerd[1456]: 2025-07-07 00:00:49.505 [INFO][4852] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Jul 7 00:00:49.551068 containerd[1456]: 2025-07-07 00:00:49.531 [INFO][4882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" HandleID="k8s-pod-network.26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Workload="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:00:49.551068 containerd[1456]: 2025-07-07 00:00:49.531 [INFO][4882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:49.551068 containerd[1456]: 2025-07-07 00:00:49.538 [INFO][4882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:49.551068 containerd[1456]: 2025-07-07 00:00:49.542 [WARNING][4882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" HandleID="k8s-pod-network.26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Workload="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:00:49.551068 containerd[1456]: 2025-07-07 00:00:49.542 [INFO][4882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" HandleID="k8s-pod-network.26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Workload="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:00:49.551068 containerd[1456]: 2025-07-07 00:00:49.544 [INFO][4882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:49.551068 containerd[1456]: 2025-07-07 00:00:49.548 [INFO][4852] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Jul 7 00:00:49.551557 containerd[1456]: time="2025-07-07T00:00:49.551535723Z" level=info msg="TearDown network for sandbox \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\" successfully" Jul 7 00:00:49.551557 containerd[1456]: time="2025-07-07T00:00:49.551554579Z" level=info msg="StopPodSandbox for \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\" returns successfully" Jul 7 00:00:49.552267 containerd[1456]: time="2025-07-07T00:00:49.552232562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qnpjz,Uid:febfaed7-7f45-4497-b75d-f0ee8f991481,Namespace:calico-system,Attempt:1,}" Jul 7 00:00:49.557547 containerd[1456]: 2025-07-07 00:00:49.502 [INFO][4851] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Jul 7 00:00:49.557547 containerd[1456]: 2025-07-07 00:00:49.502 [INFO][4851] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" iface="eth0" netns="/var/run/netns/cni-080ef331-a8c8-3165-8ae6-f09f8609feb6" Jul 7 00:00:49.557547 containerd[1456]: 2025-07-07 00:00:49.504 [INFO][4851] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" iface="eth0" netns="/var/run/netns/cni-080ef331-a8c8-3165-8ae6-f09f8609feb6" Jul 7 00:00:49.557547 containerd[1456]: 2025-07-07 00:00:49.505 [INFO][4851] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" iface="eth0" netns="/var/run/netns/cni-080ef331-a8c8-3165-8ae6-f09f8609feb6" Jul 7 00:00:49.557547 containerd[1456]: 2025-07-07 00:00:49.505 [INFO][4851] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Jul 7 00:00:49.557547 containerd[1456]: 2025-07-07 00:00:49.505 [INFO][4851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Jul 7 00:00:49.557547 containerd[1456]: 2025-07-07 00:00:49.537 [INFO][4889] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" HandleID="k8s-pod-network.00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Workload="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:00:49.557547 containerd[1456]: 2025-07-07 00:00:49.537 [INFO][4889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:49.557547 containerd[1456]: 2025-07-07 00:00:49.544 [INFO][4889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:00:49.557547 containerd[1456]: 2025-07-07 00:00:49.550 [WARNING][4889] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" HandleID="k8s-pod-network.00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Workload="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:00:49.557547 containerd[1456]: 2025-07-07 00:00:49.550 [INFO][4889] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" HandleID="k8s-pod-network.00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Workload="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:00:49.557547 containerd[1456]: 2025-07-07 00:00:49.551 [INFO][4889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:00:49.557547 containerd[1456]: 2025-07-07 00:00:49.554 [INFO][4851] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Jul 7 00:00:49.558077 containerd[1456]: time="2025-07-07T00:00:49.558046828Z" level=info msg="TearDown network for sandbox \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\" successfully" Jul 7 00:00:49.558077 containerd[1456]: time="2025-07-07T00:00:49.558065574Z" level=info msg="StopPodSandbox for \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\" returns successfully" Jul 7 00:00:49.558668 containerd[1456]: time="2025-07-07T00:00:49.558629924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ffb9474b-k66n7,Uid:6154d981-1514-483e-9a6f-b65a800f05e0,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:00:49.615838 containerd[1456]: time="2025-07-07T00:00:49.615807088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:49.653607 containerd[1456]: time="2025-07-07T00:00:49.653566484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 00:00:49.658904 containerd[1456]: time="2025-07-07T00:00:49.658852188Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:49.669790 containerd[1456]: time="2025-07-07T00:00:49.669738477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:49.670366 containerd[1456]: time="2025-07-07T00:00:49.670336751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.74672394s" Jul 7 00:00:49.670426 containerd[1456]: time="2025-07-07T00:00:49.670367950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 00:00:49.671537 containerd[1456]: time="2025-07-07T00:00:49.671371384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:00:49.676616 containerd[1456]: time="2025-07-07T00:00:49.676590854Z" level=info msg="CreateContainer within sandbox \"8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:00:49.703715 containerd[1456]: time="2025-07-07T00:00:49.703665498Z" level=info msg="CreateContainer within sandbox \"8cea13b78a0cbbdee26edbaefa9bbc6820db1b3ab0509c2d3e3b62f5915d2406\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"767e0c51dab2d0395a4640bf9462959790ad0a9eefe116a60332a29f37806bf0\"" Jul 7 00:00:49.705962 containerd[1456]: time="2025-07-07T00:00:49.704359902Z" level=info msg="StartContainer for \"767e0c51dab2d0395a4640bf9462959790ad0a9eefe116a60332a29f37806bf0\"" Jul 7 00:00:49.738460 systemd[1]: run-netns-cni\x2dcc7c5d35\x2dbfbf\x2d3452\x2d6a95\x2dcab1f1d5b8b0.mount: Deactivated successfully. 
Jul 7 00:00:49.738562 systemd[1]: run-netns-cni\x2d080ef331\x2da8c8\x2d3165\x2d8ae6\x2df09f8609feb6.mount: Deactivated successfully. Jul 7 00:00:49.758722 systemd[1]: Started cri-containerd-767e0c51dab2d0395a4640bf9462959790ad0a9eefe116a60332a29f37806bf0.scope - libcontainer container 767e0c51dab2d0395a4640bf9462959790ad0a9eefe116a60332a29f37806bf0. Jul 7 00:00:49.805091 systemd-networkd[1387]: cali013d28e5d85: Link UP Jul 7 00:00:49.806884 systemd-networkd[1387]: cali013d28e5d85: Gained carrier Jul 7 00:00:49.818299 containerd[1456]: time="2025-07-07T00:00:49.818183895Z" level=info msg="StartContainer for \"767e0c51dab2d0395a4640bf9462959790ad0a9eefe116a60332a29f37806bf0\" returns successfully" Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.728 [INFO][4919] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0 goldmane-768f4c5c69- calico-system f9c9236a-e34c-48ed-9633-47af4ea04d91 1121 0 2025-07-07 00:00:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-6c7vz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali013d28e5d85 [] [] }} ContainerID="42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" Namespace="calico-system" Pod="goldmane-768f4c5c69-6c7vz" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6c7vz-" Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.728 [INFO][4919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" Namespace="calico-system" Pod="goldmane-768f4c5c69-6c7vz" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.766 [INFO][4969] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" HandleID="k8s-pod-network.42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" Workload="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.767 [INFO][4969] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" HandleID="k8s-pod-network.42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" Workload="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f970), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-6c7vz", "timestamp":"2025-07-07 00:00:49.766905305 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.767 [INFO][4969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.767 [INFO][4969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.767 [INFO][4969] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.775 [INFO][4969] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" host="localhost" Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.780 [INFO][4969] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.784 [INFO][4969] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.786 [INFO][4969] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.788 [INFO][4969] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.788 [INFO][4969] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" host="localhost" Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.789 [INFO][4969] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511 Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.793 [INFO][4969] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" host="localhost" Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.798 [INFO][4969] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" host="localhost" Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.798 [INFO][4969] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" host="localhost" Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.798 [INFO][4969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
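Each allocation ends with a one-line summary from ipam_plugin.go 283 (as for .133 earlier and .134 in the next entry), which makes captured journals easy to mine for container-to-IP history. A small stdlib-only sketch; the sample line is trimmed from this log:

package main

import (
	"fmt"
	"regexp"
)

// One line copied (trimmed) from the journal above; the regexp keys off
// the stable "assigned addresses" message logged after each allocation.
var line = `ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511"`

var re = regexp.MustCompile(`assigned addresses IPv4=\[([^\]]*)\].*ContainerID="([0-9a-f]+)"`)

func main() {
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("container %.12s... got %s\n", m[2], m[1])
	}
}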
Jul 7 00:00:49.822685 containerd[1456]: 2025-07-07 00:00:49.798 [INFO][4969] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" HandleID="k8s-pod-network.42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" Workload="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:00:49.823341 containerd[1456]: 2025-07-07 00:00:49.801 [INFO][4919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" Namespace="calico-system" Pod="goldmane-768f4c5c69-6c7vz" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"f9c9236a-e34c-48ed-9633-47af4ea04d91", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-6c7vz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali013d28e5d85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:49.823341 containerd[1456]: 2025-07-07 00:00:49.801 [INFO][4919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" Namespace="calico-system" Pod="goldmane-768f4c5c69-6c7vz" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:00:49.823341 containerd[1456]: 2025-07-07 00:00:49.801 [INFO][4919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali013d28e5d85 ContainerID="42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" Namespace="calico-system" Pod="goldmane-768f4c5c69-6c7vz" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:00:49.823341 containerd[1456]: 2025-07-07 00:00:49.807 [INFO][4919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" Namespace="calico-system" Pod="goldmane-768f4c5c69-6c7vz" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:00:49.823341 containerd[1456]: 2025-07-07 00:00:49.808 [INFO][4919] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" Namespace="calico-system" Pod="goldmane-768f4c5c69-6c7vz" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"f9c9236a-e34c-48ed-9633-47af4ea04d91", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511", Pod:"goldmane-768f4c5c69-6c7vz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali013d28e5d85", MAC:"c2:da:0c:a8:de:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:49.823341 containerd[1456]: 2025-07-07 00:00:49.817 [INFO][4919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511" Namespace="calico-system" Pod="goldmane-768f4c5c69-6c7vz" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:00:49.846610 containerd[1456]: time="2025-07-07T00:00:49.845986496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:49.846610 containerd[1456]: time="2025-07-07T00:00:49.846034125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:49.846610 containerd[1456]: time="2025-07-07T00:00:49.846055666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:49.847824 containerd[1456]: time="2025-07-07T00:00:49.846149222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:49.858114 systemd-networkd[1387]: caliafaad1cce76: Gained IPv6LL Jul 7 00:00:49.875127 systemd[1]: Started cri-containerd-42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511.scope - libcontainer container 42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511. 
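The MAC written back to the goldmane endpoint, c2:da:0c:a8:de:9b, like 2e:9d:db:e8:e7:32 before it, has the locally-administered bit of the first octet set and the multicast bit clear — the shape of a randomly generated unicast address. Whether Calico or the kernel picked it is not visible in these entries; a sketch of generating an address with that shape:

package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomMAC returns a locally administered, unicast MAC. Both MACs in
// the log above fit this pattern: first octet with the local bit (0x02)
// set and the multicast bit (0x01) clear.
func randomMAC() (net.HardwareAddr, error) {
	mac := make(net.HardwareAddr, 6)
	if _, err := rand.Read(mac); err != nil {
		return nil, err
	}
	mac[0] = (mac[0] | 0x02) &^ 0x01 // set local, clear multicast
	return mac, nil
}

func main() {
	mac, _ := randomMAC()
	fmt.Println(mac)
}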
Jul 7 00:00:49.890520 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:00:49.913065 systemd-networkd[1387]: calic7dc06dfc68: Link UP Jul 7 00:00:49.914171 systemd-networkd[1387]: calic7dc06dfc68: Gained carrier Jul 7 00:00:49.921634 containerd[1456]: time="2025-07-07T00:00:49.921588975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6c7vz,Uid:f9c9236a-e34c-48ed-9633-47af4ea04d91,Namespace:calico-system,Attempt:1,} returns sandbox id \"42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511\"" Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.730 [INFO][4931] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0 calico-apiserver-5ffb9474b- calico-apiserver 6154d981-1514-483e-9a6f-b65a800f05e0 1122 0 2025-07-07 00:00:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ffb9474b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5ffb9474b-k66n7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic7dc06dfc68 [] [] }} ContainerID="a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-k66n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-" Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.730 [INFO][4931] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-k66n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.770 [INFO][4971] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" HandleID="k8s-pod-network.a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" Workload="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.771 [INFO][4971] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" HandleID="k8s-pod-network.a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" Workload="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df110), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5ffb9474b-k66n7", "timestamp":"2025-07-07 00:00:49.770555619 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.771 [INFO][4971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.798 [INFO][4971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.798 [INFO][4971] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.876 [INFO][4971] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" host="localhost" Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.881 [INFO][4971] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.888 [INFO][4971] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.889 [INFO][4971] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.891 [INFO][4971] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.891 [INFO][4971] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" host="localhost" Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.893 [INFO][4971] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9 Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.898 [INFO][4971] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" host="localhost" Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.903 [INFO][4971] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" host="localhost" Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.903 [INFO][4971] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" host="localhost" Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.903 [INFO][4971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
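The lock tracing makes the serialization visible: [4971] announced the acquire at 49.771 but got the lock only at 49.798, the instant [4969] released it, and [4974] in turn waited from 49.774 to 49.903. Concurrent CNI ADDs on one node queue behind a single host-wide IPAM lock. A toy illustration of that queueing follows; the allocator type is hypothetical, not Calico's lock implementation.

package main

import (
	"fmt"
	"sync"
)

// allocator serializes claims behind one lock, like the host-wide IPAM
// lock whose acquire/release pairs bracket every transaction above.
type allocator struct {
	mu   sync.Mutex
	next int
}

func (a *allocator) claim(who string) int {
	a.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer a.mu.Unlock()
	a.next++
	fmt.Printf("[%s] claimed ordinal %d\n", who, a.next)
	return a.next // lock released on return
}

func main() {
	var a allocator
	var wg sync.WaitGroup
	// Completion order is nondeterministic, but every claim runs alone:
	// no two goroutines mutate the allocator at once.
	for _, id := range []string{"4969", "4971", "4974"} {
		wg.Add(1)
		go func(id string) { defer wg.Done(); a.claim(id) }(id)
	}
	wg.Wait()
}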
Jul 7 00:00:49.929181 containerd[1456]: 2025-07-07 00:00:49.903 [INFO][4971] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" HandleID="k8s-pod-network.a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" Workload="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:00:49.929654 containerd[1456]: 2025-07-07 00:00:49.907 [INFO][4931] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-k66n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0", GenerateName:"calico-apiserver-5ffb9474b-", Namespace:"calico-apiserver", SelfLink:"", UID:"6154d981-1514-483e-9a6f-b65a800f05e0", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ffb9474b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5ffb9474b-k66n7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7dc06dfc68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:49.929654 containerd[1456]: 2025-07-07 00:00:49.908 [INFO][4931] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-k66n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:00:49.929654 containerd[1456]: 2025-07-07 00:00:49.908 [INFO][4931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7dc06dfc68 ContainerID="a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-k66n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:00:49.929654 containerd[1456]: 2025-07-07 00:00:49.915 [INFO][4931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-k66n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:00:49.929654 containerd[1456]: 2025-07-07 00:00:49.915 [INFO][4931] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-k66n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0", GenerateName:"calico-apiserver-5ffb9474b-", Namespace:"calico-apiserver", SelfLink:"", UID:"6154d981-1514-483e-9a6f-b65a800f05e0", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ffb9474b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9", Pod:"calico-apiserver-5ffb9474b-k66n7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7dc06dfc68", MAC:"56:1b:32:99:a0:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:49.929654 containerd[1456]: 2025-07-07 00:00:49.926 [INFO][4931] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9" Namespace="calico-apiserver" Pod="calico-apiserver-5ffb9474b-k66n7" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:00:49.951854 containerd[1456]: time="2025-07-07T00:00:49.951759522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:49.951854 containerd[1456]: time="2025-07-07T00:00:49.951830836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:49.952001 containerd[1456]: time="2025-07-07T00:00:49.951844993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:49.952065 containerd[1456]: time="2025-07-07T00:00:49.952028037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:49.970077 systemd[1]: Started cri-containerd-a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9.scope - libcontainer container a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9. 
Jul 7 00:00:49.983385 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:00:50.036618 containerd[1456]: time="2025-07-07T00:00:50.036444189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ffb9474b-k66n7,Uid:6154d981-1514-483e-9a6f-b65a800f05e0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9\"" Jul 7 00:00:50.041457 systemd-networkd[1387]: calia873270c799: Link UP Jul 7 00:00:50.049926 systemd-networkd[1387]: calia873270c799: Gained carrier Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.724 [INFO][4911] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qnpjz-eth0 csi-node-driver- calico-system febfaed7-7f45-4497-b75d-f0ee8f991481 1123 0 2025-07-07 00:00:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qnpjz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia873270c799 [] [] }} ContainerID="884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" Namespace="calico-system" Pod="csi-node-driver-qnpjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnpjz-" Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.724 [INFO][4911] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" Namespace="calico-system" Pod="csi-node-driver-qnpjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.773 [INFO][4974] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" HandleID="k8s-pod-network.884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" Workload="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.774 [INFO][4974] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" HandleID="k8s-pod-network.884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" Workload="localhost-k8s-csi--node--driver--qnpjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qnpjz", "timestamp":"2025-07-07 00:00:49.773841498 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.774 [INFO][4974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.903 [INFO][4974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.903 [INFO][4974] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.976 [INFO][4974] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" host="localhost" Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.981 [INFO][4974] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.986 [INFO][4974] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.988 [INFO][4974] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.990 [INFO][4974] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.990 [INFO][4974] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" host="localhost" Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.991 [INFO][4974] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2 Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:49.994 [INFO][4974] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" host="localhost" Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:50.011 [INFO][4974] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" host="localhost" Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:50.011 [INFO][4974] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" host="localhost" Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:50.011 [INFO][4974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
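With .136 claimed, this window accounts for ordinals up through the ninth of the node's /26 (.128-.136; the addresses below .133 are implied by earlier allocations rather than shown in this excerpt). Block capacity falls straight out of the prefix length:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, _ := net.ParseCIDR("192.168.88.128/26")
	ones, bits := block.Mask.Size()
	capacity := 1 << uint(bits-ones) // 64 ordinals in a /26
	inUse := 9                       // .128-.136, an inference from this log
	fmt.Printf("block %s: %d/%d ordinals used (%.0f%%)\n",
		block, inUse, capacity, 100*float64(inUse)/float64(capacity))
}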
Jul 7 00:00:50.067686 containerd[1456]: 2025-07-07 00:00:50.011 [INFO][4974] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" HandleID="k8s-pod-network.884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" Workload="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:00:50.068369 containerd[1456]: 2025-07-07 00:00:50.030 [INFO][4911] cni-plugin/k8s.go 418: Populated endpoint ContainerID="884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" Namespace="calico-system" Pod="csi-node-driver-qnpjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnpjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qnpjz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"febfaed7-7f45-4497-b75d-f0ee8f991481", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qnpjz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia873270c799", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:50.068369 containerd[1456]: 2025-07-07 00:00:50.031 [INFO][4911] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" Namespace="calico-system" Pod="csi-node-driver-qnpjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:00:50.068369 containerd[1456]: 2025-07-07 00:00:50.031 [INFO][4911] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia873270c799 ContainerID="884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" Namespace="calico-system" Pod="csi-node-driver-qnpjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:00:50.068369 containerd[1456]: 2025-07-07 00:00:50.049 [INFO][4911] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" Namespace="calico-system" Pod="csi-node-driver-qnpjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:00:50.068369 containerd[1456]: 2025-07-07 00:00:50.054 [INFO][4911] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" Namespace="calico-system" Pod="csi-node-driver-qnpjz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qnpjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qnpjz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"febfaed7-7f45-4497-b75d-f0ee8f991481", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2", Pod:"csi-node-driver-qnpjz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia873270c799", MAC:"ba:e9:e0:9a:1a:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:00:50.068369 containerd[1456]: 2025-07-07 00:00:50.063 [INFO][4911] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2" Namespace="calico-system" Pod="csi-node-driver-qnpjz" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:00:50.087084 containerd[1456]: time="2025-07-07T00:00:50.086925224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:50.087181 containerd[1456]: time="2025-07-07T00:00:50.087062661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:50.087181 containerd[1456]: time="2025-07-07T00:00:50.087089241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:50.087351 containerd[1456]: time="2025-07-07T00:00:50.087301560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:50.110073 systemd[1]: Started cri-containerd-884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2.scope - libcontainer container 884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2. 
Jul 7 00:00:50.122346 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:00:50.133298 containerd[1456]: time="2025-07-07T00:00:50.133261728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qnpjz,Uid:febfaed7-7f45-4497-b75d-f0ee8f991481,Namespace:calico-system,Attempt:1,} returns sandbox id \"884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2\"" Jul 7 00:00:50.360496 kubelet[2515]: I0707 00:00:50.360431 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-bb85c498b-2j25r" podStartSLOduration=1.639477724 podStartE2EDuration="6.360417798s" podCreationTimestamp="2025-07-07 00:00:44 +0000 UTC" firstStartedPulling="2025-07-07 00:00:44.950249348 +0000 UTC m=+41.866982243" lastFinishedPulling="2025-07-07 00:00:49.671189422 +0000 UTC m=+46.587922317" observedRunningTime="2025-07-07 00:00:50.360117985 +0000 UTC m=+47.276850870" watchObservedRunningTime="2025-07-07 00:00:50.360417798 +0000 UTC m=+47.277150693" Jul 7 00:00:51.010135 systemd-networkd[1387]: calic7dc06dfc68: Gained IPv6LL Jul 7 00:00:51.202080 systemd-networkd[1387]: calia873270c799: Gained IPv6LL Jul 7 00:00:51.650133 systemd-networkd[1387]: cali013d28e5d85: Gained IPv6LL Jul 7 00:00:52.995616 containerd[1456]: time="2025-07-07T00:00:52.995569189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:52.996660 containerd[1456]: time="2025-07-07T00:00:52.996580547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 00:00:52.997848 containerd[1456]: time="2025-07-07T00:00:52.997810867Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:53.000497 containerd[1456]: time="2025-07-07T00:00:53.000463366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:53.001073 containerd[1456]: time="2025-07-07T00:00:53.001016114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.329605156s" Jul 7 00:00:53.001073 containerd[1456]: time="2025-07-07T00:00:53.001059104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:00:53.002102 containerd[1456]: time="2025-07-07T00:00:53.002071334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:00:53.006491 containerd[1456]: time="2025-07-07T00:00:53.006452047Z" level=info msg="CreateContainer within sandbox \"789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:00:53.023548 containerd[1456]: time="2025-07-07T00:00:53.023480079Z" level=info msg="CreateContainer within sandbox 
\"789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ebcb27981be6ead5d4b9d2699833fc5f52b570958d225c0f21723a5eacc7dd08\"" Jul 7 00:00:53.024198 containerd[1456]: time="2025-07-07T00:00:53.024156780Z" level=info msg="StartContainer for \"ebcb27981be6ead5d4b9d2699833fc5f52b570958d225c0f21723a5eacc7dd08\"" Jul 7 00:00:53.059111 systemd[1]: Started cri-containerd-ebcb27981be6ead5d4b9d2699833fc5f52b570958d225c0f21723a5eacc7dd08.scope - libcontainer container ebcb27981be6ead5d4b9d2699833fc5f52b570958d225c0f21723a5eacc7dd08. Jul 7 00:00:53.099610 containerd[1456]: time="2025-07-07T00:00:53.099570527Z" level=info msg="StartContainer for \"ebcb27981be6ead5d4b9d2699833fc5f52b570958d225c0f21723a5eacc7dd08\" returns successfully" Jul 7 00:00:53.372132 kubelet[2515]: I0707 00:00:53.371669 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ffb9474b-prlmm" podStartSLOduration=31.059135577 podStartE2EDuration="36.371652874s" podCreationTimestamp="2025-07-07 00:00:17 +0000 UTC" firstStartedPulling="2025-07-07 00:00:47.689367246 +0000 UTC m=+44.606100131" lastFinishedPulling="2025-07-07 00:00:53.001884533 +0000 UTC m=+49.918617428" observedRunningTime="2025-07-07 00:00:53.370735672 +0000 UTC m=+50.287468567" watchObservedRunningTime="2025-07-07 00:00:53.371652874 +0000 UTC m=+50.288385759" Jul 7 00:00:53.493513 systemd[1]: Started sshd@10-10.0.0.146:22-10.0.0.1:58342.service - OpenSSH per-connection server daemon (10.0.0.1:58342). Jul 7 00:00:53.540423 sshd[5229]: Accepted publickey for core from 10.0.0.1 port 58342 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:00:53.542306 sshd[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:53.546863 systemd-logind[1440]: New session 11 of user core. Jul 7 00:00:53.554085 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:00:53.680614 sshd[5229]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:53.689121 systemd[1]: sshd@10-10.0.0.146:22-10.0.0.1:58342.service: Deactivated successfully. Jul 7 00:00:53.692699 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:00:53.699727 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:00:53.709244 systemd[1]: Started sshd@11-10.0.0.146:22-10.0.0.1:58346.service - OpenSSH per-connection server daemon (10.0.0.1:58346). Jul 7 00:00:53.711841 systemd-logind[1440]: Removed session 11. Jul 7 00:00:53.744177 sshd[5244]: Accepted publickey for core from 10.0.0.1 port 58346 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:00:53.747268 sshd[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:53.754119 systemd-logind[1440]: New session 12 of user core. Jul 7 00:00:53.757371 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:00:53.899150 sshd[5244]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:53.909488 systemd[1]: sshd@11-10.0.0.146:22-10.0.0.1:58346.service: Deactivated successfully. Jul 7 00:00:53.913094 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:00:53.915998 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:00:53.921248 systemd[1]: Started sshd@12-10.0.0.146:22-10.0.0.1:58352.service - OpenSSH per-connection server daemon (10.0.0.1:58352). 
Jul 7 00:00:53.922176 systemd-logind[1440]: Removed session 12. Jul 7 00:00:53.956860 sshd[5257]: Accepted publickey for core from 10.0.0.1 port 58352 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:00:53.958353 sshd[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:53.962047 systemd-logind[1440]: New session 13 of user core. Jul 7 00:00:53.969076 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:00:54.335711 sshd[5257]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:54.339690 systemd[1]: sshd@12-10.0.0.146:22-10.0.0.1:58352.service: Deactivated successfully. Jul 7 00:00:54.341753 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:00:54.342344 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:00:54.343177 systemd-logind[1440]: Removed session 13. Jul 7 00:00:54.363009 kubelet[2515]: I0707 00:00:54.362974 2515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:00:56.833810 containerd[1456]: time="2025-07-07T00:00:56.833735943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:56.834568 containerd[1456]: time="2025-07-07T00:00:56.834511288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 00:00:56.835729 containerd[1456]: time="2025-07-07T00:00:56.835701762Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:56.838021 containerd[1456]: time="2025-07-07T00:00:56.838001358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:56.838633 containerd[1456]: time="2025-07-07T00:00:56.838599301Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.836495525s" Jul 7 00:00:56.838670 containerd[1456]: time="2025-07-07T00:00:56.838637602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 00:00:56.839874 containerd[1456]: time="2025-07-07T00:00:56.839700577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:00:56.849864 containerd[1456]: time="2025-07-07T00:00:56.849818315Z" level=info msg="CreateContainer within sandbox \"89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:00:56.868624 containerd[1456]: time="2025-07-07T00:00:56.868584165Z" level=info msg="CreateContainer within sandbox \"89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"daf57d878c21c1752ea0f1669e12b7b99c39792ab372ef78d2504b7fb1033879\"" Jul 7 00:00:56.869395 
containerd[1456]: time="2025-07-07T00:00:56.869138085Z" level=info msg="StartContainer for \"daf57d878c21c1752ea0f1669e12b7b99c39792ab372ef78d2504b7fb1033879\"" Jul 7 00:00:56.904092 systemd[1]: Started cri-containerd-daf57d878c21c1752ea0f1669e12b7b99c39792ab372ef78d2504b7fb1033879.scope - libcontainer container daf57d878c21c1752ea0f1669e12b7b99c39792ab372ef78d2504b7fb1033879. Jul 7 00:00:57.420581 containerd[1456]: time="2025-07-07T00:00:57.420362156Z" level=info msg="StartContainer for \"daf57d878c21c1752ea0f1669e12b7b99c39792ab372ef78d2504b7fb1033879\" returns successfully" Jul 7 00:00:58.440156 kubelet[2515]: I0707 00:00:58.439350 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-dbc45fd6f-nj8xd" podStartSLOduration=31.392348999 podStartE2EDuration="39.439332109s" podCreationTimestamp="2025-07-07 00:00:19 +0000 UTC" firstStartedPulling="2025-07-07 00:00:48.792540615 +0000 UTC m=+45.709273510" lastFinishedPulling="2025-07-07 00:00:56.839523725 +0000 UTC m=+53.756256620" observedRunningTime="2025-07-07 00:00:58.438263945 +0000 UTC m=+55.354996830" watchObservedRunningTime="2025-07-07 00:00:58.439332109 +0000 UTC m=+55.356065004" Jul 7 00:00:59.355603 systemd[1]: Started sshd@13-10.0.0.146:22-10.0.0.1:58356.service - OpenSSH per-connection server daemon (10.0.0.1:58356). Jul 7 00:00:59.417009 sshd[5357]: Accepted publickey for core from 10.0.0.1 port 58356 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:00:59.419004 sshd[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:59.424113 systemd-logind[1440]: New session 14 of user core. Jul 7 00:00:59.431088 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:00:59.591440 sshd[5357]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:59.596173 systemd[1]: sshd@13-10.0.0.146:22-10.0.0.1:58356.service: Deactivated successfully. Jul 7 00:00:59.598800 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:00:59.599758 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:00:59.600759 systemd-logind[1440]: Removed session 14. Jul 7 00:01:01.010779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount364427320.mount: Deactivated successfully. 
Jul 7 00:01:02.136728 containerd[1456]: time="2025-07-07T00:01:02.136677077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:02.137575 containerd[1456]: time="2025-07-07T00:01:02.137510862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 00:01:02.138968 containerd[1456]: time="2025-07-07T00:01:02.138924804Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:02.158372 containerd[1456]: time="2025-07-07T00:01:02.158328923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:02.159048 containerd[1456]: time="2025-07-07T00:01:02.159009069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.319281001s" Jul 7 00:01:02.159048 containerd[1456]: time="2025-07-07T00:01:02.159039086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 00:01:02.166157 containerd[1456]: time="2025-07-07T00:01:02.166122406Z" level=info msg="CreateContainer within sandbox \"42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 00:01:02.168061 containerd[1456]: time="2025-07-07T00:01:02.168020387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:01:02.181131 containerd[1456]: time="2025-07-07T00:01:02.181096362Z" level=info msg="CreateContainer within sandbox \"42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"cba8997dfce1ce578177c970e5a129cd67fbe62dad9e84e2cd73cbe2a1b3d809\"" Jul 7 00:01:02.181573 containerd[1456]: time="2025-07-07T00:01:02.181551505Z" level=info msg="StartContainer for \"cba8997dfce1ce578177c970e5a129cd67fbe62dad9e84e2cd73cbe2a1b3d809\"" Jul 7 00:01:02.242192 systemd[1]: Started cri-containerd-cba8997dfce1ce578177c970e5a129cd67fbe62dad9e84e2cd73cbe2a1b3d809.scope - libcontainer container cba8997dfce1ce578177c970e5a129cd67fbe62dad9e84e2cd73cbe2a1b3d809. 
Jul 7 00:01:02.287157 containerd[1456]: time="2025-07-07T00:01:02.287113183Z" level=info msg="StartContainer for \"cba8997dfce1ce578177c970e5a129cd67fbe62dad9e84e2cd73cbe2a1b3d809\" returns successfully" Jul 7 00:01:02.451997 kubelet[2515]: I0707 00:01:02.451741 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-6c7vz" podStartSLOduration=31.217272694 podStartE2EDuration="43.451722531s" podCreationTimestamp="2025-07-07 00:00:19 +0000 UTC" firstStartedPulling="2025-07-07 00:00:49.925511189 +0000 UTC m=+46.842244084" lastFinishedPulling="2025-07-07 00:01:02.159961026 +0000 UTC m=+59.076693921" observedRunningTime="2025-07-07 00:01:02.450463118 +0000 UTC m=+59.367196013" watchObservedRunningTime="2025-07-07 00:01:02.451722531 +0000 UTC m=+59.368455426" Jul 7 00:01:02.554936 containerd[1456]: time="2025-07-07T00:01:02.554880829Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:02.555619 containerd[1456]: time="2025-07-07T00:01:02.555585371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 00:01:02.557752 containerd[1456]: time="2025-07-07T00:01:02.557713925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 389.622425ms" Jul 7 00:01:02.557752 containerd[1456]: time="2025-07-07T00:01:02.557745474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:01:02.558933 containerd[1456]: time="2025-07-07T00:01:02.558738267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 00:01:02.564110 containerd[1456]: time="2025-07-07T00:01:02.564074749Z" level=info msg="CreateContainer within sandbox \"a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:01:02.591381 containerd[1456]: time="2025-07-07T00:01:02.591328907Z" level=info msg="CreateContainer within sandbox \"a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6f0ea1fc33e4fdfb46c58328ece06064341aac01a691c9e6cef2cea8c44c6edf\"" Jul 7 00:01:02.591855 containerd[1456]: time="2025-07-07T00:01:02.591815761Z" level=info msg="StartContainer for \"6f0ea1fc33e4fdfb46c58328ece06064341aac01a691c9e6cef2cea8c44c6edf\"" Jul 7 00:01:02.621102 systemd[1]: Started cri-containerd-6f0ea1fc33e4fdfb46c58328ece06064341aac01a691c9e6cef2cea8c44c6edf.scope - libcontainer container 6f0ea1fc33e4fdfb46c58328ece06064341aac01a691c9e6cef2cea8c44c6edf. 
Jul 7 00:01:02.659509 containerd[1456]: time="2025-07-07T00:01:02.659460183Z" level=info msg="StartContainer for \"6f0ea1fc33e4fdfb46c58328ece06064341aac01a691c9e6cef2cea8c44c6edf\" returns successfully" Jul 7 00:01:03.169373 containerd[1456]: time="2025-07-07T00:01:03.169323967Z" level=info msg="StopPodSandbox for \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\"" Jul 7 00:01:03.261111 containerd[1456]: 2025-07-07 00:01:03.227 [WARNING][5496] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"f9c9236a-e34c-48ed-9633-47af4ea04d91", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511", Pod:"goldmane-768f4c5c69-6c7vz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali013d28e5d85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:03.261111 containerd[1456]: 2025-07-07 00:01:03.227 [INFO][5496] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Jul 7 00:01:03.261111 containerd[1456]: 2025-07-07 00:01:03.227 [INFO][5496] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" iface="eth0" netns="" Jul 7 00:01:03.261111 containerd[1456]: 2025-07-07 00:01:03.227 [INFO][5496] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Jul 7 00:01:03.261111 containerd[1456]: 2025-07-07 00:01:03.227 [INFO][5496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Jul 7 00:01:03.261111 containerd[1456]: 2025-07-07 00:01:03.248 [INFO][5507] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" HandleID="k8s-pod-network.939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Workload="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:01:03.261111 containerd[1456]: 2025-07-07 00:01:03.248 [INFO][5507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 00:01:03.261111 containerd[1456]: 2025-07-07 00:01:03.248 [INFO][5507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:03.261111 containerd[1456]: 2025-07-07 00:01:03.254 [WARNING][5507] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" HandleID="k8s-pod-network.939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Workload="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:01:03.261111 containerd[1456]: 2025-07-07 00:01:03.254 [INFO][5507] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" HandleID="k8s-pod-network.939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Workload="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:01:03.261111 containerd[1456]: 2025-07-07 00:01:03.255 [INFO][5507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:03.261111 containerd[1456]: 2025-07-07 00:01:03.258 [INFO][5496] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Jul 7 00:01:03.268124 containerd[1456]: time="2025-07-07T00:01:03.268083625Z" level=info msg="TearDown network for sandbox \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\" successfully" Jul 7 00:01:03.268124 containerd[1456]: time="2025-07-07T00:01:03.268117338Z" level=info msg="StopPodSandbox for \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\" returns successfully" Jul 7 00:01:03.305139 containerd[1456]: time="2025-07-07T00:01:03.305090259Z" level=info msg="RemovePodSandbox for \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\"" Jul 7 00:01:03.310021 containerd[1456]: time="2025-07-07T00:01:03.309991806Z" level=info msg="Forcibly stopping sandbox \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\"" Jul 7 00:01:03.380033 containerd[1456]: 2025-07-07 00:01:03.344 [WARNING][5525] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"f9c9236a-e34c-48ed-9633-47af4ea04d91", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42d92c1d2c343b36112146439fa567b70befb8579c8518f124bc9b133a26c511", Pod:"goldmane-768f4c5c69-6c7vz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali013d28e5d85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:03.380033 containerd[1456]: 2025-07-07 00:01:03.345 [INFO][5525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Jul 7 00:01:03.380033 containerd[1456]: 2025-07-07 00:01:03.345 [INFO][5525] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" iface="eth0" netns="" Jul 7 00:01:03.380033 containerd[1456]: 2025-07-07 00:01:03.345 [INFO][5525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Jul 7 00:01:03.380033 containerd[1456]: 2025-07-07 00:01:03.345 [INFO][5525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Jul 7 00:01:03.380033 containerd[1456]: 2025-07-07 00:01:03.367 [INFO][5533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" HandleID="k8s-pod-network.939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Workload="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:01:03.380033 containerd[1456]: 2025-07-07 00:01:03.367 [INFO][5533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:03.380033 containerd[1456]: 2025-07-07 00:01:03.367 [INFO][5533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:03.380033 containerd[1456]: 2025-07-07 00:01:03.373 [WARNING][5533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" HandleID="k8s-pod-network.939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Workload="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:01:03.380033 containerd[1456]: 2025-07-07 00:01:03.373 [INFO][5533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" HandleID="k8s-pod-network.939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Workload="localhost-k8s-goldmane--768f4c5c69--6c7vz-eth0" Jul 7 00:01:03.380033 containerd[1456]: 2025-07-07 00:01:03.374 [INFO][5533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:03.380033 containerd[1456]: 2025-07-07 00:01:03.377 [INFO][5525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac" Jul 7 00:01:03.380925 containerd[1456]: time="2025-07-07T00:01:03.380048079Z" level=info msg="TearDown network for sandbox \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\" successfully" Jul 7 00:01:03.425274 containerd[1456]: time="2025-07-07T00:01:03.425172524Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:03.425274 containerd[1456]: time="2025-07-07T00:01:03.425267592Z" level=info msg="RemovePodSandbox \"939c19cc8d2da96afee0bee839420880d2836134ddb33b9819a2094dfc5941ac\" returns successfully" Jul 7 00:01:03.435993 containerd[1456]: time="2025-07-07T00:01:03.435662175Z" level=info msg="StopPodSandbox for \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\"" Jul 7 00:01:03.457592 kubelet[2515]: I0707 00:01:03.456546 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ffb9474b-k66n7" podStartSLOduration=33.936706861 podStartE2EDuration="46.456531552s" podCreationTimestamp="2025-07-07 00:00:17 +0000 UTC" firstStartedPulling="2025-07-07 00:00:50.038792358 +0000 UTC m=+46.955525253" lastFinishedPulling="2025-07-07 00:01:02.558617049 +0000 UTC m=+59.475349944" observedRunningTime="2025-07-07 00:01:03.455454812 +0000 UTC m=+60.372187707" watchObservedRunningTime="2025-07-07 00:01:03.456531552 +0000 UTC m=+60.373264447" Jul 7 00:01:03.543269 containerd[1456]: 2025-07-07 00:01:03.490 [WARNING][5550] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e866b654-c1cc-4469-bdb5-43648247fb5f", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911", Pod:"coredns-674b8bbfcf-cd8nn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif918c5d716f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:03.543269 containerd[1456]: 2025-07-07 00:01:03.491 [INFO][5550] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Jul 7 00:01:03.543269 containerd[1456]: 2025-07-07 00:01:03.491 [INFO][5550] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" iface="eth0" netns="" Jul 7 00:01:03.543269 containerd[1456]: 2025-07-07 00:01:03.491 [INFO][5550] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Jul 7 00:01:03.543269 containerd[1456]: 2025-07-07 00:01:03.491 [INFO][5550] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Jul 7 00:01:03.543269 containerd[1456]: 2025-07-07 00:01:03.524 [INFO][5577] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" HandleID="k8s-pod-network.b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Workload="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:01:03.543269 containerd[1456]: 2025-07-07 00:01:03.524 [INFO][5577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:03.543269 containerd[1456]: 2025-07-07 00:01:03.525 [INFO][5577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:01:03.543269 containerd[1456]: 2025-07-07 00:01:03.534 [WARNING][5577] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" HandleID="k8s-pod-network.b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Workload="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:01:03.543269 containerd[1456]: 2025-07-07 00:01:03.534 [INFO][5577] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" HandleID="k8s-pod-network.b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Workload="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:01:03.543269 containerd[1456]: 2025-07-07 00:01:03.536 [INFO][5577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:03.543269 containerd[1456]: 2025-07-07 00:01:03.539 [INFO][5550] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Jul 7 00:01:03.543788 containerd[1456]: time="2025-07-07T00:01:03.543318196Z" level=info msg="TearDown network for sandbox \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\" successfully" Jul 7 00:01:03.543788 containerd[1456]: time="2025-07-07T00:01:03.543347160Z" level=info msg="StopPodSandbox for \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\" returns successfully" Jul 7 00:01:03.543969 containerd[1456]: time="2025-07-07T00:01:03.543873647Z" level=info msg="RemovePodSandbox for \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\"" Jul 7 00:01:03.543969 containerd[1456]: time="2025-07-07T00:01:03.543916538Z" level=info msg="Forcibly stopping sandbox \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\"" Jul 7 00:01:03.646894 containerd[1456]: 2025-07-07 00:01:03.583 [WARNING][5601] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e866b654-c1cc-4469-bdb5-43648247fb5f", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c036b51f7907bf5d8148f39cb839efcd0c7190ccf127e17bc1b46a9c23efa911", Pod:"coredns-674b8bbfcf-cd8nn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif918c5d716f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:03.646894 containerd[1456]: 2025-07-07 00:01:03.584 [INFO][5601] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Jul 7 00:01:03.646894 containerd[1456]: 2025-07-07 00:01:03.584 [INFO][5601] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" iface="eth0" netns="" Jul 7 00:01:03.646894 containerd[1456]: 2025-07-07 00:01:03.584 [INFO][5601] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Jul 7 00:01:03.646894 containerd[1456]: 2025-07-07 00:01:03.584 [INFO][5601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Jul 7 00:01:03.646894 containerd[1456]: 2025-07-07 00:01:03.624 [INFO][5610] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" HandleID="k8s-pod-network.b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Workload="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:01:03.646894 containerd[1456]: 2025-07-07 00:01:03.624 [INFO][5610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:03.646894 containerd[1456]: 2025-07-07 00:01:03.624 [INFO][5610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:01:03.646894 containerd[1456]: 2025-07-07 00:01:03.631 [WARNING][5610] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" HandleID="k8s-pod-network.b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Workload="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:01:03.646894 containerd[1456]: 2025-07-07 00:01:03.631 [INFO][5610] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" HandleID="k8s-pod-network.b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Workload="localhost-k8s-coredns--674b8bbfcf--cd8nn-eth0" Jul 7 00:01:03.646894 containerd[1456]: 2025-07-07 00:01:03.633 [INFO][5610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:03.646894 containerd[1456]: 2025-07-07 00:01:03.640 [INFO][5601] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa" Jul 7 00:01:03.647349 containerd[1456]: time="2025-07-07T00:01:03.646981845Z" level=info msg="TearDown network for sandbox \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\" successfully" Jul 7 00:01:03.655963 containerd[1456]: time="2025-07-07T00:01:03.655906740Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:03.656372 containerd[1456]: time="2025-07-07T00:01:03.655989485Z" level=info msg="RemovePodSandbox \"b38076cb0cc71e79e4465a9ee806763d66eca7dc1d0db6d4022c9a193e4409aa\" returns successfully" Jul 7 00:01:03.656680 containerd[1456]: time="2025-07-07T00:01:03.656651828Z" level=info msg="StopPodSandbox for \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\"" Jul 7 00:01:03.724508 containerd[1456]: 2025-07-07 00:01:03.689 [WARNING][5627] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0", GenerateName:"calico-apiserver-5ffb9474b-", Namespace:"calico-apiserver", SelfLink:"", UID:"573569b9-db50-4d0d-a4f3-ce9219aa836d", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ffb9474b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997", Pod:"calico-apiserver-5ffb9474b-prlmm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali37faeaa72db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:03.724508 containerd[1456]: 2025-07-07 00:01:03.689 [INFO][5627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Jul 7 00:01:03.724508 containerd[1456]: 2025-07-07 00:01:03.689 [INFO][5627] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" iface="eth0" netns="" Jul 7 00:01:03.724508 containerd[1456]: 2025-07-07 00:01:03.689 [INFO][5627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Jul 7 00:01:03.724508 containerd[1456]: 2025-07-07 00:01:03.689 [INFO][5627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Jul 7 00:01:03.724508 containerd[1456]: 2025-07-07 00:01:03.709 [INFO][5635] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" HandleID="k8s-pod-network.5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Workload="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:01:03.724508 containerd[1456]: 2025-07-07 00:01:03.709 [INFO][5635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:03.724508 containerd[1456]: 2025-07-07 00:01:03.709 [INFO][5635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:03.724508 containerd[1456]: 2025-07-07 00:01:03.717 [WARNING][5635] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" HandleID="k8s-pod-network.5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Workload="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:01:03.724508 containerd[1456]: 2025-07-07 00:01:03.717 [INFO][5635] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" HandleID="k8s-pod-network.5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Workload="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:01:03.724508 containerd[1456]: 2025-07-07 00:01:03.718 [INFO][5635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:03.724508 containerd[1456]: 2025-07-07 00:01:03.721 [INFO][5627] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Jul 7 00:01:03.724508 containerd[1456]: time="2025-07-07T00:01:03.724429325Z" level=info msg="TearDown network for sandbox \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\" successfully" Jul 7 00:01:03.724508 containerd[1456]: time="2025-07-07T00:01:03.724454984Z" level=info msg="StopPodSandbox for \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\" returns successfully" Jul 7 00:01:03.725028 containerd[1456]: time="2025-07-07T00:01:03.724976132Z" level=info msg="RemovePodSandbox for \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\"" Jul 7 00:01:03.725028 containerd[1456]: time="2025-07-07T00:01:03.725010426Z" level=info msg="Forcibly stopping sandbox \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\"" Jul 7 00:01:03.793504 containerd[1456]: 2025-07-07 00:01:03.759 [WARNING][5653] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0", GenerateName:"calico-apiserver-5ffb9474b-", Namespace:"calico-apiserver", SelfLink:"", UID:"573569b9-db50-4d0d-a4f3-ce9219aa836d", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ffb9474b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"789d870b31747453c67be80958e428216fe98fd3e50c07e81bd8c32d9fbc0997", Pod:"calico-apiserver-5ffb9474b-prlmm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali37faeaa72db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:03.793504 containerd[1456]: 2025-07-07 00:01:03.759 [INFO][5653] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Jul 7 00:01:03.793504 containerd[1456]: 2025-07-07 00:01:03.760 [INFO][5653] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" iface="eth0" netns="" Jul 7 00:01:03.793504 containerd[1456]: 2025-07-07 00:01:03.760 [INFO][5653] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Jul 7 00:01:03.793504 containerd[1456]: 2025-07-07 00:01:03.760 [INFO][5653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Jul 7 00:01:03.793504 containerd[1456]: 2025-07-07 00:01:03.781 [INFO][5662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" HandleID="k8s-pod-network.5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Workload="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:01:03.793504 containerd[1456]: 2025-07-07 00:01:03.781 [INFO][5662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:03.793504 containerd[1456]: 2025-07-07 00:01:03.781 [INFO][5662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:03.793504 containerd[1456]: 2025-07-07 00:01:03.786 [WARNING][5662] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" HandleID="k8s-pod-network.5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Workload="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:01:03.793504 containerd[1456]: 2025-07-07 00:01:03.786 [INFO][5662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" HandleID="k8s-pod-network.5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Workload="localhost-k8s-calico--apiserver--5ffb9474b--prlmm-eth0" Jul 7 00:01:03.793504 containerd[1456]: 2025-07-07 00:01:03.788 [INFO][5662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:03.793504 containerd[1456]: 2025-07-07 00:01:03.790 [INFO][5653] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb" Jul 7 00:01:03.793931 containerd[1456]: time="2025-07-07T00:01:03.793541898Z" level=info msg="TearDown network for sandbox \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\" successfully" Jul 7 00:01:03.797896 containerd[1456]: time="2025-07-07T00:01:03.797858106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:03.797962 containerd[1456]: time="2025-07-07T00:01:03.797910124Z" level=info msg="RemovePodSandbox \"5b01d3d373fb9c6b606b2c934b2e6f5e7e25e3e494af1823b3b79a078a0735fb\" returns successfully" Jul 7 00:01:03.798445 containerd[1456]: time="2025-07-07T00:01:03.798404622Z" level=info msg="StopPodSandbox for \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\"" Jul 7 00:01:03.870173 containerd[1456]: 2025-07-07 00:01:03.832 [WARNING][5681] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0", GenerateName:"calico-apiserver-5ffb9474b-", Namespace:"calico-apiserver", SelfLink:"", UID:"6154d981-1514-483e-9a6f-b65a800f05e0", ResourceVersion:"1252", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ffb9474b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9", Pod:"calico-apiserver-5ffb9474b-k66n7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7dc06dfc68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:03.870173 containerd[1456]: 2025-07-07 00:01:03.832 [INFO][5681] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Jul 7 00:01:03.870173 containerd[1456]: 2025-07-07 00:01:03.832 [INFO][5681] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" iface="eth0" netns="" Jul 7 00:01:03.870173 containerd[1456]: 2025-07-07 00:01:03.832 [INFO][5681] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Jul 7 00:01:03.870173 containerd[1456]: 2025-07-07 00:01:03.832 [INFO][5681] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Jul 7 00:01:03.870173 containerd[1456]: 2025-07-07 00:01:03.855 [INFO][5690] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" HandleID="k8s-pod-network.00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Workload="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:01:03.870173 containerd[1456]: 2025-07-07 00:01:03.855 [INFO][5690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:03.870173 containerd[1456]: 2025-07-07 00:01:03.855 [INFO][5690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:03.870173 containerd[1456]: 2025-07-07 00:01:03.862 [WARNING][5690] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" HandleID="k8s-pod-network.00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Workload="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:01:03.870173 containerd[1456]: 2025-07-07 00:01:03.862 [INFO][5690] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" HandleID="k8s-pod-network.00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Workload="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:01:03.870173 containerd[1456]: 2025-07-07 00:01:03.864 [INFO][5690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:03.870173 containerd[1456]: 2025-07-07 00:01:03.866 [INFO][5681] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Jul 7 00:01:03.870684 containerd[1456]: time="2025-07-07T00:01:03.870217972Z" level=info msg="TearDown network for sandbox \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\" successfully" Jul 7 00:01:03.870684 containerd[1456]: time="2025-07-07T00:01:03.870243009Z" level=info msg="StopPodSandbox for \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\" returns successfully" Jul 7 00:01:03.870744 containerd[1456]: time="2025-07-07T00:01:03.870731426Z" level=info msg="RemovePodSandbox for \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\"" Jul 7 00:01:03.870770 containerd[1456]: time="2025-07-07T00:01:03.870752495Z" level=info msg="Forcibly stopping sandbox \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\"" Jul 7 00:01:03.949748 containerd[1456]: 2025-07-07 00:01:03.907 [WARNING][5708] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0", GenerateName:"calico-apiserver-5ffb9474b-", Namespace:"calico-apiserver", SelfLink:"", UID:"6154d981-1514-483e-9a6f-b65a800f05e0", ResourceVersion:"1252", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ffb9474b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a60e493add90a1760681b4af9334bbc952739acb98108527511a6f994ce4e8b9", Pod:"calico-apiserver-5ffb9474b-k66n7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7dc06dfc68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:03.949748 containerd[1456]: 2025-07-07 00:01:03.908 [INFO][5708] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Jul 7 00:01:03.949748 containerd[1456]: 2025-07-07 00:01:03.908 [INFO][5708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" iface="eth0" netns="" Jul 7 00:01:03.949748 containerd[1456]: 2025-07-07 00:01:03.908 [INFO][5708] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Jul 7 00:01:03.949748 containerd[1456]: 2025-07-07 00:01:03.908 [INFO][5708] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Jul 7 00:01:03.949748 containerd[1456]: 2025-07-07 00:01:03.933 [INFO][5717] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" HandleID="k8s-pod-network.00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Workload="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:01:03.949748 containerd[1456]: 2025-07-07 00:01:03.934 [INFO][5717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:03.949748 containerd[1456]: 2025-07-07 00:01:03.934 [INFO][5717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:03.949748 containerd[1456]: 2025-07-07 00:01:03.941 [WARNING][5717] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" HandleID="k8s-pod-network.00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Workload="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:01:03.949748 containerd[1456]: 2025-07-07 00:01:03.941 [INFO][5717] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" HandleID="k8s-pod-network.00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Workload="localhost-k8s-calico--apiserver--5ffb9474b--k66n7-eth0" Jul 7 00:01:03.949748 containerd[1456]: 2025-07-07 00:01:03.943 [INFO][5717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:03.949748 containerd[1456]: 2025-07-07 00:01:03.946 [INFO][5708] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690" Jul 7 00:01:03.950488 containerd[1456]: time="2025-07-07T00:01:03.950449216Z" level=info msg="TearDown network for sandbox \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\" successfully" Jul 7 00:01:03.962265 containerd[1456]: time="2025-07-07T00:01:03.962231162Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:03.962319 containerd[1456]: time="2025-07-07T00:01:03.962288350Z" level=info msg="RemovePodSandbox \"00d997051de1bc91c8ea9428b2aaf91df906e07a1409028eb3f30821d87aa690\" returns successfully" Jul 7 00:01:03.962847 containerd[1456]: time="2025-07-07T00:01:03.962817603Z" level=info msg="StopPodSandbox for \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\"" Jul 7 00:01:04.029900 containerd[1456]: 2025-07-07 00:01:03.997 [WARNING][5734] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" WorkloadEndpoint="localhost-k8s-whisker--65d9dcd5c4--rvpmp-eth0" Jul 7 00:01:04.029900 containerd[1456]: 2025-07-07 00:01:03.997 [INFO][5734] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Jul 7 00:01:04.029900 containerd[1456]: 2025-07-07 00:01:03.997 [INFO][5734] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" iface="eth0" netns="" Jul 7 00:01:04.029900 containerd[1456]: 2025-07-07 00:01:03.997 [INFO][5734] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Jul 7 00:01:04.029900 containerd[1456]: 2025-07-07 00:01:03.997 [INFO][5734] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Jul 7 00:01:04.029900 containerd[1456]: 2025-07-07 00:01:04.017 [INFO][5743] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" HandleID="k8s-pod-network.7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Workload="localhost-k8s-whisker--65d9dcd5c4--rvpmp-eth0" Jul 7 00:01:04.029900 containerd[1456]: 2025-07-07 00:01:04.017 [INFO][5743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:04.029900 containerd[1456]: 2025-07-07 00:01:04.017 [INFO][5743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:04.029900 containerd[1456]: 2025-07-07 00:01:04.023 [WARNING][5743] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" HandleID="k8s-pod-network.7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Workload="localhost-k8s-whisker--65d9dcd5c4--rvpmp-eth0" Jul 7 00:01:04.029900 containerd[1456]: 2025-07-07 00:01:04.023 [INFO][5743] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" HandleID="k8s-pod-network.7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Workload="localhost-k8s-whisker--65d9dcd5c4--rvpmp-eth0" Jul 7 00:01:04.029900 containerd[1456]: 2025-07-07 00:01:04.024 [INFO][5743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:04.029900 containerd[1456]: 2025-07-07 00:01:04.027 [INFO][5734] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Jul 7 00:01:04.029900 containerd[1456]: time="2025-07-07T00:01:04.029849119Z" level=info msg="TearDown network for sandbox \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\" successfully" Jul 7 00:01:04.029900 containerd[1456]: time="2025-07-07T00:01:04.029876641Z" level=info msg="StopPodSandbox for \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\" returns successfully" Jul 7 00:01:04.030783 containerd[1456]: time="2025-07-07T00:01:04.030327467Z" level=info msg="RemovePodSandbox for \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\"" Jul 7 00:01:04.030783 containerd[1456]: time="2025-07-07T00:01:04.030350911Z" level=info msg="Forcibly stopping sandbox \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\"" Jul 7 00:01:04.096370 containerd[1456]: 2025-07-07 00:01:04.062 [WARNING][5760] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" WorkloadEndpoint="localhost-k8s-whisker--65d9dcd5c4--rvpmp-eth0" Jul 7 00:01:04.096370 containerd[1456]: 2025-07-07 00:01:04.062 [INFO][5760] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Jul 7 00:01:04.096370 containerd[1456]: 2025-07-07 00:01:04.062 [INFO][5760] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" iface="eth0" netns="" Jul 7 00:01:04.096370 containerd[1456]: 2025-07-07 00:01:04.062 [INFO][5760] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Jul 7 00:01:04.096370 containerd[1456]: 2025-07-07 00:01:04.062 [INFO][5760] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Jul 7 00:01:04.096370 containerd[1456]: 2025-07-07 00:01:04.082 [INFO][5770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" HandleID="k8s-pod-network.7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Workload="localhost-k8s-whisker--65d9dcd5c4--rvpmp-eth0" Jul 7 00:01:04.096370 containerd[1456]: 2025-07-07 00:01:04.082 [INFO][5770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:04.096370 containerd[1456]: 2025-07-07 00:01:04.082 [INFO][5770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:04.096370 containerd[1456]: 2025-07-07 00:01:04.089 [WARNING][5770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" HandleID="k8s-pod-network.7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Workload="localhost-k8s-whisker--65d9dcd5c4--rvpmp-eth0" Jul 7 00:01:04.096370 containerd[1456]: 2025-07-07 00:01:04.089 [INFO][5770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" HandleID="k8s-pod-network.7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Workload="localhost-k8s-whisker--65d9dcd5c4--rvpmp-eth0" Jul 7 00:01:04.096370 containerd[1456]: 2025-07-07 00:01:04.090 [INFO][5770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:04.096370 containerd[1456]: 2025-07-07 00:01:04.093 [INFO][5760] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a" Jul 7 00:01:04.096764 containerd[1456]: time="2025-07-07T00:01:04.096403518Z" level=info msg="TearDown network for sandbox \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\" successfully" Jul 7 00:01:04.122920 containerd[1456]: time="2025-07-07T00:01:04.122853364Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:04.123080 containerd[1456]: time="2025-07-07T00:01:04.122939736Z" level=info msg="RemovePodSandbox \"7842c85466cb75861cd01eac41a39596e404ca13ce93fb080e1a667cc8be0b1a\" returns successfully" Jul 7 00:01:04.123494 containerd[1456]: time="2025-07-07T00:01:04.123439624Z" level=info msg="StopPodSandbox for \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\"" Jul 7 00:01:04.188930 containerd[1456]: 2025-07-07 00:01:04.156 [WARNING][5788] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0", GenerateName:"calico-kube-controllers-dbc45fd6f-", Namespace:"calico-system", SelfLink:"", UID:"3661e791-5a5a-4263-b5c4-3e2adfeb5eb7", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dbc45fd6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5", Pod:"calico-kube-controllers-dbc45fd6f-nj8xd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliafaad1cce76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:04.188930 containerd[1456]: 2025-07-07 00:01:04.156 [INFO][5788] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Jul 7 00:01:04.188930 containerd[1456]: 2025-07-07 00:01:04.156 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" iface="eth0" netns="" Jul 7 00:01:04.188930 containerd[1456]: 2025-07-07 00:01:04.156 [INFO][5788] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Jul 7 00:01:04.188930 containerd[1456]: 2025-07-07 00:01:04.156 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Jul 7 00:01:04.188930 containerd[1456]: 2025-07-07 00:01:04.175 [INFO][5797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" HandleID="k8s-pod-network.d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Workload="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:01:04.188930 containerd[1456]: 2025-07-07 00:01:04.175 [INFO][5797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:04.188930 containerd[1456]: 2025-07-07 00:01:04.175 [INFO][5797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:04.188930 containerd[1456]: 2025-07-07 00:01:04.182 [WARNING][5797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" HandleID="k8s-pod-network.d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Workload="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:01:04.188930 containerd[1456]: 2025-07-07 00:01:04.182 [INFO][5797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" HandleID="k8s-pod-network.d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Workload="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:01:04.188930 containerd[1456]: 2025-07-07 00:01:04.183 [INFO][5797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:04.188930 containerd[1456]: 2025-07-07 00:01:04.186 [INFO][5788] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Jul 7 00:01:04.190178 containerd[1456]: time="2025-07-07T00:01:04.188967907Z" level=info msg="TearDown network for sandbox \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\" successfully" Jul 7 00:01:04.190178 containerd[1456]: time="2025-07-07T00:01:04.188996812Z" level=info msg="StopPodSandbox for \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\" returns successfully" Jul 7 00:01:04.190178 containerd[1456]: time="2025-07-07T00:01:04.189554749Z" level=info msg="RemovePodSandbox for \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\"" Jul 7 00:01:04.190178 containerd[1456]: time="2025-07-07T00:01:04.189590406Z" level=info msg="Forcibly stopping sandbox \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\"" Jul 7 00:01:04.255911 containerd[1456]: 2025-07-07 00:01:04.224 [WARNING][5815] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0", GenerateName:"calico-kube-controllers-dbc45fd6f-", Namespace:"calico-system", SelfLink:"", UID:"3661e791-5a5a-4263-b5c4-3e2adfeb5eb7", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dbc45fd6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89b7af0f7c217b847cbfbe8e6568ad5fd70a65c2f63f4cd5ea764afb34391ad5", Pod:"calico-kube-controllers-dbc45fd6f-nj8xd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliafaad1cce76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:04.255911 containerd[1456]: 2025-07-07 00:01:04.224 [INFO][5815] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Jul 7 00:01:04.255911 containerd[1456]: 2025-07-07 00:01:04.224 [INFO][5815] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" iface="eth0" netns="" Jul 7 00:01:04.255911 containerd[1456]: 2025-07-07 00:01:04.224 [INFO][5815] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Jul 7 00:01:04.255911 containerd[1456]: 2025-07-07 00:01:04.224 [INFO][5815] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Jul 7 00:01:04.255911 containerd[1456]: 2025-07-07 00:01:04.243 [INFO][5824] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" HandleID="k8s-pod-network.d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Workload="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:01:04.255911 containerd[1456]: 2025-07-07 00:01:04.244 [INFO][5824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:04.255911 containerd[1456]: 2025-07-07 00:01:04.244 [INFO][5824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:04.255911 containerd[1456]: 2025-07-07 00:01:04.249 [WARNING][5824] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" HandleID="k8s-pod-network.d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Workload="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:01:04.255911 containerd[1456]: 2025-07-07 00:01:04.249 [INFO][5824] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" HandleID="k8s-pod-network.d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Workload="localhost-k8s-calico--kube--controllers--dbc45fd6f--nj8xd-eth0" Jul 7 00:01:04.255911 containerd[1456]: 2025-07-07 00:01:04.250 [INFO][5824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:04.255911 containerd[1456]: 2025-07-07 00:01:04.253 [INFO][5815] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97" Jul 7 00:01:04.256347 containerd[1456]: time="2025-07-07T00:01:04.255960289Z" level=info msg="TearDown network for sandbox \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\" successfully" Jul 7 00:01:04.260561 containerd[1456]: time="2025-07-07T00:01:04.260523140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:04.260642 containerd[1456]: time="2025-07-07T00:01:04.260603390Z" level=info msg="RemovePodSandbox \"d0a6c7b447e0ddb327125ae53ea292d3de4511c9a23ec882a5676edb6b186d97\" returns successfully" Jul 7 00:01:04.261166 containerd[1456]: time="2025-07-07T00:01:04.261132783Z" level=info msg="StopPodSandbox for \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\"" Jul 7 00:01:04.327240 containerd[1456]: 2025-07-07 00:01:04.293 [WARNING][5842] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qnpjz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"febfaed7-7f45-4497-b75d-f0ee8f991481", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2", Pod:"csi-node-driver-qnpjz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia873270c799", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:04.327240 containerd[1456]: 2025-07-07 00:01:04.294 [INFO][5842] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Jul 7 00:01:04.327240 containerd[1456]: 2025-07-07 00:01:04.294 [INFO][5842] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" iface="eth0" netns="" Jul 7 00:01:04.327240 containerd[1456]: 2025-07-07 00:01:04.294 [INFO][5842] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Jul 7 00:01:04.327240 containerd[1456]: 2025-07-07 00:01:04.294 [INFO][5842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Jul 7 00:01:04.327240 containerd[1456]: 2025-07-07 00:01:04.314 [INFO][5851] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" HandleID="k8s-pod-network.26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Workload="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:01:04.327240 containerd[1456]: 2025-07-07 00:01:04.314 [INFO][5851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:04.327240 containerd[1456]: 2025-07-07 00:01:04.314 [INFO][5851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:04.327240 containerd[1456]: 2025-07-07 00:01:04.320 [WARNING][5851] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" HandleID="k8s-pod-network.26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Workload="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:01:04.327240 containerd[1456]: 2025-07-07 00:01:04.320 [INFO][5851] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" HandleID="k8s-pod-network.26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Workload="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:01:04.327240 containerd[1456]: 2025-07-07 00:01:04.322 [INFO][5851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:04.327240 containerd[1456]: 2025-07-07 00:01:04.324 [INFO][5842] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Jul 7 00:01:04.327240 containerd[1456]: time="2025-07-07T00:01:04.327196322Z" level=info msg="TearDown network for sandbox \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\" successfully" Jul 7 00:01:04.327240 containerd[1456]: time="2025-07-07T00:01:04.327221639Z" level=info msg="StopPodSandbox for \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\" returns successfully" Jul 7 00:01:04.327985 containerd[1456]: time="2025-07-07T00:01:04.327922453Z" level=info msg="RemovePodSandbox for \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\"" Jul 7 00:01:04.327985 containerd[1456]: time="2025-07-07T00:01:04.327977668Z" level=info msg="Forcibly stopping sandbox \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\"" Jul 7 00:01:04.394360 containerd[1456]: 2025-07-07 00:01:04.362 [WARNING][5869] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qnpjz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"febfaed7-7f45-4497-b75d-f0ee8f991481", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2", Pod:"csi-node-driver-qnpjz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia873270c799", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:04.394360 containerd[1456]: 2025-07-07 00:01:04.362 [INFO][5869] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Jul 7 00:01:04.394360 containerd[1456]: 2025-07-07 00:01:04.362 [INFO][5869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" iface="eth0" netns="" Jul 7 00:01:04.394360 containerd[1456]: 2025-07-07 00:01:04.362 [INFO][5869] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Jul 7 00:01:04.394360 containerd[1456]: 2025-07-07 00:01:04.362 [INFO][5869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Jul 7 00:01:04.394360 containerd[1456]: 2025-07-07 00:01:04.382 [INFO][5877] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" HandleID="k8s-pod-network.26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Workload="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:01:04.394360 containerd[1456]: 2025-07-07 00:01:04.382 [INFO][5877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:04.394360 containerd[1456]: 2025-07-07 00:01:04.382 [INFO][5877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:04.394360 containerd[1456]: 2025-07-07 00:01:04.387 [WARNING][5877] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" HandleID="k8s-pod-network.26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Workload="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:01:04.394360 containerd[1456]: 2025-07-07 00:01:04.387 [INFO][5877] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" HandleID="k8s-pod-network.26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Workload="localhost-k8s-csi--node--driver--qnpjz-eth0" Jul 7 00:01:04.394360 containerd[1456]: 2025-07-07 00:01:04.389 [INFO][5877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:04.394360 containerd[1456]: 2025-07-07 00:01:04.391 [INFO][5869] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a" Jul 7 00:01:04.394782 containerd[1456]: time="2025-07-07T00:01:04.394394238Z" level=info msg="TearDown network for sandbox \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\" successfully" Jul 7 00:01:04.398524 containerd[1456]: time="2025-07-07T00:01:04.398498358Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:04.398586 containerd[1456]: time="2025-07-07T00:01:04.398547531Z" level=info msg="RemovePodSandbox \"26924836b0cd9b156bebff4f2ed57bd762082af3559798be9e656af83ce9851a\" returns successfully" Jul 7 00:01:04.399197 containerd[1456]: time="2025-07-07T00:01:04.399152226Z" level=info msg="StopPodSandbox for \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\"" Jul 7 00:01:04.459485 kubelet[2515]: I0707 00:01:04.459440 2515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:01:04.465077 containerd[1456]: 2025-07-07 00:01:04.431 [WARNING][5894] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d92e234f-52f0-4d16-a111-2199fe918193", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e", Pod:"coredns-674b8bbfcf-gx8x8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5be83ab918", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:04.465077 containerd[1456]: 2025-07-07 00:01:04.431 [INFO][5894] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Jul 7 00:01:04.465077 containerd[1456]: 2025-07-07 00:01:04.431 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" iface="eth0" netns="" Jul 7 00:01:04.465077 containerd[1456]: 2025-07-07 00:01:04.431 [INFO][5894] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Jul 7 00:01:04.465077 containerd[1456]: 2025-07-07 00:01:04.431 [INFO][5894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Jul 7 00:01:04.465077 containerd[1456]: 2025-07-07 00:01:04.451 [INFO][5903] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" HandleID="k8s-pod-network.f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Workload="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:01:04.465077 containerd[1456]: 2025-07-07 00:01:04.451 [INFO][5903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:04.465077 containerd[1456]: 2025-07-07 00:01:04.451 [INFO][5903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:01:04.465077 containerd[1456]: 2025-07-07 00:01:04.457 [WARNING][5903] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" HandleID="k8s-pod-network.f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Workload="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:01:04.465077 containerd[1456]: 2025-07-07 00:01:04.457 [INFO][5903] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" HandleID="k8s-pod-network.f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Workload="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:01:04.465077 containerd[1456]: 2025-07-07 00:01:04.459 [INFO][5903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:04.465077 containerd[1456]: 2025-07-07 00:01:04.462 [INFO][5894] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Jul 7 00:01:04.465563 containerd[1456]: time="2025-07-07T00:01:04.465097642Z" level=info msg="TearDown network for sandbox \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\" successfully" Jul 7 00:01:04.465563 containerd[1456]: time="2025-07-07T00:01:04.465119362Z" level=info msg="StopPodSandbox for \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\" returns successfully" Jul 7 00:01:04.465563 containerd[1456]: time="2025-07-07T00:01:04.465421399Z" level=info msg="RemovePodSandbox for \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\"" Jul 7 00:01:04.465563 containerd[1456]: time="2025-07-07T00:01:04.465449492Z" level=info msg="Forcibly stopping sandbox \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\"" Jul 7 00:01:04.528495 containerd[1456]: 2025-07-07 00:01:04.496 [WARNING][5922] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d92e234f-52f0-4d16-a111-2199fe918193", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 0, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82c5df8b7cfaef12e528c59b8a746a86e136b68cbc60900c9e472ca0182aff6e", Pod:"coredns-674b8bbfcf-gx8x8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5be83ab918", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:04.528495 containerd[1456]: 2025-07-07 00:01:04.496 [INFO][5922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Jul 7 00:01:04.528495 containerd[1456]: 2025-07-07 00:01:04.496 [INFO][5922] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" iface="eth0" netns="" Jul 7 00:01:04.528495 containerd[1456]: 2025-07-07 00:01:04.496 [INFO][5922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Jul 7 00:01:04.528495 containerd[1456]: 2025-07-07 00:01:04.496 [INFO][5922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Jul 7 00:01:04.528495 containerd[1456]: 2025-07-07 00:01:04.515 [INFO][5931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" HandleID="k8s-pod-network.f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Workload="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:01:04.528495 containerd[1456]: 2025-07-07 00:01:04.515 [INFO][5931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:04.528495 containerd[1456]: 2025-07-07 00:01:04.515 [INFO][5931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:01:04.528495 containerd[1456]: 2025-07-07 00:01:04.520 [WARNING][5931] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" HandleID="k8s-pod-network.f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Workload="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:01:04.528495 containerd[1456]: 2025-07-07 00:01:04.520 [INFO][5931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" HandleID="k8s-pod-network.f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Workload="localhost-k8s-coredns--674b8bbfcf--gx8x8-eth0" Jul 7 00:01:04.528495 containerd[1456]: 2025-07-07 00:01:04.522 [INFO][5931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:04.528495 containerd[1456]: 2025-07-07 00:01:04.525 [INFO][5922] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958" Jul 7 00:01:04.536532 containerd[1456]: time="2025-07-07T00:01:04.528440295Z" level=info msg="TearDown network for sandbox \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\" successfully" Jul 7 00:01:04.541099 containerd[1456]: time="2025-07-07T00:01:04.541065953Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:04.541145 containerd[1456]: time="2025-07-07T00:01:04.541134492Z" level=info msg="RemovePodSandbox \"f63c80954f66cedddaa7c98bcfe7b937eb46e20cc77f8de823dbd008b5387958\" returns successfully" Jul 7 00:01:04.619239 systemd[1]: Started sshd@14-10.0.0.146:22-10.0.0.1:43994.service - OpenSSH per-connection server daemon (10.0.0.1:43994). Jul 7 00:01:04.659133 sshd[5940]: Accepted publickey for core from 10.0.0.1 port 43994 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:01:04.660678 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:04.664873 systemd-logind[1440]: New session 15 of user core. Jul 7 00:01:04.670049 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:01:04.977299 sshd[5940]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:04.994161 systemd[1]: sshd@14-10.0.0.146:22-10.0.0.1:43994.service: Deactivated successfully. Jul 7 00:01:04.996622 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:01:04.998321 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:01:05.004427 systemd[1]: Started sshd@15-10.0.0.146:22-10.0.0.1:44000.service - OpenSSH per-connection server daemon (10.0.0.1:44000). Jul 7 00:01:05.006354 systemd-logind[1440]: Removed session 15. Jul 7 00:01:05.039641 sshd[5958]: Accepted publickey for core from 10.0.0.1 port 44000 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:01:05.041635 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:05.047868 systemd-logind[1440]: New session 16 of user core. Jul 7 00:01:05.052494 systemd[1]: Started session-16.scope - Session 16 of User core. 
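
[Editor's note] The force-stop traces above all repeat one pattern: cni-plugin tears down the netns, then ipam_plugin takes the host-wide IPAM lock, tries to release the allocation by its handle ID, logs "Asked to release address but it doesn't exist. Ignoring" when that handle is already gone, falls back to releasing by workload ID, and drops the lock. A minimal Go sketch of that control flow; `releaseByHandle` and `releaseByWorkload` are hypothetical stand-ins for libcalico-go's IPAM client, so this mirrors only the logged sequence, not Calico's actual implementation:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// errNotFound stands in for the "address doesn't exist" condition in the log.
var errNotFound = errors.New("allocation not found")

var hostIPAMLock sync.Mutex // the "host-wide IPAM lock" in the trace

// Hypothetical stand-ins for libcalico-go IPAM calls; only the
// fallback structure below is taken from the logged sequence.
func releaseByHandle(handleID string) error     { return errNotFound }
func releaseByWorkload(workloadID string) error { return nil }

// releaseAddresses reproduces the logged order of operations:
// acquire lock -> release by handleID -> on a miss, release by
// workloadID -> release the lock.
func releaseAddresses(handleID, workloadID string) error {
	hostIPAMLock.Lock()
	defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."

	err := releaseByHandle(handleID)
	if err == nil {
		return nil
	}
	if !errors.Is(err, errNotFound) {
		return err
	}
	// "Asked to release address but it doesn't exist. Ignoring"
	return releaseByWorkload(workloadID)
}

func main() {
	err := releaseAddresses("k8s-pod-network.example-handle", "localhost-k8s-example-eth0")
	fmt.Println("teardown processing complete, err =", err)
}
```

Serializing every release behind the coarse host-wide lock is presumably what makes the double teardown seen here (StopPodSandbox followed by "Forcibly stopping sandbox") safe: the second pass finds nothing to free and produces only WARNINGs, never an error.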
Jul 7 00:01:05.082193 containerd[1456]: time="2025-07-07T00:01:05.082126937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 00:01:05.085678 containerd[1456]: time="2025-07-07T00:01:05.085643243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.526869891s" Jul 7 00:01:05.085735 containerd[1456]: time="2025-07-07T00:01:05.085677968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 00:01:05.086050 containerd[1456]: time="2025-07-07T00:01:05.085821479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:05.086585 containerd[1456]: time="2025-07-07T00:01:05.086551457Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:05.087191 containerd[1456]: time="2025-07-07T00:01:05.087124844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:05.090880 containerd[1456]: time="2025-07-07T00:01:05.090853529Z" level=info msg="CreateContainer within sandbox \"884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 00:01:05.113665 containerd[1456]: time="2025-07-07T00:01:05.113590517Z" level=info msg="CreateContainer within sandbox \"884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2a2a381244d481d71ed38f900e747f4f8ec283bc0e8435a3bc31b872bc009d7e\"" Jul 7 00:01:05.114476 containerd[1456]: time="2025-07-07T00:01:05.114410425Z" level=info msg="StartContainer for \"2a2a381244d481d71ed38f900e747f4f8ec283bc0e8435a3bc31b872bc009d7e\"" Jul 7 00:01:05.156101 systemd[1]: Started cri-containerd-2a2a381244d481d71ed38f900e747f4f8ec283bc0e8435a3bc31b872bc009d7e.scope - libcontainer container 2a2a381244d481d71ed38f900e747f4f8ec283bc0e8435a3bc31b872bc009d7e. Jul 7 00:01:05.252960 containerd[1456]: time="2025-07-07T00:01:05.252816806Z" level=info msg="StartContainer for \"2a2a381244d481d71ed38f900e747f4f8ec283bc0e8435a3bc31b872bc009d7e\" returns successfully" Jul 7 00:01:05.253938 containerd[1456]: time="2025-07-07T00:01:05.253903644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 00:01:05.305903 sshd[5958]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:05.313209 systemd[1]: sshd@15-10.0.0.146:22-10.0.0.1:44000.service: Deactivated successfully. Jul 7 00:01:05.315238 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:01:05.315980 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:01:05.326310 systemd[1]: Started sshd@16-10.0.0.146:22-10.0.0.1:44010.service - OpenSSH per-connection server daemon (10.0.0.1:44010). Jul 7 00:01:05.326920 systemd-logind[1440]: Removed session 16. 
Jul 7 00:01:05.361637 sshd[6009]: Accepted publickey for core from 10.0.0.1 port 44010 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:01:05.363237 sshd[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:05.367767 systemd-logind[1440]: New session 17 of user core. Jul 7 00:01:05.377095 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:01:06.312752 sshd[6009]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:06.321487 systemd[1]: sshd@16-10.0.0.146:22-10.0.0.1:44010.service: Deactivated successfully. Jul 7 00:01:06.325798 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:01:06.329761 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:01:06.336443 systemd[1]: Started sshd@17-10.0.0.146:22-10.0.0.1:44026.service - OpenSSH per-connection server daemon (10.0.0.1:44026). Jul 7 00:01:06.337467 systemd-logind[1440]: Removed session 17. Jul 7 00:01:06.375438 sshd[6027]: Accepted publickey for core from 10.0.0.1 port 44026 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:01:06.377058 sshd[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:06.381022 systemd-logind[1440]: New session 18 of user core. Jul 7 00:01:06.387080 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 00:01:06.672710 sshd[6027]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:06.687144 systemd[1]: sshd@17-10.0.0.146:22-10.0.0.1:44026.service: Deactivated successfully. Jul 7 00:01:06.689112 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:01:06.693757 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:01:06.701492 systemd[1]: Started sshd@18-10.0.0.146:22-10.0.0.1:44032.service - OpenSSH per-connection server daemon (10.0.0.1:44032). Jul 7 00:01:06.702531 systemd-logind[1440]: Removed session 18. 
Jul 7 00:01:06.733247 containerd[1456]: time="2025-07-07T00:01:06.733209348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:06.733958 containerd[1456]: time="2025-07-07T00:01:06.733876506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 00:01:06.735043 containerd[1456]: time="2025-07-07T00:01:06.735011327Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:06.737357 containerd[1456]: time="2025-07-07T00:01:06.737318573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:06.737905 containerd[1456]: time="2025-07-07T00:01:06.737876781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.483943171s" Jul 7 00:01:06.737985 containerd[1456]: time="2025-07-07T00:01:06.737912570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 00:01:06.740019 sshd[6045]: Accepted publickey for core from 10.0.0.1 port 44032 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:01:06.741928 sshd[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:06.743751 containerd[1456]: time="2025-07-07T00:01:06.743715937Z" level=info msg="CreateContainer within sandbox \"884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 00:01:06.747543 systemd-logind[1440]: New session 19 of user core. Jul 7 00:01:06.752077 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:01:06.770456 containerd[1456]: time="2025-07-07T00:01:06.770406516Z" level=info msg="CreateContainer within sandbox \"884682f74b2416e49c40e2d921ee28ed71eafdbe02b3188bfb9d71dacd8d51c2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"74bf1d6a2e9da3c22a5ddc9856da397a09aeff16385f4486c6659d7c9b8f8448\"" Jul 7 00:01:06.771027 containerd[1456]: time="2025-07-07T00:01:06.770999261Z" level=info msg="StartContainer for \"74bf1d6a2e9da3c22a5ddc9856da397a09aeff16385f4486c6659d7c9b8f8448\"" Jul 7 00:01:06.803134 systemd[1]: Started cri-containerd-74bf1d6a2e9da3c22a5ddc9856da397a09aeff16385f4486c6659d7c9b8f8448.scope - libcontainer container 74bf1d6a2e9da3c22a5ddc9856da397a09aeff16385f4486c6659d7c9b8f8448. 
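
[Editor's note] The PullImage, CreateContainer, and StartContainer lines above are containerd's CRI service driving the calico/csi and node-driver-registrar containers into the existing pod sandbox. Outside the CRI path, the same image-pull and container lifecycle can be exercised with containerd's public Go client; a minimal sketch, with the image ref taken from the log but the container ID and snapshot name illustrative (a real CRI pod sandbox setup involves far more than this):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same daemon that emitted the containerd[1456] lines.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Equivalent of the logged PullImage (pull and unpack the image).
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.30.2",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of CreateContainer: metadata, snapshot, and OCI spec.
	container, err := client.NewContainer(ctx, "calico-csi-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-csi-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Equivalent of StartContainer: create the task, then start it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("StartContainer returned successfully")
}
```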
Jul 7 00:01:06.842281 containerd[1456]: time="2025-07-07T00:01:06.842104004Z" level=info msg="StartContainer for \"74bf1d6a2e9da3c22a5ddc9856da397a09aeff16385f4486c6659d7c9b8f8448\" returns successfully" Jul 7 00:01:06.902198 sshd[6045]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:06.906667 systemd[1]: sshd@18-10.0.0.146:22-10.0.0.1:44032.service: Deactivated successfully. Jul 7 00:01:06.908844 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:01:06.909502 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:01:06.910404 systemd-logind[1440]: Removed session 19. Jul 7 00:01:07.286629 kubelet[2515]: I0707 00:01:07.286583 2515 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 00:01:07.287868 kubelet[2515]: I0707 00:01:07.287839 2515 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 00:01:07.501801 kubelet[2515]: I0707 00:01:07.501590 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qnpjz" podStartSLOduration=31.897347837 podStartE2EDuration="48.501557418s" podCreationTimestamp="2025-07-07 00:00:19 +0000 UTC" firstStartedPulling="2025-07-07 00:00:50.134569332 +0000 UTC m=+47.051302227" lastFinishedPulling="2025-07-07 00:01:06.738778913 +0000 UTC m=+63.655511808" observedRunningTime="2025-07-07 00:01:07.494411461 +0000 UTC m=+64.411144356" watchObservedRunningTime="2025-07-07 00:01:07.501557418 +0000 UTC m=+64.418290313" Jul 7 00:01:11.915112 systemd[1]: Started sshd@19-10.0.0.146:22-10.0.0.1:39300.service - OpenSSH per-connection server daemon (10.0.0.1:39300). Jul 7 00:01:11.954516 sshd[6104]: Accepted publickey for core from 10.0.0.1 port 39300 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:01:11.956038 sshd[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:11.960073 systemd-logind[1440]: New session 20 of user core. Jul 7 00:01:11.972065 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 00:01:12.096243 sshd[6104]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:12.100570 systemd[1]: sshd@19-10.0.0.146:22-10.0.0.1:39300.service: Deactivated successfully. Jul 7 00:01:12.102799 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:01:12.103586 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:01:12.104422 systemd-logind[1440]: Removed session 20. Jul 7 00:01:12.185872 kubelet[2515]: E0707 00:01:12.185762 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:17.108006 systemd[1]: Started sshd@20-10.0.0.146:22-10.0.0.1:39302.service - OpenSSH per-connection server daemon (10.0.0.1:39302). Jul 7 00:01:17.132253 kubelet[2515]: I0707 00:01:17.132212 2515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:01:17.182789 sshd[6161]: Accepted publickey for core from 10.0.0.1 port 39302 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:01:17.184509 sshd[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:17.189327 systemd-logind[1440]: New session 21 of user core. 
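
[Editor's note] The csi_plugin.go lines above show kubelet's plugin watcher validating and then registering the Tigera CSI driver from its socket under /var/lib/kubelet/plugins/. A driver advertises itself by serving the kubelet plugin-registration gRPC API on a socket kubelet watches; a minimal sketch assuming the k8s.io/kubelet pluginregistration/v1 API, with the name, endpoint, and version taken from the log (this is the generic registration mechanism, not Tigera's actual code):

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

// registration answers kubelet's plugin-watcher probes for this socket.
type registration struct{}

func (registration) GetInfo(ctx context.Context, req *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"}, // matches "versions: 1.0.0" in the log
	}, nil
}

func (registration) NotifyRegistrationStatus(ctx context.Context, status *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	if !status.PluginRegistered {
		log.Printf("kubelet rejected registration: %s", status.Error)
	}
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// kubelet watches this directory and dials any socket that appears in it.
	lis, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	registerapi.RegisterRegistrationServer(srv, registration{})
	log.Fatal(srv.Serve(lis))
}
```

The pod_startup_latency_tracker line in the same stretch is internally consistent: the 48.501557418s E2E duration spans pod creation at 00:00:19 to observed running at 00:01:07.501, and the 31.897347837s SLO duration is that figure minus the 16.604s image-pull window (firstStartedPulling 00:00:50.134 to lastFinishedPulling 00:01:06.738), i.e. startup latency excluding pull time.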
Jul 7 00:01:17.202079 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:01:17.325760 sshd[6161]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:17.330147 systemd[1]: sshd@20-10.0.0.146:22-10.0.0.1:39302.service: Deactivated successfully. Jul 7 00:01:17.332342 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:01:17.333070 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:01:17.334051 systemd-logind[1440]: Removed session 21. Jul 7 00:01:19.186029 kubelet[2515]: E0707 00:01:19.185990 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:20.186042 kubelet[2515]: E0707 00:01:20.186004 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:22.345329 systemd[1]: Started sshd@21-10.0.0.146:22-10.0.0.1:42054.service - OpenSSH per-connection server daemon (10.0.0.1:42054). Jul 7 00:01:22.389304 sshd[6179]: Accepted publickey for core from 10.0.0.1 port 42054 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 7 00:01:22.391230 sshd[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:22.395570 systemd-logind[1440]: New session 22 of user core. Jul 7 00:01:22.405133 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:01:22.701702 sshd[6179]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:22.706274 systemd[1]: sshd@21-10.0.0.146:22-10.0.0.1:42054.service: Deactivated successfully. Jul 7 00:01:22.708217 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:01:22.708863 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:01:22.709732 systemd-logind[1440]: Removed session 22. Jul 7 00:01:23.115399 kubelet[2515]: I0707 00:01:23.115210 2515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
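
[Editor's note] The recurring dns.go "Nameserver limits exceeded" events are kubelet warning that the node's resolv.conf lists more nameservers than the classic glibc resolver limit of three, so it keeps only the first three (here 1.1.1.1, 1.0.0.1, and 8.8.8.8) when building pod DNS config. A sketch of that capping behavior, assuming the limit of 3 implied by the applied line in the log rather than kubelet's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the glibc MAXNS limit of 3 that kubelet enforces.
const maxNameservers = 3

// capNameservers keeps the first maxNameservers entries and reports whether
// any were dropped, which is the condition that produces the logged warning.
func capNameservers(servers []string) (applied []string, exceeded bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// Example: a resolv.conf with four nameservers, one more than allowed.
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	applied, exceeded := capNameservers(servers)
	if exceeded {
		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```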