Jul 14 22:08:49.922589 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 20:23:49 -00 2025
Jul 14 22:08:49.922611 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:08:49.922623 kernel: BIOS-provided physical RAM map:
Jul 14 22:08:49.922629 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 14 22:08:49.922636 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 14 22:08:49.922642 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 14 22:08:49.922649 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 14 22:08:49.922656 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 14 22:08:49.922662 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 14 22:08:49.922671 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 14 22:08:49.922677 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 14 22:08:49.922684 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 14 22:08:49.922690 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 14 22:08:49.922696 kernel: NX (Execute Disable) protection: active
Jul 14 22:08:49.922704 kernel: APIC: Static calls initialized
Jul 14 22:08:49.922713 kernel: SMBIOS 2.8 present.
Jul 14 22:08:49.922720 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 14 22:08:49.922727 kernel: Hypervisor detected: KVM
Jul 14 22:08:49.922734 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 14 22:08:49.922741 kernel: kvm-clock: using sched offset of 2315368849 cycles
Jul 14 22:08:49.922748 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 14 22:08:49.922755 kernel: tsc: Detected 2794.748 MHz processor
Jul 14 22:08:49.922762 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 14 22:08:49.922770 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 14 22:08:49.922777 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 14 22:08:49.922786 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 14 22:08:49.922793 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 14 22:08:49.922800 kernel: Using GB pages for direct mapping
Jul 14 22:08:49.922807 kernel: ACPI: Early table checksum verification disabled
Jul 14 22:08:49.922814 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 14 22:08:49.922821 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:08:49.922828 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:08:49.922835 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:08:49.922844 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 14 22:08:49.922851 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:08:49.922858 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:08:49.922865 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:08:49.922872 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:08:49.922879 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 14 22:08:49.922886 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 14 22:08:49.922898 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 14 22:08:49.922907 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 14 22:08:49.922914 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 14 22:08:49.922922 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 14 22:08:49.922929 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 14 22:08:49.922936 kernel: No NUMA configuration found
Jul 14 22:08:49.922943 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 14 22:08:49.922951 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 14 22:08:49.922960 kernel: Zone ranges:
Jul 14 22:08:49.922968 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 14 22:08:49.922982 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 14 22:08:49.922989 kernel: Normal empty
Jul 14 22:08:49.922997 kernel: Movable zone start for each node
Jul 14 22:08:49.923004 kernel: Early memory node ranges
Jul 14 22:08:49.923011 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 14 22:08:49.923018 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 14 22:08:49.923026 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 14 22:08:49.923036 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 22:08:49.923043 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 14 22:08:49.923050 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 14 22:08:49.923057 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 14 22:08:49.923065 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 14 22:08:49.923072 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 14 22:08:49.923079 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 14 22:08:49.923087 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 14 22:08:49.923094 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 14 22:08:49.923103 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 14 22:08:49.923111 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 14 22:08:49.923118 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 14 22:08:49.923125 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 14 22:08:49.923132 kernel: TSC deadline timer available
Jul 14 22:08:49.923140 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 14 22:08:49.923147 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 14 22:08:49.923154 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 14 22:08:49.923162 kernel: kvm-guest: setup PV sched yield
Jul 14 22:08:49.923169 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 14 22:08:49.923178 kernel: Booting paravirtualized kernel on KVM
Jul 14 22:08:49.923186 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 14 22:08:49.923193 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 14 22:08:49.923201 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 14 22:08:49.923208 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 14 22:08:49.923215 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 14 22:08:49.923223 kernel: kvm-guest: PV spinlocks enabled
Jul 14 22:08:49.923230 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 14 22:08:49.923238 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:08:49.923249 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 22:08:49.923256 kernel: random: crng init done
Jul 14 22:08:49.923263 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 22:08:49.923270 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 22:08:49.923278 kernel: Fallback order for Node 0: 0
Jul 14 22:08:49.923285 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 14 22:08:49.923292 kernel: Policy zone: DMA32
Jul 14 22:08:49.923299 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 22:08:49.923310 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 136900K reserved, 0K cma-reserved)
Jul 14 22:08:49.923317 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 22:08:49.923324 kernel: ftrace: allocating 37970 entries in 149 pages
Jul 14 22:08:49.923332 kernel: ftrace: allocated 149 pages with 4 groups
Jul 14 22:08:49.923339 kernel: Dynamic Preempt: voluntary
Jul 14 22:08:49.923346 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 22:08:49.923354 kernel: rcu: RCU event tracing is enabled.
Jul 14 22:08:49.923362 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 22:08:49.923369 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 22:08:49.923379 kernel: Rude variant of Tasks RCU enabled.
Jul 14 22:08:49.923387 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 22:08:49.923394 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 22:08:49.923401 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 22:08:49.923409 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 14 22:08:49.923416 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 14 22:08:49.923423 kernel: Console: colour VGA+ 80x25
Jul 14 22:08:49.923430 kernel: printk: console [ttyS0] enabled
Jul 14 22:08:49.923437 kernel: ACPI: Core revision 20230628
Jul 14 22:08:49.923448 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 14 22:08:49.923455 kernel: APIC: Switch to symmetric I/O mode setup
Jul 14 22:08:49.923462 kernel: x2apic enabled
Jul 14 22:08:49.923469 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 14 22:08:49.923477 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 14 22:08:49.923484 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 14 22:08:49.923492 kernel: kvm-guest: setup PV IPIs
Jul 14 22:08:49.923527 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 14 22:08:49.923535 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 14 22:08:49.923543 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 14 22:08:49.923550 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 14 22:08:49.923558 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 14 22:08:49.923567 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 14 22:08:49.923575 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 14 22:08:49.923583 kernel: Spectre V2 : Mitigation: Retpolines
Jul 14 22:08:49.923590 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 14 22:08:49.923598 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 14 22:08:49.923608 kernel: RETBleed: Mitigation: untrained return thunk
Jul 14 22:08:49.923615 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 14 22:08:49.923623 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 14 22:08:49.923631 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 14 22:08:49.923639 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 14 22:08:49.923646 kernel: x86/bugs: return thunk changed
Jul 14 22:08:49.923654 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 14 22:08:49.923662 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 14 22:08:49.923672 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 14 22:08:49.923679 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 14 22:08:49.923687 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 14 22:08:49.923694 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 14 22:08:49.923702 kernel: Freeing SMP alternatives memory: 32K
Jul 14 22:08:49.923710 kernel: pid_max: default: 32768 minimum: 301
Jul 14 22:08:49.923717 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 22:08:49.923725 kernel: landlock: Up and running.
Jul 14 22:08:49.923732 kernel: SELinux: Initializing.
Jul 14 22:08:49.923742 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 22:08:49.923750 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 22:08:49.923758 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 14 22:08:49.923765 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:08:49.923773 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:08:49.923781 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:08:49.923788 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 14 22:08:49.923796 kernel: ... version: 0
Jul 14 22:08:49.923803 kernel: ... bit width: 48
Jul 14 22:08:49.923813 kernel: ... generic registers: 6
Jul 14 22:08:49.923821 kernel: ... value mask: 0000ffffffffffff
Jul 14 22:08:49.923829 kernel: ... max period: 00007fffffffffff
Jul 14 22:08:49.923836 kernel: ... fixed-purpose events: 0
Jul 14 22:08:49.923843 kernel: ... event mask: 000000000000003f
Jul 14 22:08:49.923851 kernel: signal: max sigframe size: 1776
Jul 14 22:08:49.923858 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 22:08:49.923866 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 22:08:49.923874 kernel: smp: Bringing up secondary CPUs ...
Jul 14 22:08:49.923883 kernel: smpboot: x86: Booting SMP configuration:
Jul 14 22:08:49.923891 kernel: .... node #0, CPUs: #1 #2 #3
Jul 14 22:08:49.923898 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 22:08:49.923906 kernel: smpboot: Max logical packages: 1
Jul 14 22:08:49.923914 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 14 22:08:49.923921 kernel: devtmpfs: initialized
Jul 14 22:08:49.923929 kernel: x86/mm: Memory block size: 128MB
Jul 14 22:08:49.923936 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 22:08:49.923944 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 22:08:49.923953 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 22:08:49.923961 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 22:08:49.923976 kernel: audit: initializing netlink subsys (disabled)
Jul 14 22:08:49.923983 kernel: audit: type=2000 audit(1752530928.915:1): state=initialized audit_enabled=0 res=1
Jul 14 22:08:49.923991 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 22:08:49.923999 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 14 22:08:49.924006 kernel: cpuidle: using governor menu
Jul 14 22:08:49.924014 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 22:08:49.924021 kernel: dca service started, version 1.12.1
Jul 14 22:08:49.924031 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 14 22:08:49.924040 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 14 22:08:49.924048 kernel: PCI: Using configuration type 1 for base access
Jul 14 22:08:49.924055 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 14 22:08:49.924063 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 22:08:49.924070 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 22:08:49.924078 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 22:08:49.924086 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 22:08:49.924093 kernel: ACPI: Added _OSI(Module Device)
Jul 14 22:08:49.924103 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 22:08:49.924111 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 22:08:49.924118 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 22:08:49.924126 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 14 22:08:49.924133 kernel: ACPI: Interpreter enabled
Jul 14 22:08:49.924141 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 14 22:08:49.924148 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 14 22:08:49.924156 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 14 22:08:49.924164 kernel: PCI: Using E820 reservations for host bridge windows
Jul 14 22:08:49.924174 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 14 22:08:49.924181 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 22:08:49.924373 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 22:08:49.924533 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 14 22:08:49.924662 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 14 22:08:49.924672 kernel: PCI host bridge to bus 0000:00
Jul 14 22:08:49.924799 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 14 22:08:49.924922 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 14 22:08:49.925043 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 14 22:08:49.925154 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 14 22:08:49.925263 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 14 22:08:49.925373 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 14 22:08:49.925484 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 22:08:49.925637 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 14 22:08:49.925821 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 14 22:08:49.925967 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 14 22:08:49.926102 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 14 22:08:49.926224 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 14 22:08:49.926344 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 14 22:08:49.926475 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 22:08:49.926622 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 14 22:08:49.926744 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 14 22:08:49.926866 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 14 22:08:49.927005 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 14 22:08:49.927130 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 14 22:08:49.927254 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 14 22:08:49.927375 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 14 22:08:49.927537 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 14 22:08:49.927665 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 14 22:08:49.927788 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 14 22:08:49.927927 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 14 22:08:49.928073 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 14 22:08:49.928204 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 14 22:08:49.928325 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 14 22:08:49.928459 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 14 22:08:49.928600 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 14 22:08:49.928720 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 14 22:08:49.928850 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 14 22:08:49.929009 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 14 22:08:49.929024 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 14 22:08:49.929035 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 14 22:08:49.929049 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 14 22:08:49.929060 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 14 22:08:49.929070 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 14 22:08:49.929080 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 14 22:08:49.929091 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 14 22:08:49.929101 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 14 22:08:49.929111 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 14 22:08:49.929119 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 14 22:08:49.929127 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 14 22:08:49.929137 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 14 22:08:49.929145 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 14 22:08:49.929153 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 14 22:08:49.929160 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 14 22:08:49.929168 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 14 22:08:49.929175 kernel: iommu: Default domain type: Translated
Jul 14 22:08:49.929183 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 14 22:08:49.929191 kernel: PCI: Using ACPI for IRQ routing
Jul 14 22:08:49.929198 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 14 22:08:49.929208 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 14 22:08:49.929216 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 14 22:08:49.929350 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 14 22:08:49.929472 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 14 22:08:49.929608 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 14 22:08:49.929619 kernel: vgaarb: loaded
Jul 14 22:08:49.929627 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 14 22:08:49.929635 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 14 22:08:49.929646 kernel: clocksource: Switched to clocksource kvm-clock
Jul 14 22:08:49.929654 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 22:08:49.929662 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 22:08:49.929669 kernel: pnp: PnP ACPI init
Jul 14 22:08:49.929804 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 14 22:08:49.929816 kernel: pnp: PnP ACPI: found 6 devices
Jul 14 22:08:49.929824 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 14 22:08:49.929831 kernel: NET: Registered PF_INET protocol family
Jul 14 22:08:49.929842 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 22:08:49.929851 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 22:08:49.929860 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 22:08:49.929870 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 22:08:49.929879 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 14 22:08:49.929889 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 22:08:49.929898 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 22:08:49.929908 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 22:08:49.929918 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 22:08:49.929930 kernel: NET: Registered PF_XDP protocol family
Jul 14 22:08:49.930054 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 14 22:08:49.930177 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 14 22:08:49.930314 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 14 22:08:49.930429 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 14 22:08:49.930555 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 14 22:08:49.930666 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 14 22:08:49.930675 kernel: PCI: CLS 0 bytes, default 64
Jul 14 22:08:49.930688 kernel: Initialise system trusted keyrings
Jul 14 22:08:49.930696 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 22:08:49.930703 kernel: Key type asymmetric registered
Jul 14 22:08:49.930711 kernel: Asymmetric key parser 'x509' registered
Jul 14 22:08:49.930719 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 14 22:08:49.930726 kernel: io scheduler mq-deadline registered
Jul 14 22:08:49.930734 kernel: io scheduler kyber registered
Jul 14 22:08:49.930741 kernel: io scheduler bfq registered
Jul 14 22:08:49.930749 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 14 22:08:49.930759 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 14 22:08:49.930767 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 14 22:08:49.930774 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 14 22:08:49.930782 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 22:08:49.930798 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 14 22:08:49.930807 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 14 22:08:49.930828 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 14 22:08:49.930837 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 14 22:08:49.930981 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 14 22:08:49.930996 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 14 22:08:49.931112 kernel: rtc_cmos 00:04: registered as rtc0
Jul 14 22:08:49.931227 kernel: rtc_cmos 00:04: setting system clock to 2025-07-14T22:08:49 UTC (1752530929)
Jul 14 22:08:49.931341 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 14 22:08:49.931351 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 14 22:08:49.931358 kernel: NET: Registered PF_INET6 protocol family
Jul 14 22:08:49.931366 kernel: Segment Routing with IPv6
Jul 14 22:08:49.931373 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 22:08:49.931384 kernel: NET: Registered PF_PACKET protocol family
Jul 14 22:08:49.931392 kernel: Key type dns_resolver registered
Jul 14 22:08:49.931399 kernel: IPI shorthand broadcast: enabled
Jul 14 22:08:49.931407 kernel: sched_clock: Marking stable (678003684, 116607883)->(854214503, -59602936)
Jul 14 22:08:49.931415 kernel: registered taskstats version 1
Jul 14 22:08:49.931422 kernel: Loading compiled-in X.509 certificates
Jul 14 22:08:49.931430 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: ff10e110ca3923b510cf0133f4e9f48dd636b870'
Jul 14 22:08:49.931438 kernel: Key type .fscrypt registered
Jul 14 22:08:49.931445 kernel: Key type fscrypt-provisioning registered
Jul 14 22:08:49.931456 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 22:08:49.931463 kernel: ima: Allocated hash algorithm: sha1
Jul 14 22:08:49.931471 kernel: ima: No architecture policies found
Jul 14 22:08:49.931478 kernel: clk: Disabling unused clocks
Jul 14 22:08:49.931486 kernel: Freeing unused kernel image (initmem) memory: 42876K
Jul 14 22:08:49.931494 kernel: Write protecting the kernel read-only data: 36864k
Jul 14 22:08:49.931570 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 14 22:08:49.931578 kernel: Run /init as init process
Jul 14 22:08:49.931586 kernel: with arguments:
Jul 14 22:08:49.931597 kernel: /init
Jul 14 22:08:49.931604 kernel: with environment:
Jul 14 22:08:49.931611 kernel: HOME=/
Jul 14 22:08:49.931619 kernel: TERM=linux
Jul 14 22:08:49.931626 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 22:08:49.931636 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 22:08:49.931646 systemd[1]: Detected virtualization kvm.
Jul 14 22:08:49.931654 systemd[1]: Detected architecture x86-64.
Jul 14 22:08:49.931665 systemd[1]: Running in initrd.
Jul 14 22:08:49.931673 systemd[1]: No hostname configured, using default hostname.
Jul 14 22:08:49.931681 systemd[1]: Hostname set to .
Jul 14 22:08:49.931689 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 22:08:49.931697 systemd[1]: Queued start job for default target initrd.target.
Jul 14 22:08:49.931705 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:08:49.931714 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:08:49.931722 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 14 22:08:49.931734 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 22:08:49.931754 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 14 22:08:49.931765 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 14 22:08:49.931775 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 14 22:08:49.931786 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 14 22:08:49.931795 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:08:49.931803 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:08:49.931811 systemd[1]: Reached target paths.target - Path Units.
Jul 14 22:08:49.931820 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 22:08:49.931828 systemd[1]: Reached target swap.target - Swaps.
Jul 14 22:08:49.931836 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 22:08:49.931844 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 22:08:49.931853 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 22:08:49.931863 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 22:08:49.931872 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 14 22:08:49.931880 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:08:49.931888 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:08:49.931897 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:08:49.931906 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 22:08:49.931914 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 14 22:08:49.931922 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 22:08:49.931931 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 14 22:08:49.931941 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 22:08:49.931952 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 22:08:49.931960 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 22:08:49.931975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:08:49.931983 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 14 22:08:49.931992 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:08:49.932000 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 22:08:49.932030 systemd-journald[192]: Collecting audit messages is disabled.
Jul 14 22:08:49.932051 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 22:08:49.932060 systemd-journald[192]: Journal started
Jul 14 22:08:49.932081 systemd-journald[192]: Runtime Journal (/run/log/journal/04059004366a4366a7627f3eaf8a1da8) is 6.0M, max 48.4M, 42.3M free.
Jul 14 22:08:49.916829 systemd-modules-load[193]: Inserted module 'overlay'
Jul 14 22:08:49.950675 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 22:08:49.950691 kernel: Bridge firewalling registered
Jul 14 22:08:49.944188 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jul 14 22:08:49.955128 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 22:08:49.955661 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:08:49.957952 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:08:49.960378 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 22:08:49.977697 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:08:49.980957 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 22:08:49.983550 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 22:08:49.987438 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 22:08:49.997821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:08:49.999860 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:08:50.003580 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:08:50.006322 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:08:50.015767 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 14 22:08:50.018213 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 22:08:50.028244 dracut-cmdline[228]: dracut-dracut-053
Jul 14 22:08:50.032159 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:08:50.052763 systemd-resolved[232]: Positive Trust Anchors:
Jul 14 22:08:50.052780 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 22:08:50.052811 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 22:08:50.055476 systemd-resolved[232]: Defaulting to hostname 'linux'.
Jul 14 22:08:50.056672 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 22:08:50.062858 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:08:50.126563 kernel: SCSI subsystem initialized
Jul 14 22:08:50.136533 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 22:08:50.147539 kernel: iscsi: registered transport (tcp)
Jul 14 22:08:50.169779 kernel: iscsi: registered transport (qla4xxx)
Jul 14 22:08:50.169883 kernel: QLogic iSCSI HBA Driver
Jul 14 22:08:50.227468 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 14 22:08:50.233828 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 14 22:08:50.262180 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 22:08:50.262271 kernel: device-mapper: uevent: version 1.0.3
Jul 14 22:08:50.262284 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 14 22:08:50.306554 kernel: raid6: avx2x4 gen() 28998 MB/s
Jul 14 22:08:50.323526 kernel: raid6: avx2x2 gen() 30577 MB/s
Jul 14 22:08:50.385637 kernel: raid6: avx2x1 gen() 22120 MB/s
Jul 14 22:08:50.385697 kernel: raid6: using algorithm avx2x2 gen() 30577 MB/s
Jul 14 22:08:50.403594 kernel: raid6: .... xor() 19035 MB/s, rmw enabled
Jul 14 22:08:50.403635 kernel: raid6: using avx2x2 recovery algorithm
Jul 14 22:08:50.425551 kernel: xor: automatically using best checksumming function avx
Jul 14 22:08:50.584559 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 14 22:08:50.599709 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 22:08:50.620836 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:08:50.638665 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Jul 14 22:08:50.644273 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:08:50.650808 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 14 22:08:50.666514 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Jul 14 22:08:50.704981 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 22:08:50.718689 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 22:08:50.787332 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:08:50.796815 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 22:08:50.807178 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 22:08:50.812136 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 22:08:50.813585 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:08:50.814059 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 22:08:50.819713 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 14 22:08:50.831453 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 22:08:50.837544 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 14 22:08:50.837927 kernel: cryptd: max_cpu_qlen set to 1000
Jul 14 22:08:50.841612 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 22:08:50.846415 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 22:08:50.846438 kernel: GPT:9289727 != 19775487
Jul 14 22:08:50.846451 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 22:08:50.846463 kernel: GPT:9289727 != 19775487
Jul 14 22:08:50.846489 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 22:08:50.846529 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:08:50.847536 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 14 22:08:50.851515 kernel: AES CTR mode by8 optimization enabled
Jul 14 22:08:50.865579 kernel: libata version 3.00 loaded.
Jul 14 22:08:50.873623 kernel: ahci 0000:00:1f.2: version 3.0
Jul 14 22:08:50.876005 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 22:08:50.879893 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 14 22:08:50.879926 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 14 22:08:50.880133 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 14 22:08:50.876349 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:08:50.884615 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:08:50.887302 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 22:08:50.889727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:08:50.897003 kernel: BTRFS: device fsid d23b6972-ad36-4741-bf36-4d440b923127 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (481)
Jul 14 22:08:50.897027 kernel: scsi host0: ahci
Jul 14 22:08:50.897223 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461)
Jul 14 22:08:50.897243 kernel: scsi host1: ahci
Jul 14 22:08:50.897406 kernel: scsi host2: ahci
Jul 14 22:08:50.895636 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:08:50.905668 kernel: scsi host3: ahci
Jul 14 22:08:50.905911 kernel: scsi host4: ahci
Jul 14 22:08:50.906131 kernel: scsi host5: ahci
Jul 14 22:08:50.906313 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jul 14 22:08:50.906332 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jul 14 22:08:50.906350 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jul 14 22:08:50.906361 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jul 14 22:08:50.906373 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jul 14 22:08:50.907451 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jul 14 22:08:50.909438 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:08:50.928490 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 14 22:08:50.965317 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:08:50.973264 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 14 22:08:50.983113 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 22:08:50.988981 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 14 22:08:50.991491 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 14 22:08:51.012895 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 14 22:08:51.014693 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:08:51.037987 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:08:51.220535 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 14 22:08:51.220610 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 14 22:08:51.221920 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 14 22:08:51.222012 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 14 22:08:51.223553 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 14 22:08:51.223640 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 14 22:08:51.224545 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 14 22:08:51.226002 kernel: ata3.00: applying bridge limits
Jul 14 22:08:51.226028 kernel: ata3.00: configured for UDMA/100
Jul 14 22:08:51.226531 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 14 22:08:51.274105 disk-uuid[569]: Primary Header is updated.
Jul 14 22:08:51.274105 disk-uuid[569]: Secondary Entries is updated.
Jul 14 22:08:51.274105 disk-uuid[569]: Secondary Header is updated.
Jul 14 22:08:51.278558 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 14 22:08:51.278823 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 14 22:08:51.278847 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:08:51.283528 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:08:51.288541 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 14 22:08:52.285522 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:08:52.285908 disk-uuid[579]: The operation has completed successfully.
Jul 14 22:08:52.312920 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 14 22:08:52.313046 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 14 22:08:52.342862 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 14 22:08:52.347852 sh[594]: Success
Jul 14 22:08:52.362543 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 14 22:08:52.397301 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 14 22:08:52.413827 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 14 22:08:52.416739 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 14 22:08:52.433288 kernel: BTRFS info (device dm-0): first mount of filesystem d23b6972-ad36-4741-bf36-4d440b923127
Jul 14 22:08:52.433339 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:08:52.433351 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 14 22:08:52.434318 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 14 22:08:52.435212 kernel: BTRFS info (device dm-0): using free space tree
Jul 14 22:08:52.441265 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 14 22:08:52.442585 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 14 22:08:52.443646 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 14 22:08:52.447369 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 14 22:08:52.461639 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:08:52.461701 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:08:52.461712 kernel: BTRFS info (device vda6): using free space tree
Jul 14 22:08:52.465540 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 22:08:52.475003 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 14 22:08:52.477058 kernel: BTRFS info (device vda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:08:52.489257 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 14 22:08:52.501730 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 14 22:08:52.588640 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 22:08:52.598771 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 22:08:52.600670 ignition[692]: Ignition 2.19.0
Jul 14 22:08:52.600678 ignition[692]: Stage: fetch-offline
Jul 14 22:08:52.600724 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:08:52.600737 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:08:52.600823 ignition[692]: parsed url from cmdline: ""
Jul 14 22:08:52.600827 ignition[692]: no config URL provided
Jul 14 22:08:52.600833 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Jul 14 22:08:52.600844 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Jul 14 22:08:52.600876 ignition[692]: op(1): [started] loading QEMU firmware config module
Jul 14 22:08:52.600883 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 14 22:08:52.610245 ignition[692]: op(1): [finished] loading QEMU firmware config module
Jul 14 22:08:52.626456 systemd-networkd[780]: lo: Link UP
Jul 14 22:08:52.626467 systemd-networkd[780]: lo: Gained carrier
Jul 14 22:08:52.628364 systemd-networkd[780]: Enumeration completed
Jul 14 22:08:52.628841 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:08:52.628846 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 22:08:52.632142 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 22:08:52.634722 systemd[1]: Reached target network.target - Network.
Jul 14 22:08:52.635866 systemd-networkd[780]: eth0: Link UP
Jul 14 22:08:52.635871 systemd-networkd[780]: eth0: Gained carrier
Jul 14 22:08:52.635885 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:08:52.659636 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 22:08:52.669079 ignition[692]: parsing config with SHA512: cba468b39527673130f52ed91ab29aaf26df871005e0f55013dd5e6df1f0b2abcab7e99908d3e7626c2c345a321467e0e954ef92738210204bf36aa5726a2535
Jul 14 22:08:52.678215 unknown[692]: fetched base config from "system"
Jul 14 22:08:52.678236 unknown[692]: fetched user config from "qemu"
Jul 14 22:08:52.678753 ignition[692]: fetch-offline: fetch-offline passed
Jul 14 22:08:52.678817 ignition[692]: Ignition finished successfully
Jul 14 22:08:52.684034 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 22:08:52.686834 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 14 22:08:52.698863 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 14 22:08:52.719143 ignition[787]: Ignition 2.19.0
Jul 14 22:08:52.719154 ignition[787]: Stage: kargs
Jul 14 22:08:52.719353 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:08:52.719367 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:08:52.720386 ignition[787]: kargs: kargs passed
Jul 14 22:08:52.720441 ignition[787]: Ignition finished successfully
Jul 14 22:08:52.727797 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 14 22:08:52.741850 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 14 22:08:52.756799 ignition[795]: Ignition 2.19.0
Jul 14 22:08:52.756811 ignition[795]: Stage: disks
Jul 14 22:08:52.757001 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:08:52.757013 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:08:52.758058 ignition[795]: disks: disks passed
Jul 14 22:08:52.758108 ignition[795]: Ignition finished successfully
Jul 14 22:08:52.763866 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 14 22:08:52.766259 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 14 22:08:52.766857 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 22:08:52.767200 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 22:08:52.767524 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 22:08:52.772821 systemd[1]: Reached target basic.target - Basic System.
Jul 14 22:08:52.785683 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 14 22:08:52.801041 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.59
Jul 14 22:08:52.801056 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Jul 14 22:08:52.802406 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 14 22:08:52.809783 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 14 22:08:52.816717 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 14 22:08:52.913545 kernel: EXT4-fs (vda9): mounted filesystem dda007d3-640b-4d11-976f-3b761ca7aabd r/w with ordered data mode. Quota mode: none.
Jul 14 22:08:52.914202 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 14 22:08:52.915997 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 14 22:08:52.927729 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 22:08:52.930093 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 14 22:08:52.931693 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 14 22:08:52.931765 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 22:08:52.939263 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Jul 14 22:08:52.939283 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:08:52.931800 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 22:08:52.943328 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:08:52.943347 kernel: BTRFS info (device vda6): using free space tree
Jul 14 22:08:52.940303 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 14 22:08:52.946655 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 22:08:52.956780 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 14 22:08:52.959278 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 22:08:52.998658 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jul 14 22:08:53.006095 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jul 14 22:08:53.010997 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Jul 14 22:08:53.015556 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 14 22:08:53.110979 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 14 22:08:53.127660 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 14 22:08:53.129415 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 14 22:08:53.140529 kernel: BTRFS info (device vda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:08:53.154206 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 14 22:08:53.250495 ignition[931]: INFO : Ignition 2.19.0
Jul 14 22:08:53.250495 ignition[931]: INFO : Stage: mount
Jul 14 22:08:53.252300 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 22:08:53.252300 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:08:53.255164 ignition[931]: INFO : mount: mount passed
Jul 14 22:08:53.255993 ignition[931]: INFO : Ignition finished successfully
Jul 14 22:08:53.258998 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 14 22:08:53.271619 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 14 22:08:53.432984 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 14 22:08:53.446755 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 22:08:53.458536 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Jul 14 22:08:53.458584 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:08:53.458596 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:08:53.459532 kernel: BTRFS info (device vda6): using free space tree
Jul 14 22:08:53.464535 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 22:08:53.465686 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 22:08:53.494079 ignition[957]: INFO : Ignition 2.19.0
Jul 14 22:08:53.494079 ignition[957]: INFO : Stage: files
Jul 14 22:08:53.496033 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 22:08:53.496033 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:08:53.496033 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Jul 14 22:08:53.500162 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 14 22:08:53.500162 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 14 22:08:53.503536 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 14 22:08:53.505198 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 14 22:08:53.507056 unknown[957]: wrote ssh authorized keys file for user: core
Jul 14 22:08:53.508570 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 14 22:08:53.509994 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 14 22:08:53.509994 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 14 22:08:53.509994 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 14 22:08:53.509994 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 14 22:08:53.891701 systemd-networkd[780]: eth0: Gained IPv6LL
Jul 14 22:09:03.580209 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 14 22:09:03.822808 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 14 22:09:03.822808 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 14 22:09:03.827556 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 14 22:09:03.827556 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 22:09:03.827556 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 22:09:03.827556 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 22:09:03.827556 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 22:09:03.827556 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 22:09:03.827556 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 22:09:03.827556 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 22:09:03.827556 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 22:09:03.827556 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 22:09:03.827556 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 22:09:03.827556 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 22:09:03.827556 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 14 22:09:34.423016 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 14 22:09:34.819373 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 22:09:34.819373 ignition[957]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 14 22:09:34.823069 ignition[957]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 14 22:09:34.823069 ignition[957]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 14 22:09:34.823069 ignition[957]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 14 22:09:34.823069 ignition[957]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 14 22:09:34.823069 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 22:09:34.823069 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 22:09:34.823069 ignition[957]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 14 22:09:34.823069 ignition[957]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jul 14 22:09:34.823069 ignition[957]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 22:09:34.823069 ignition[957]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 22:09:34.823069 ignition[957]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jul 14 22:09:34.823069 ignition[957]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jul 14 22:09:34.857359 ignition[957]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 22:09:34.862433 ignition[957]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 22:09:34.864368 ignition[957]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 14 22:09:34.864368 ignition[957]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jul 14 22:09:34.864368 ignition[957]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jul 14 22:09:34.864368 ignition[957]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 22:09:34.864368 ignition[957]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 22:09:34.864368 ignition[957]: INFO : files: files passed
Jul 14 22:09:34.864368 ignition[957]: INFO : Ignition finished successfully
Jul 14 22:09:34.865785 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 14 22:09:34.875966 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 14 22:09:34.878785 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 14 22:09:34.881079 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 22:09:34.881203 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 14 22:09:34.892597 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 14 22:09:34.896387 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:09:34.896387 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:09:34.901121 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:09:34.899500 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 22:09:34.901965 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 14 22:09:34.910905 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 14 22:09:34.939979 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 22:09:34.940243 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 14 22:09:34.941263 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 14 22:09:34.944027 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 14 22:09:34.944381 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 14 22:09:34.945645 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 14 22:09:34.968739 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 22:09:34.978937 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 14 22:09:34.989908 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:09:34.991440 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:09:34.993948 systemd[1]: Stopped target timers.target - Timer Units.
Jul 14 22:09:34.996148 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 22:09:34.996293 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 22:09:34.998771 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 14 22:09:35.000420 systemd[1]: Stopped target basic.target - Basic System.
Jul 14 22:09:35.002557 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
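The whole files stage above is driven by a machine-provided Ignition config. The config itself is not in the log, but a sketch of one that would produce these numbered op(N) operations looks roughly like this (spec version, SSH key, and file contents are assumptions, abbreviated):

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [{ "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"] }]
      },
      "storage": {
        "files": [
          { "path": "/home/core/install.sh", "contents": { "source": "data:,..." } },
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw",
            "contents": { "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\n..." },
          { "name": "coreos-metadata.service", "enabled": false },
          { "name": "containerd.service",
            "dropins": [{ "name": "10-use-cgroupfs.conf", "contents": "[Service]\n..." }] }
        ]
      }
    }

Each file, link, unit, and drop-in entry becomes one op(N) in the log, and the enabled/disabled flags become the "setting preset" operations.
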
Jul 14 22:09:35.004677 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 22:09:35.006776 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 14 22:09:35.009253 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 14 22:09:35.011596 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 22:09:35.014231 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 14 22:09:35.016613 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 14 22:09:35.019626 systemd[1]: Stopped target swap.target - Swaps.
Jul 14 22:09:35.021750 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 14 22:09:35.021930 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 22:09:35.024043 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:09:35.025602 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:09:35.027835 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 14 22:09:35.028012 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:09:35.030132 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 14 22:09:35.030313 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 14 22:09:35.032453 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 14 22:09:35.032577 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 22:09:35.034537 systemd[1]: Stopped target paths.target - Path Units.
Jul 14 22:09:35.036260 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 14 22:09:35.036439 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:09:35.039032 systemd[1]: Stopped target slices.target - Slice Units.
Jul 14 22:09:35.040973 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 14 22:09:35.043081 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 14 22:09:35.043231 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 22:09:35.045298 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 14 22:09:35.045437 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 22:09:35.047533 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 14 22:09:35.047684 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 22:09:35.049706 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 14 22:09:35.049854 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 14 22:09:35.060832 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 14 22:09:35.063224 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 14 22:09:35.063402 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:09:35.067170 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 14 22:09:35.069111 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 14 22:09:35.077432 ignition[1012]: INFO : Ignition 2.19.0
Jul 14 22:09:35.077432 ignition[1012]: INFO : Stage: umount
Jul 14 22:09:35.077432 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 22:09:35.077432 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:09:35.069317 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:09:35.087732 ignition[1012]: INFO : umount: umount passed
Jul 14 22:09:35.087732 ignition[1012]: INFO : Ignition finished successfully
Jul 14 22:09:35.071656 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 14 22:09:35.071791 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 22:09:35.078453 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 14 22:09:35.078618 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 14 22:09:35.081173 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 14 22:09:35.081308 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 14 22:09:35.086186 systemd[1]: Stopped target network.target - Network.
Jul 14 22:09:35.087693 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 14 22:09:35.087783 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 14 22:09:35.089856 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 14 22:09:35.089919 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 14 22:09:35.092029 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 14 22:09:35.092093 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 14 22:09:35.094218 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 14 22:09:35.094271 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 14 22:09:35.096838 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 14 22:09:35.099357 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 14 22:09:35.102577 systemd-networkd[780]: eth0: DHCPv6 lease lost
Jul 14 22:09:35.102915 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 14 22:09:35.103610 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 14 22:09:35.103748 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 14 22:09:35.106595 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 14 22:09:35.106770 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 14 22:09:35.109206 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 14 22:09:35.109369 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 14 22:09:35.112925 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 14 22:09:35.113022 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:09:35.114759 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 14 22:09:35.114829 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 14 22:09:35.130740 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 14 22:09:35.131811 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 14 22:09:35.131901 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 22:09:35.134345 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 14 22:09:35.134401 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:09:35.136859 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 14 22:09:35.136924 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:09:35.138198 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 14 22:09:35.138249 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:09:35.140768 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:09:35.155048 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 14 22:09:35.155205 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 14 22:09:35.157431 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 14 22:09:35.157625 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:09:35.160584 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 14 22:09:35.160667 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:09:35.165680 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 14 22:09:35.165758 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:09:35.167772 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 14 22:09:35.167853 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 22:09:35.170463 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 14 22:09:35.170559 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 14 22:09:35.172653 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 22:09:35.172720 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:09:35.182755 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 14 22:09:35.184075 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 14 22:09:35.184161 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:09:35.186749 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 22:09:35.186801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:09:35.192398 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 14 22:09:35.192555 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 14 22:09:35.194978 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 14 22:09:35.201802 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 14 22:09:35.213595 systemd[1]: Switching root.
Jul 14 22:09:35.244679 systemd-journald[192]: Journal stopped
Jul 14 22:09:37.201642 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Jul 14 22:09:37.201721 kernel: SELinux: policy capability network_peer_controls=1
Jul 14 22:09:37.201746 kernel: SELinux: policy capability open_perms=1
Jul 14 22:09:37.201763 kernel: SELinux: policy capability extended_socket_class=1
Jul 14 22:09:37.201778 kernel: SELinux: policy capability always_check_network=0
Jul 14 22:09:37.201790 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 14 22:09:37.201802 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 14 22:09:37.201813 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 14 22:09:37.201824 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 14 22:09:37.201846 kernel: audit: type=1403 audit(1752530976.074:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 14 22:09:37.201885 systemd[1]: Successfully loaded SELinux policy in 40.553ms.
Jul 14 22:09:37.201917 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.890ms.
Jul 14 22:09:37.201934 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 22:09:37.201954 systemd[1]: Detected virtualization kvm.
Jul 14 22:09:37.201968 systemd[1]: Detected architecture x86-64.
Jul 14 22:09:37.202282 systemd[1]: Detected first boot.
Jul 14 22:09:37.202304 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 22:09:37.202316 zram_generator::config[1076]: No configuration found.
Jul 14 22:09:37.202330 systemd[1]: Populated /etc with preset unit settings.
Jul 14 22:09:37.202343 systemd[1]: Queued start job for default target multi-user.target.
Jul 14 22:09:37.202355 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 14 22:09:37.202371 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 14 22:09:37.202384 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 14 22:09:37.202396 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 14 22:09:37.202407 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 14 22:09:37.202421 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 14 22:09:37.202433 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 14 22:09:37.202446 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 14 22:09:37.202458 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 14 22:09:37.202470 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:09:37.202485 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:09:37.202511 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 14 22:09:37.202524 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 14 22:09:37.202536 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 14 22:09:37.202548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 22:09:37.202561 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 14 22:09:37.202573 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:09:37.202586 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 14 22:09:37.202598 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:09:37.202620 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 22:09:37.202633 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 22:09:37.202645 systemd[1]: Reached target swap.target - Swaps.
Jul 14 22:09:37.202658 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 14 22:09:37.202670 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 14 22:09:37.202682 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 22:09:37.202694 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 14 22:09:37.202708 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:09:37.202723 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:09:37.202735 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:09:37.202747 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 14 22:09:37.202759 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 14 22:09:37.202771 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 14 22:09:37.202783 systemd[1]: Mounting media.mount - External Media Directory...
Jul 14 22:09:37.202796 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:09:37.202808 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 14 22:09:37.202820 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 14 22:09:37.202843 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 14 22:09:37.202856 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 14 22:09:37.202868 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:09:37.202880 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 22:09:37.202892 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 14 22:09:37.202905 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:09:37.202917 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 22:09:37.202930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:09:37.202944 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 14 22:09:37.202957 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:09:37.202969 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 14 22:09:37.202983 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 14 22:09:37.202996 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 14 22:09:37.203008 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 22:09:37.203019 kernel: fuse: init (API version 7.39)
Jul 14 22:09:37.203031 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 22:09:37.203043 kernel: loop: module loaded
Jul 14 22:09:37.203057 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 14 22:09:37.203069 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 14 22:09:37.203089 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 22:09:37.203103 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:09:37.203140 systemd-journald[1157]: Collecting audit messages is disabled.
Jul 14 22:09:37.203164 systemd-journald[1157]: Journal started
Jul 14 22:09:37.203192 systemd-journald[1157]: Runtime Journal (/run/log/journal/04059004366a4366a7627f3eaf8a1da8) is 6.0M, max 48.4M, 42.3M free.
Jul 14 22:09:37.209016 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 22:09:37.209094 kernel: ACPI: bus type drm_connector registered
Jul 14 22:09:37.211577 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 14 22:09:37.213435 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 14 22:09:37.215013 systemd[1]: Mounted media.mount - External Media Directory.
Jul 14 22:09:37.216191 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 14 22:09:37.217497 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 14 22:09:37.218899 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 14 22:09:37.220420 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:09:37.222666 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 14 22:09:37.222982 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 14 22:09:37.224963 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 14 22:09:37.227162 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:09:37.227460 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:09:37.230070 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 22:09:37.231144 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 22:09:37.232966 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:09:37.233238 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:09:37.234970 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 14 22:09:37.235200 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 14 22:09:37.236693 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:09:37.237183 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:09:37.239208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:09:37.274199 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 14 22:09:37.276048 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
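The modprobe@*.service units here are instances of systemd's modprobe@.service template, which does nothing more than load the module named by the instance specifier. The stock template is approximately (quoted from memory, trimmed):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i

so starting modprobe@loop.service runs 'modprobe -abq loop', which matches the 'loop: module loaded' kernel line above.
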
Jul 14 22:09:37.290980 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 14 22:09:37.307817 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 14 22:09:37.311585 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 14 22:09:37.313118 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 14 22:09:37.316856 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 14 22:09:37.321729 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 14 22:09:37.326705 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 22:09:37.334879 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 14 22:09:37.336722 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 22:09:37.340668 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 22:09:37.343349 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 22:09:37.345693 systemd-journald[1157]: Time spent on flushing to /var/log/journal/04059004366a4366a7627f3eaf8a1da8 is 17.239ms for 940 entries.
Jul 14 22:09:37.345693 systemd-journald[1157]: System Journal (/var/log/journal/04059004366a4366a7627f3eaf8a1da8) is 8.0M, max 195.6M, 187.6M free.
Jul 14 22:09:37.419083 systemd-journald[1157]: Received client request to flush runtime journal.
Jul 14 22:09:37.346709 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:09:37.348250 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 14 22:09:37.349671 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 14 22:09:37.365031 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 14 22:09:37.389606 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 14 22:09:37.395759 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 14 22:09:37.397887 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:09:37.399581 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 14 22:09:37.409720 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
Jul 14 22:09:37.409739 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
Jul 14 22:09:37.416999 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 22:09:37.430788 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 14 22:09:37.434682 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 14 22:09:37.523929 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 14 22:09:37.541978 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 22:09:37.564018 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Jul 14 22:09:37.564045 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
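systemd-journal-flush.service is what moves the runtime journal out of /run/log/journal and into the persistent /var/log/journal seen above; the size caps in the log (max 48.4M runtime, 195.6M system) are computed defaults rather than configured values. If fixed limits were wanted they would go in /etc/systemd/journald.conf, e.g. (illustrative values, not taken from this system):

    [Journal]
    Storage=persistent
    SystemMaxUse=200M
    RuntimeMaxUse=50M
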
Jul 14 22:09:37.572634 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:09:38.269981 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 14 22:09:38.284967 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:09:38.315406 systemd-udevd[1240]: Using default interface naming scheme 'v255'.
Jul 14 22:09:38.334025 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:09:38.342680 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 22:09:38.355665 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 14 22:09:38.377245 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jul 14 22:09:38.401926 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1256)
Jul 14 22:09:38.433167 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 14 22:09:38.473417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 22:09:38.482563 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 14 22:09:38.513996 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 14 22:09:38.514337 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 14 22:09:38.514584 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 14 22:09:38.526933 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 14 22:09:38.549612 systemd-networkd[1245]: lo: Link UP
Jul 14 22:09:38.549624 systemd-networkd[1245]: lo: Gained carrier
Jul 14 22:09:38.552922 systemd-networkd[1245]: Enumeration completed
Jul 14 22:09:38.553081 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 22:09:38.553815 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:09:38.553820 systemd-networkd[1245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 22:09:38.554967 systemd-networkd[1245]: eth0: Link UP
Jul 14 22:09:38.555025 systemd-networkd[1245]: eth0: Gained carrier
Jul 14 22:09:38.555039 systemd-networkd[1245]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:09:38.560592 kernel: ACPI: button: Power Button [PWRF]
Jul 14 22:09:38.563746 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 14 22:09:38.571640 systemd-networkd[1245]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 22:09:38.657539 kernel: mousedev: PS/2 mouse device common for all mice
Jul 14 22:09:38.657988 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:09:38.676656 kernel: kvm_amd: TSC scaling supported
Jul 14 22:09:38.676782 kernel: kvm_amd: Nested Virtualization enabled
Jul 14 22:09:38.676814 kernel: kvm_amd: Nested Paging enabled
Jul 14 22:09:38.677615 kernel: kvm_amd: LBR virtualization supported
Jul 14 22:09:38.677641 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 14 22:09:38.678619 kernel: kvm_amd: Virtual GIF supported
Jul 14 22:09:38.706542 kernel: EDAC MC: Ver: 3.0.0
Jul 14 22:09:38.747612 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
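eth0 matched the catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd warns about the 'potentially unpredictable interface name'. A policy of that shape is just a match-broadly, DHCP-everything file (illustrative sketch, not the verbatim Flatcar unit):

    [Match]
    Name=en* eth*

    [Network]
    DHCP=yes

The DHCPv4 line that follows (10.0.0.59/16 via 10.0.0.1) is the result of that policy.
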
Jul 14 22:09:38.764731 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:09:38.777783 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 14 22:09:38.788543 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 22:09:38.830004 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 14 22:09:38.832512 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:09:38.848739 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 14 22:09:38.853920 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 22:09:38.888229 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 14 22:09:38.933270 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 22:09:38.934813 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 14 22:09:38.934836 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 22:09:38.936050 systemd[1]: Reached target machines.target - Containers.
Jul 14 22:09:38.938368 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 14 22:09:38.949653 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 14 22:09:38.952438 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 14 22:09:38.953803 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:09:38.954977 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 14 22:09:38.957825 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 14 22:09:38.963604 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 14 22:09:38.966924 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 14 22:09:38.990325 kernel: loop0: detected capacity change from 0 to 140768
Jul 14 22:09:38.992262 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 14 22:09:39.009717 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 14 22:09:39.028532 kernel: loop1: detected capacity change from 0 to 221472
Jul 14 22:09:39.129545 kernel: loop2: detected capacity change from 0 to 142488
Jul 14 22:09:39.165529 kernel: loop3: detected capacity change from 0 to 140768
Jul 14 22:09:39.168969 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 14 22:09:39.170137 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 14 22:09:39.189525 kernel: loop4: detected capacity change from 0 to 221472
Jul 14 22:09:39.197521 kernel: loop5: detected capacity change from 0 to 142488
Jul 14 22:09:39.207425 (sd-merge)[1308]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 14 22:09:39.208225 (sd-merge)[1308]: Merged extensions into '/usr'.
Jul 14 22:09:39.213351 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 14 22:09:39.213927 systemd[1]: Reloading...
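The (sd-merge) lines are systemd-sysext merging the extension images activated under /etc/extensions (including the kubernetes.raw link Ignition wrote earlier) into an overlay over /usr. For an image to be accepted it has to carry an extension-release file whose ID matches the host's os-release; the expected layout inside an image like kubernetes.raw is roughly (a sketch, with the kubelet path as an assumption):

    usr/lib/extension-release.d/extension-release.kubernetes
        ID=flatcar          # must match the host's os-release ID (or ID=_any)
        SYSEXT_LEVEL=1.0    # optional compatibility gate
    usr/bin/kubelet         # payload that appears under /usr after the merge

At runtime, 'systemd-sysext status' shows which hierarchies are merged and from which images.
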
Jul 14 22:09:39.278531 zram_generator::config[1338]: No configuration found.
Jul 14 22:09:39.392398 ldconfig[1293]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 14 22:09:39.424428 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 22:09:39.500806 systemd[1]: Reloading finished in 286 ms.
Jul 14 22:09:39.523862 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 14 22:09:39.525557 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 14 22:09:39.547843 systemd[1]: Starting ensure-sysext.service...
Jul 14 22:09:39.550581 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 22:09:39.557220 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)...
Jul 14 22:09:39.557237 systemd[1]: Reloading...
Jul 14 22:09:39.586660 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 14 22:09:39.587134 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 14 22:09:39.588197 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 14 22:09:39.588531 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Jul 14 22:09:39.588618 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Jul 14 22:09:39.592239 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 22:09:39.592252 systemd-tmpfiles[1383]: Skipping /boot
Jul 14 22:09:39.606326 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 22:09:39.606487 systemd-tmpfiles[1383]: Skipping /boot
Jul 14 22:09:39.620679 zram_generator::config[1410]: No configuration found.
Jul 14 22:09:39.753625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 22:09:39.826207 systemd[1]: Reloading finished in 268 ms.
Jul 14 22:09:39.847252 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:09:39.865724 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 14 22:09:39.868984 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 14 22:09:39.871997 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 14 22:09:39.877111 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 22:09:39.880729 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 14 22:09:39.888476 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:09:39.888727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:09:39.894460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:09:39.901681 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:09:39.906584 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:09:39.908096 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:09:39.908307 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:09:39.910069 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:09:39.910368 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:09:39.951810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:09:39.952096 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:09:39.954788 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:09:39.955067 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:09:39.955369 augenrules[1482]: No rules
Jul 14 22:09:39.957459 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 14 22:09:39.960112 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 14 22:09:39.991448 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 14 22:09:40.002146 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:09:40.002401 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:09:40.004364 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:09:40.008953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 22:09:40.024812 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:09:40.027674 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:09:40.040904 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:09:40.042734 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 14 22:09:40.043897 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:09:40.044837 systemd[1]: Finished ensure-sysext.service.
Jul 14 22:09:40.046153 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 14 22:09:40.047980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:09:40.048203 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:09:40.050035 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 22:09:40.050306 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 22:09:40.051953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:09:40.052216 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:09:40.054129 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:09:40.054379 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:09:40.059167 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 14 22:09:40.065394 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 22:09:40.065530 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 22:09:40.075802 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 14 22:09:40.089953 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 14 22:09:40.106542 systemd-resolved[1461]: Positive Trust Anchors:
Jul 14 22:09:40.106564 systemd-resolved[1461]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 22:09:40.106606 systemd-resolved[1461]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 22:09:40.111245 systemd-resolved[1461]: Defaulting to hostname 'linux'.
Jul 14 22:09:40.113702 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 22:09:40.115975 systemd[1]: Reached target network.target - Network.
Jul 14 22:09:40.116968 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:09:40.155584 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 14 22:09:40.156951 systemd-timesyncd[1519]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 14 22:09:40.157000 systemd-timesyncd[1519]: Initial clock synchronization to Mon 2025-07-14 22:09:40.307966 UTC.
Jul 14 22:09:40.157950 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 22:09:40.159704 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 14 22:09:40.161358 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 14 22:09:40.163437 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 14 22:09:40.163900 systemd-networkd[1245]: eth0: Gained IPv6LL
Jul 14 22:09:40.165333 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 14 22:09:40.165373 systemd[1]: Reached target paths.target - Path Units.
Jul 14 22:09:40.166695 systemd[1]: Reached target time-set.target - System Time Set.
Jul 14 22:09:40.168453 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 14 22:09:40.170036 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 14 22:09:40.171658 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 22:09:40.173738 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 14 22:09:40.177483 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 14 22:09:40.180202 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
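systemd-timesyncd took its server from the DHCP lease here (10.0.0.1:123). Pinning servers instead would be a small change in /etc/systemd/timesyncd.conf; the server names below are placeholders, not values from this system:

    [Time]
    NTP=ntp.example.org
    FallbackNTP=0.pool.ntp.org 1.pool.ntp.org
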
Jul 14 22:09:40.186081 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 14 22:09:40.188816 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 14 22:09:40.190140 systemd[1]: Reached target network-online.target - Network is Online.
Jul 14 22:09:40.191380 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 22:09:40.192483 systemd[1]: Reached target basic.target - Basic System.
Jul 14 22:09:40.193740 systemd[1]: System is tainted: cgroupsv1
Jul 14 22:09:40.193811 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 14 22:09:40.193848 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 14 22:09:40.195628 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 14 22:09:40.198361 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 14 22:09:40.201573 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 14 22:09:40.204177 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 14 22:09:40.209806 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 14 22:09:40.211454 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 14 22:09:40.217577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:09:40.217965 jq[1528]: false
Jul 14 22:09:40.223202 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 14 22:09:40.230101 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 14 22:09:40.232291 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 14 22:09:40.235575 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 14 22:09:40.235922 extend-filesystems[1531]: Found loop3
Jul 14 22:09:40.235922 extend-filesystems[1531]: Found loop4
Jul 14 22:09:40.243308 extend-filesystems[1531]: Found loop5
Jul 14 22:09:40.243308 extend-filesystems[1531]: Found sr0
Jul 14 22:09:40.243308 extend-filesystems[1531]: Found vda
Jul 14 22:09:40.243308 extend-filesystems[1531]: Found vda1
Jul 14 22:09:40.243308 extend-filesystems[1531]: Found vda2
Jul 14 22:09:40.243308 extend-filesystems[1531]: Found vda3
Jul 14 22:09:40.243308 extend-filesystems[1531]: Found usr
Jul 14 22:09:40.243308 extend-filesystems[1531]: Found vda4
Jul 14 22:09:40.243308 extend-filesystems[1531]: Found vda6
Jul 14 22:09:40.243308 extend-filesystems[1531]: Found vda7
Jul 14 22:09:40.243308 extend-filesystems[1531]: Found vda9
Jul 14 22:09:40.243308 extend-filesystems[1531]: Checking size of /dev/vda9
Jul 14 22:09:40.247718 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 14 22:09:40.249419 dbus-daemon[1527]: [system] SELinux support is enabled
Jul 14 22:09:40.257567 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 14 22:09:40.258423 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 14 22:09:40.263800 systemd[1]: Starting update-engine.service - Update Engine...
Jul 14 22:09:40.269649 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 14 22:09:40.270836 extend-filesystems[1531]: Resized partition /dev/vda9
Jul 14 22:09:40.275362 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 14 22:09:40.281818 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 14 22:09:40.282170 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 14 22:09:40.299533 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1252)
Jul 14 22:09:40.299607 extend-filesystems[1556]: resize2fs 1.47.1 (20-May-2024)
Jul 14 22:09:40.331810 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 14 22:09:40.293300 systemd[1]: motdgen.service: Deactivated successfully.
Jul 14 22:09:40.331961 update_engine[1550]: I20250714 22:09:40.322834 1550 main.cc:92] Flatcar Update Engine starting
Jul 14 22:09:40.331961 update_engine[1550]: I20250714 22:09:40.324275 1550 update_check_scheduler.cc:74] Next update check in 10m13s
Jul 14 22:09:40.334459 jq[1554]: true
Jul 14 22:09:40.301846 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 14 22:09:40.325429 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 14 22:09:40.325911 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 14 22:09:40.348371 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 14 22:09:40.351614 (ntainerd)[1574]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 14 22:09:40.386320 jq[1571]: true
Jul 14 22:09:40.475805 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 14 22:09:40.476280 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 14 22:09:40.488952 systemd-logind[1545]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 14 22:09:40.490295 systemd-logind[1545]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 14 22:09:40.507222 systemd-logind[1545]: New seat seat0.
Jul 14 22:09:40.511466 tar[1563]: linux-amd64/helm
Jul 14 22:09:40.519188 systemd[1]: Started update-engine.service - Update Engine.
Jul 14 22:09:40.536347 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 14 22:09:40.545118 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 14 22:09:40.561892 sshd_keygen[1566]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 14 22:09:40.579366 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 14 22:09:40.579658 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 14 22:09:40.581370 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 14 22:09:40.581525 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 14 22:09:40.583654 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
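extend-filesystems is growing the root filesystem online here: the vda9 partition was already enlarged, and resize2fs then expands the mounted ext4 filesystem into it (553472 to 1864699 4k blocks, per the kernel line above). The manual equivalent of the logged operation is a single command against the mounted device:

    resize2fs /dev/vda9    # on-line grow of a mounted ext4 fs to fill its partition

The completion messages for this resize appear a little later in the log, interleaved with other units.
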
Jul 14 22:09:40.642554 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 22:09:40.643896 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 14 22:09:40.650763 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 22:09:40.671011 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 22:09:40.680011 locksmithd[1614]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 22:09:40.683086 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 22:09:40.683547 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 22:09:40.708942 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 22:09:40.726075 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 22:09:40.735744 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 22:09:40.739052 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 14 22:09:40.740635 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 22:09:40.787728 extend-filesystems[1556]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 22:09:40.787728 extend-filesystems[1556]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 22:09:40.787728 extend-filesystems[1556]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 22:09:40.795820 extend-filesystems[1531]: Resized filesystem in /dev/vda9 Jul 14 22:09:40.796849 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 22:09:40.797323 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 22:09:40.815265 bash[1605]: Updated "/home/core/.ssh/authorized_keys" Jul 14 22:09:40.817327 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 22:09:40.822003 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 22:09:40.999461 containerd[1574]: time="2025-07-14T22:09:40.998978383Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 14 22:09:41.056272 containerd[1574]: time="2025-07-14T22:09:41.056008469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:09:41.060554 containerd[1574]: time="2025-07-14T22:09:41.059792529Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:09:41.060554 containerd[1574]: time="2025-07-14T22:09:41.060081550Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 22:09:41.060554 containerd[1574]: time="2025-07-14T22:09:41.060111031Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 22:09:41.060554 containerd[1574]: time="2025-07-14T22:09:41.060400623Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 14 22:09:41.060554 containerd[1574]: time="2025-07-14T22:09:41.060429694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jul 14 22:09:41.060800 containerd[1574]: time="2025-07-14T22:09:41.060777024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:09:41.060871 containerd[1574]: time="2025-07-14T22:09:41.060846132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:09:41.061537 containerd[1574]: time="2025-07-14T22:09:41.061494976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:09:41.061613 containerd[1574]: time="2025-07-14T22:09:41.061597097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 22:09:41.061683 containerd[1574]: time="2025-07-14T22:09:41.061665062Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:09:41.061742 containerd[1574]: time="2025-07-14T22:09:41.061727259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 22:09:41.062005 containerd[1574]: time="2025-07-14T22:09:41.061983298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:09:41.063374 containerd[1574]: time="2025-07-14T22:09:41.063347419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:09:41.063682 containerd[1574]: time="2025-07-14T22:09:41.063658713Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:09:41.063768 containerd[1574]: time="2025-07-14T22:09:41.063749902Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 22:09:41.064021 containerd[1574]: time="2025-07-14T22:09:41.063997314Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 14 22:09:41.064281 containerd[1574]: time="2025-07-14T22:09:41.064232252Z" level=info msg="metadata content store policy set" policy=shared Jul 14 22:09:41.073811 containerd[1574]: time="2025-07-14T22:09:41.073732050Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 22:09:41.074029 containerd[1574]: time="2025-07-14T22:09:41.074011598Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 22:09:41.074164 containerd[1574]: time="2025-07-14T22:09:41.074143128Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 22:09:41.074271 containerd[1574]: time="2025-07-14T22:09:41.074255120Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 22:09:41.074440 containerd[1574]: time="2025-07-14T22:09:41.074424289Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jul 14 22:09:41.074754 containerd[1574]: time="2025-07-14T22:09:41.074733163Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 22:09:41.075600 containerd[1574]: time="2025-07-14T22:09:41.075389448Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 22:09:41.075708 containerd[1574]: time="2025-07-14T22:09:41.075676121Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 22:09:41.075787 containerd[1574]: time="2025-07-14T22:09:41.075770433Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 14 22:09:41.075865 containerd[1574]: time="2025-07-14T22:09:41.075850179Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 14 22:09:41.075926 containerd[1574]: time="2025-07-14T22:09:41.075913039Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 22:09:41.075992 containerd[1574]: time="2025-07-14T22:09:41.075977534Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 22:09:41.076056 containerd[1574]: time="2025-07-14T22:09:41.076043111Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 22:09:41.076131 containerd[1574]: time="2025-07-14T22:09:41.076116374Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 22:09:41.076213 containerd[1574]: time="2025-07-14T22:09:41.076196793Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 22:09:41.076274 containerd[1574]: time="2025-07-14T22:09:41.076261910Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 22:09:41.076376 containerd[1574]: time="2025-07-14T22:09:41.076358100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 22:09:41.076444 containerd[1574]: time="2025-07-14T22:09:41.076426443Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 22:09:41.076515 containerd[1574]: time="2025-07-14T22:09:41.076501115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.076598 containerd[1574]: time="2025-07-14T22:09:41.076583626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.076682 containerd[1574]: time="2025-07-14T22:09:41.076666281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.076769 containerd[1574]: time="2025-07-14T22:09:41.076752212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.076835 containerd[1574]: time="2025-07-14T22:09:41.076818043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.076896 containerd[1574]: time="2025-07-14T22:09:41.076882150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 14 22:09:41.076957 containerd[1574]: time="2025-07-14T22:09:41.076944440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.077021 containerd[1574]: time="2025-07-14T22:09:41.077008761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.077094 containerd[1574]: time="2025-07-14T22:09:41.077078645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.077170 containerd[1574]: time="2025-07-14T22:09:41.077146487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.077231 containerd[1574]: time="2025-07-14T22:09:41.077218189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.077287 containerd[1574]: time="2025-07-14T22:09:41.077275364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.077353 containerd[1574]: time="2025-07-14T22:09:41.077339011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.077438 containerd[1574]: time="2025-07-14T22:09:41.077422217Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 14 22:09:41.077515 containerd[1574]: time="2025-07-14T22:09:41.077502126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.077623 containerd[1574]: time="2025-07-14T22:09:41.077605645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.077688 containerd[1574]: time="2025-07-14T22:09:41.077672477Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 22:09:41.077846 containerd[1574]: time="2025-07-14T22:09:41.077829732Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 22:09:41.078047 containerd[1574]: time="2025-07-14T22:09:41.078026931Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 22:09:41.079295 containerd[1574]: time="2025-07-14T22:09:41.078096958Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 22:09:41.079295 containerd[1574]: time="2025-07-14T22:09:41.078124061Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 22:09:41.079295 containerd[1574]: time="2025-07-14T22:09:41.078137423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 22:09:41.079295 containerd[1574]: time="2025-07-14T22:09:41.078157390Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 14 22:09:41.079295 containerd[1574]: time="2025-07-14T22:09:41.078170773Z" level=info msg="NRI interface is disabled by configuration." Jul 14 22:09:41.079295 containerd[1574]: time="2025-07-14T22:09:41.078194568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 14 22:09:41.079441 containerd[1574]: time="2025-07-14T22:09:41.078568917Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 22:09:41.079441 containerd[1574]: time="2025-07-14T22:09:41.078647294Z" level=info msg="Connect containerd service" Jul 14 22:09:41.079441 containerd[1574]: time="2025-07-14T22:09:41.078689382Z" level=info msg="using legacy CRI server" Jul 14 22:09:41.079441 containerd[1574]: time="2025-07-14T22:09:41.078698640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 22:09:41.079441 containerd[1574]: time="2025-07-14T22:09:41.079036935Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 22:09:41.084179 containerd[1574]: time="2025-07-14T22:09:41.084126216Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 
22:09:41.084410 containerd[1574]: time="2025-07-14T22:09:41.084365748Z" level=info msg="Start subscribing containerd event" Jul 14 22:09:41.084526 containerd[1574]: time="2025-07-14T22:09:41.084491837Z" level=info msg="Start recovering state" Jul 14 22:09:41.084672 containerd[1574]: time="2025-07-14T22:09:41.084657330Z" level=info msg="Start event monitor" Jul 14 22:09:41.085275 containerd[1574]: time="2025-07-14T22:09:41.085120224Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 22:09:41.085424 containerd[1574]: time="2025-07-14T22:09:41.085279500Z" level=info msg="Start snapshots syncer" Jul 14 22:09:41.085472 containerd[1574]: time="2025-07-14T22:09:41.085451751Z" level=info msg="Start cni network conf syncer for default" Jul 14 22:09:41.085533 containerd[1574]: time="2025-07-14T22:09:41.085474392Z" level=info msg="Start streaming server" Jul 14 22:09:41.085688 containerd[1574]: time="2025-07-14T22:09:41.085391197Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 22:09:41.085917 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 22:09:41.086355 containerd[1574]: time="2025-07-14T22:09:41.086305695Z" level=info msg="containerd successfully booted in 0.090902s" Jul 14 22:09:41.341896 tar[1563]: linux-amd64/LICENSE Jul 14 22:09:41.342021 tar[1563]: linux-amd64/README.md Jul 14 22:09:41.356471 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 22:09:42.278073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:09:42.280188 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 14 22:09:42.284591 systemd[1]: Startup finished in 47.191s (kernel) + 6.249s (userspace) = 53.441s. Jul 14 22:09:42.308139 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:09:43.025955 kubelet[1661]: E0714 22:09:43.025847 1661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:09:43.030158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:09:43.030496 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:09:43.525091 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 22:09:43.540935 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:38630.service - OpenSSH per-connection server daemon (10.0.0.1:38630). Jul 14 22:09:43.593803 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 38630 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:09:43.596065 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:09:43.605912 systemd-logind[1545]: New session 1 of user core. Jul 14 22:09:43.607322 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 22:09:43.615822 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 22:09:43.629423 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 22:09:43.637981 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 14 22:09:43.641234 (systemd)[1680]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:09:43.748085 systemd[1680]: Queued start job for default target default.target. Jul 14 22:09:43.748534 systemd[1680]: Created slice app.slice - User Application Slice. Jul 14 22:09:43.748552 systemd[1680]: Reached target paths.target - Paths. Jul 14 22:09:43.748565 systemd[1680]: Reached target timers.target - Timers. Jul 14 22:09:43.756693 systemd[1680]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 22:09:43.764217 systemd[1680]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 22:09:43.764312 systemd[1680]: Reached target sockets.target - Sockets. Jul 14 22:09:43.764331 systemd[1680]: Reached target basic.target - Basic System. Jul 14 22:09:43.764402 systemd[1680]: Reached target default.target - Main User Target. Jul 14 22:09:43.764448 systemd[1680]: Startup finished in 115ms. Jul 14 22:09:43.765124 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 22:09:43.767011 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 22:09:43.825843 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:38640.service - OpenSSH per-connection server daemon (10.0.0.1:38640). Jul 14 22:09:43.860804 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 38640 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:09:43.862604 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:09:43.867338 systemd-logind[1545]: New session 2 of user core. Jul 14 22:09:43.880755 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 22:09:43.938189 sshd[1692]: pam_unix(sshd:session): session closed for user core Jul 14 22:09:43.954107 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:38652.service - OpenSSH per-connection server daemon (10.0.0.1:38652). Jul 14 22:09:43.954687 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:38640.service: Deactivated successfully. Jul 14 22:09:43.957469 systemd-logind[1545]: Session 2 logged out. Waiting for processes to exit. Jul 14 22:09:43.958264 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 22:09:43.959918 systemd-logind[1545]: Removed session 2. Jul 14 22:09:43.991224 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 38652 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:09:43.993343 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:09:43.998292 systemd-logind[1545]: New session 3 of user core. Jul 14 22:09:44.013027 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 22:09:44.064861 sshd[1697]: pam_unix(sshd:session): session closed for user core Jul 14 22:09:44.078769 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:38658.service - OpenSSH per-connection server daemon (10.0.0.1:38658). Jul 14 22:09:44.079237 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:38652.service: Deactivated successfully. Jul 14 22:09:44.081837 systemd-logind[1545]: Session 3 logged out. Waiting for processes to exit. Jul 14 22:09:44.082994 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 22:09:44.084027 systemd-logind[1545]: Removed session 3. 
Jul 14 22:09:44.112485 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 38658 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:09:44.114060 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:09:44.118229 systemd-logind[1545]: New session 4 of user core. Jul 14 22:09:44.128792 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 22:09:44.182935 sshd[1705]: pam_unix(sshd:session): session closed for user core Jul 14 22:09:44.201792 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:38674.service - OpenSSH per-connection server daemon (10.0.0.1:38674). Jul 14 22:09:44.202285 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:38658.service: Deactivated successfully. Jul 14 22:09:44.204569 systemd-logind[1545]: Session 4 logged out. Waiting for processes to exit. Jul 14 22:09:44.205426 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 22:09:44.206419 systemd-logind[1545]: Removed session 4. Jul 14 22:09:44.236889 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 38674 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:09:44.238417 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:09:44.242441 systemd-logind[1545]: New session 5 of user core. Jul 14 22:09:44.251763 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 14 22:09:44.309221 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 22:09:44.309577 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:09:44.325859 sudo[1720]: pam_unix(sudo:session): session closed for user root Jul 14 22:09:44.327952 sshd[1713]: pam_unix(sshd:session): session closed for user core Jul 14 22:09:44.336753 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:38690.service - OpenSSH per-connection server daemon (10.0.0.1:38690). Jul 14 22:09:44.337230 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:38674.service: Deactivated successfully. Jul 14 22:09:44.340033 systemd-logind[1545]: Session 5 logged out. Waiting for processes to exit. Jul 14 22:09:44.341155 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 22:09:44.342744 systemd-logind[1545]: Removed session 5. Jul 14 22:09:44.371169 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 38690 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:09:44.372951 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:09:44.377891 systemd-logind[1545]: New session 6 of user core. Jul 14 22:09:44.395917 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 22:09:44.452212 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 22:09:44.452635 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:09:44.457094 sudo[1730]: pam_unix(sudo:session): session closed for user root Jul 14 22:09:44.464832 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 14 22:09:44.465235 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:09:44.485753 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 14 22:09:44.487883 auditctl[1733]: No rules Jul 14 22:09:44.488361 systemd[1]: audit-rules.service: Deactivated successfully. 
Jul 14 22:09:44.488770 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 14 22:09:44.492417 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 22:09:44.533892 augenrules[1752]: No rules Jul 14 22:09:44.536163 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 22:09:44.537845 sudo[1729]: pam_unix(sudo:session): session closed for user root Jul 14 22:09:44.540174 sshd[1722]: pam_unix(sshd:session): session closed for user core Jul 14 22:09:44.548766 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:38706.service - OpenSSH per-connection server daemon (10.0.0.1:38706). Jul 14 22:09:44.549484 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:38690.service: Deactivated successfully. Jul 14 22:09:44.551947 systemd-logind[1545]: Session 6 logged out. Waiting for processes to exit. Jul 14 22:09:44.553447 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 22:09:44.555050 systemd-logind[1545]: Removed session 6. Jul 14 22:09:44.583606 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 38706 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:09:44.585316 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:09:44.590674 systemd-logind[1545]: New session 7 of user core. Jul 14 22:09:44.600832 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 22:09:44.655449 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 22:09:44.655891 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:09:44.968776 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 14 22:09:44.969114 (dockerd)[1783]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 22:09:45.253747 dockerd[1783]: time="2025-07-14T22:09:45.253592052Z" level=info msg="Starting up" Jul 14 22:09:46.661587 dockerd[1783]: time="2025-07-14T22:09:46.661492881Z" level=info msg="Loading containers: start." Jul 14 22:09:46.899538 kernel: Initializing XFRM netlink socket Jul 14 22:09:46.976977 systemd-networkd[1245]: docker0: Link UP Jul 14 22:09:47.093312 dockerd[1783]: time="2025-07-14T22:09:47.093244093Z" level=info msg="Loading containers: done." Jul 14 22:09:47.108837 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3254141136-merged.mount: Deactivated successfully. Jul 14 22:09:47.111272 dockerd[1783]: time="2025-07-14T22:09:47.111216198Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 22:09:47.111357 dockerd[1783]: time="2025-07-14T22:09:47.111343111Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 14 22:09:47.111519 dockerd[1783]: time="2025-07-14T22:09:47.111476681Z" level=info msg="Daemon has completed initialization" Jul 14 22:09:47.346787 dockerd[1783]: time="2025-07-14T22:09:47.346693042Z" level=info msg="API listen on /run/docker.sock" Jul 14 22:09:47.346914 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 14 22:09:53.061732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jul 14 22:09:53.071686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:09:53.336920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:09:53.342659 (kubelet)[1942]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:09:53.389777 kubelet[1942]: E0714 22:09:53.389723 1942 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:09:53.396570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:09:53.396899 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:09:57.681842 containerd[1574]: time="2025-07-14T22:09:57.681782608Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Jul 14 22:10:03.561592 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 14 22:10:03.573688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:10:03.804639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:10:03.809078 (kubelet)[1965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:10:03.848326 kubelet[1965]: E0714 22:10:03.848178 1965 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:10:03.852388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:10:03.852728 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:10:09.221801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1218016114.mount: Deactivated successfully. 
Jul 14 22:10:10.392704 containerd[1574]: time="2025-07-14T22:10:10.392651895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:10.395324 containerd[1574]: time="2025-07-14T22:10:10.395286119Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" Jul 14 22:10:10.396526 containerd[1574]: time="2025-07-14T22:10:10.396490831Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:10.399197 containerd[1574]: time="2025-07-14T22:10:10.399153139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:10.400278 containerd[1574]: time="2025-07-14T22:10:10.400238359Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 12.718400768s" Jul 14 22:10:10.400323 containerd[1574]: time="2025-07-14T22:10:10.400286358Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Jul 14 22:10:10.400886 containerd[1574]: time="2025-07-14T22:10:10.400854517Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Jul 14 22:10:11.555602 containerd[1574]: time="2025-07-14T22:10:11.555546225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:11.556282 containerd[1574]: time="2025-07-14T22:10:11.556255088Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" Jul 14 22:10:11.557533 containerd[1574]: time="2025-07-14T22:10:11.557485178Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:11.560297 containerd[1574]: time="2025-07-14T22:10:11.560264063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:11.561140 containerd[1574]: time="2025-07-14T22:10:11.561110331Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.16022728s" Jul 14 22:10:11.561209 containerd[1574]: time="2025-07-14T22:10:11.561146090Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Jul 14 22:10:11.561676 
containerd[1574]: time="2025-07-14T22:10:11.561646591Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Jul 14 22:10:12.790442 containerd[1574]: time="2025-07-14T22:10:12.790347430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:12.791331 containerd[1574]: time="2025-07-14T22:10:12.791285230Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" Jul 14 22:10:12.792734 containerd[1574]: time="2025-07-14T22:10:12.792691354Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:12.795592 containerd[1574]: time="2025-07-14T22:10:12.795557071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:12.796604 containerd[1574]: time="2025-07-14T22:10:12.796569897Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.23488875s" Jul 14 22:10:12.796659 containerd[1574]: time="2025-07-14T22:10:12.796606749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Jul 14 22:10:12.797245 containerd[1574]: time="2025-07-14T22:10:12.797083320Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Jul 14 22:10:14.045180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161947271.mount: Deactivated successfully. Jul 14 22:10:14.046190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 14 22:10:14.053662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:10:14.207869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:10:14.212879 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:10:14.335690 kubelet[2053]: E0714 22:10:14.335621 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:10:14.340019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:10:14.340328 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 14 22:10:15.283529 containerd[1574]: time="2025-07-14T22:10:15.283445996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:15.284227 containerd[1574]: time="2025-07-14T22:10:15.284149395Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" Jul 14 22:10:15.285595 containerd[1574]: time="2025-07-14T22:10:15.285567386Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:15.287889 containerd[1574]: time="2025-07-14T22:10:15.287856061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:15.288801 containerd[1574]: time="2025-07-14T22:10:15.288740205Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.491625225s" Jul 14 22:10:15.288857 containerd[1574]: time="2025-07-14T22:10:15.288806897Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Jul 14 22:10:15.289365 containerd[1574]: time="2025-07-14T22:10:15.289310239Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 22:10:15.821474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154913567.mount: Deactivated successfully. 
Jul 14 22:10:17.277963 containerd[1574]: time="2025-07-14T22:10:17.277878431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:17.279363 containerd[1574]: time="2025-07-14T22:10:17.279309631Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 14 22:10:17.281038 containerd[1574]: time="2025-07-14T22:10:17.281002572Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:17.284658 containerd[1574]: time="2025-07-14T22:10:17.284623999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:17.286178 containerd[1574]: time="2025-07-14T22:10:17.286021840Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.996651924s" Jul 14 22:10:17.286178 containerd[1574]: time="2025-07-14T22:10:17.286069861Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 14 22:10:17.286609 containerd[1574]: time="2025-07-14T22:10:17.286585695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 22:10:17.771644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount892650160.mount: Deactivated successfully. 
Jul 14 22:10:17.777696 containerd[1574]: time="2025-07-14T22:10:17.777348759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:17.778275 containerd[1574]: time="2025-07-14T22:10:17.778221846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 14 22:10:17.779408 containerd[1574]: time="2025-07-14T22:10:17.779335839Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:17.781735 containerd[1574]: time="2025-07-14T22:10:17.781692266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:17.782577 containerd[1574]: time="2025-07-14T22:10:17.782541702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 495.923488ms" Jul 14 22:10:17.782650 containerd[1574]: time="2025-07-14T22:10:17.782579862Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 14 22:10:17.783137 containerd[1574]: time="2025-07-14T22:10:17.783112462Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 14 22:10:18.310221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057731204.mount: Deactivated successfully. Jul 14 22:10:20.938254 containerd[1574]: time="2025-07-14T22:10:20.938185439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:20.938941 containerd[1574]: time="2025-07-14T22:10:20.938893003Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 14 22:10:20.940087 containerd[1574]: time="2025-07-14T22:10:20.940049605Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:20.943151 containerd[1574]: time="2025-07-14T22:10:20.943118324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:10:20.944388 containerd[1574]: time="2025-07-14T22:10:20.944355162Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.161208349s" Jul 14 22:10:20.944464 containerd[1574]: time="2025-07-14T22:10:20.944392160Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 14 22:10:24.561241 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jul 14 22:10:24.570719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:10:24.748279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:10:24.753298 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:10:24.800827 kubelet[2196]: E0714 22:10:24.800706 2196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:10:24.805257 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:10:24.805555 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:10:25.859766 update_engine[1550]: I20250714 22:10:25.859624 1550 update_attempter.cc:509] Updating boot flags... Jul 14 22:10:26.297582 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2213) Jul 14 22:10:26.336535 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2214) Jul 14 22:10:32.994915 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:10:33.007915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:10:33.035614 systemd[1]: Reloading requested from client PID 2244 ('systemctl') (unit session-7.scope)... Jul 14 22:10:33.035629 systemd[1]: Reloading... Jul 14 22:10:33.120535 zram_generator::config[2289]: No configuration found. Jul 14 22:10:33.356588 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:10:33.433761 systemd[1]: Reloading finished in 397 ms. Jul 14 22:10:33.492624 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 14 22:10:33.492754 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 14 22:10:33.493217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:10:33.495173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:10:33.658841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:10:33.664013 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:10:33.704365 kubelet[2343]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:10:33.704365 kubelet[2343]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 22:10:33.704365 kubelet[2343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 22:10:33.704833 kubelet[2343]: I0714 22:10:33.704445 2343 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:10:34.115257 kubelet[2343]: I0714 22:10:34.115212 2343 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 22:10:34.115257 kubelet[2343]: I0714 22:10:34.115242 2343 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:10:34.115486 kubelet[2343]: I0714 22:10:34.115469 2343 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 22:10:34.133217 kubelet[2343]: E0714 22:10:34.133178 2343 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:10:34.134077 kubelet[2343]: I0714 22:10:34.134054 2343 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:10:34.139402 kubelet[2343]: E0714 22:10:34.139358 2343 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:10:34.139402 kubelet[2343]: I0714 22:10:34.139400 2343 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:10:34.145723 kubelet[2343]: I0714 22:10:34.145679 2343 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:10:34.146525 kubelet[2343]: I0714 22:10:34.146489 2343 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 22:10:34.146689 kubelet[2343]: I0714 22:10:34.146653 2343 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:10:34.146858 kubelet[2343]: I0714 22:10:34.146681 2343 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 14 22:10:34.146958 kubelet[2343]: I0714 22:10:34.146867 2343 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:10:34.146958 kubelet[2343]: I0714 22:10:34.146875 2343 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 22:10:34.147020 kubelet[2343]: I0714 22:10:34.147002 2343 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:10:34.149097 kubelet[2343]: I0714 22:10:34.149069 2343 kubelet.go:408] "Attempting to sync node with API server" Jul 14 22:10:34.149097 kubelet[2343]: I0714 22:10:34.149097 2343 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:10:34.149189 kubelet[2343]: I0714 22:10:34.149147 2343 kubelet.go:314] "Adding apiserver pod source" Jul 14 22:10:34.149189 kubelet[2343]: I0714 22:10:34.149179 2343 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:10:34.152086 kubelet[2343]: I0714 22:10:34.152056 2343 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:10:34.152445 kubelet[2343]: I0714 22:10:34.152412 2343 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:10:34.152520 kubelet[2343]: W0714 22:10:34.152485 2343 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 14 22:10:34.155069 kubelet[2343]: W0714 22:10:34.154582 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jul 14 22:10:34.155069 kubelet[2343]: E0714 22:10:34.154644 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:10:34.155069 kubelet[2343]: I0714 22:10:34.155033 2343 server.go:1274] "Started kubelet" Jul 14 22:10:34.155736 kubelet[2343]: I0714 22:10:34.155662 2343 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:10:34.155783 kubelet[2343]: W0714 22:10:34.155710 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jul 14 22:10:34.155783 kubelet[2343]: E0714 22:10:34.155770 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:10:34.155868 kubelet[2343]: I0714 22:10:34.155808 2343 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:10:34.156324 kubelet[2343]: I0714 22:10:34.156039 2343 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:10:34.156730 kubelet[2343]: I0714 22:10:34.156530 2343 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:10:34.157202 kubelet[2343]: I0714 22:10:34.157167 2343 server.go:449] "Adding debug handlers to kubelet server" Jul 14 22:10:34.159212 kubelet[2343]: I0714 22:10:34.158293 2343 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:10:34.160861 kubelet[2343]: E0714 22:10:34.160819 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:10:34.161002 kubelet[2343]: I0714 22:10:34.160975 2343 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:10:34.161877 kubelet[2343]: I0714 22:10:34.161442 2343 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:10:34.161877 kubelet[2343]: I0714 22:10:34.161540 2343 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:10:34.161877 kubelet[2343]: E0714 22:10:34.161726 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms" Jul 14 22:10:34.161973 kubelet[2343]: E0714 22:10:34.161938 2343 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:10:34.162288 kubelet[2343]: I0714 22:10:34.162261 2343 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:10:34.162561 kubelet[2343]: I0714 22:10:34.162524 2343 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:10:34.162614 kubelet[2343]: W0714 22:10:34.162296 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jul 14 22:10:34.162614 kubelet[2343]: E0714 22:10:34.162605 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:10:34.164100 kubelet[2343]: I0714 22:10:34.164048 2343 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:10:34.165515 kubelet[2343]: E0714 22:10:34.162270 2343 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523db0b4a847b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:10:34.155009972 +0000 UTC m=+0.486816365,LastTimestamp:2025-07-14 22:10:34.155009972 +0000 UTC m=+0.486816365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 22:10:34.186972 kubelet[2343]: I0714 22:10:34.186933 2343 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:10:34.186972 kubelet[2343]: I0714 22:10:34.186960 2343 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:10:34.186972 kubelet[2343]: I0714 22:10:34.186986 2343 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:10:34.261355 kubelet[2343]: E0714 22:10:34.261272 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:10:34.361827 kubelet[2343]: E0714 22:10:34.361761 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:10:34.362366 kubelet[2343]: E0714 22:10:34.362311 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms" Jul 14 22:10:34.462769 kubelet[2343]: E0714 22:10:34.462612 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:10:34.563092 kubelet[2343]: E0714 22:10:34.563026 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jul 14 22:10:34.664123 kubelet[2343]: E0714 22:10:34.664066 2343 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:10:34.751419 kubelet[2343]: I0714 22:10:34.751303 2343 policy_none.go:49] "None policy: Start" Jul 14 22:10:34.752567 kubelet[2343]: I0714 22:10:34.752329 2343 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:10:34.752567 kubelet[2343]: I0714 22:10:34.752354 2343 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:10:34.754070 kubelet[2343]: I0714 22:10:34.754009 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:10:34.755598 kubelet[2343]: I0714 22:10:34.755570 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 22:10:34.755645 kubelet[2343]: I0714 22:10:34.755629 2343 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:10:34.755684 kubelet[2343]: I0714 22:10:34.755666 2343 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:10:34.755761 kubelet[2343]: E0714 22:10:34.755722 2343 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:10:34.756614 kubelet[2343]: W0714 22:10:34.756151 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jul 14 22:10:34.756614 kubelet[2343]: E0714 22:10:34.756204 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:10:34.761194 kubelet[2343]: I0714 22:10:34.761173 2343 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:10:34.761371 kubelet[2343]: I0714 22:10:34.761356 2343 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:10:34.761408 kubelet[2343]: I0714 22:10:34.761371 2343 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:10:34.762177 kubelet[2343]: I0714 22:10:34.762136 2343 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:10:34.763065 kubelet[2343]: E0714 22:10:34.763031 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms" Jul 14 22:10:34.763759 kubelet[2343]: E0714 22:10:34.763738 2343 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 22:10:34.863268 kubelet[2343]: I0714 22:10:34.863213 2343 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:10:34.863904 kubelet[2343]: E0714 22:10:34.863870 2343 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Jul 14 
22:10:34.865095 kubelet[2343]: I0714 22:10:34.865051 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/356b63c0aaed3a51d99a081572db4a13-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"356b63c0aaed3a51d99a081572db4a13\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:10:34.865095 kubelet[2343]: I0714 22:10:34.865093 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/356b63c0aaed3a51d99a081572db4a13-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"356b63c0aaed3a51d99a081572db4a13\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:10:34.865207 kubelet[2343]: I0714 22:10:34.865114 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:10:34.865207 kubelet[2343]: I0714 22:10:34.865134 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:10:34.865207 kubelet[2343]: I0714 22:10:34.865158 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:10:34.865207 kubelet[2343]: I0714 22:10:34.865200 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/356b63c0aaed3a51d99a081572db4a13-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"356b63c0aaed3a51d99a081572db4a13\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:10:34.865331 kubelet[2343]: I0714 22:10:34.865223 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:10:34.865331 kubelet[2343]: I0714 22:10:34.865245 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:10:34.865331 kubelet[2343]: I0714 22:10:34.865265 2343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 14 22:10:35.065676 kubelet[2343]: I0714 22:10:35.065654 2343 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:10:35.065956 kubelet[2343]: E0714 22:10:35.065935 2343 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Jul 14 22:10:35.142901 kubelet[2343]: W0714 22:10:35.142835 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jul 14 22:10:35.143004 kubelet[2343]: E0714 22:10:35.142905 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:10:35.162175 kubelet[2343]: E0714 22:10:35.162145 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:35.162968 containerd[1574]: time="2025-07-14T22:10:35.162927719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" Jul 14 22:10:35.164216 kubelet[2343]: E0714 22:10:35.164167 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:35.164749 containerd[1574]: time="2025-07-14T22:10:35.164716701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" Jul 14 22:10:35.166946 kubelet[2343]: E0714 22:10:35.166929 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:35.167288 containerd[1574]: time="2025-07-14T22:10:35.167253325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:356b63c0aaed3a51d99a081572db4a13,Namespace:kube-system,Attempt:0,}" Jul 14 22:10:35.456575 kubelet[2343]: W0714 22:10:35.456382 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jul 14 22:10:35.456575 kubelet[2343]: E0714 22:10:35.456472 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:10:35.468020 kubelet[2343]: I0714 22:10:35.467984 2343 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:10:35.468344 kubelet[2343]: E0714 22:10:35.468297 2343 kubelet_node_status.go:95] "Unable to register node with API server" 
err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Jul 14 22:10:35.564276 kubelet[2343]: E0714 22:10:35.564213 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="1.6s" Jul 14 22:10:35.612169 kubelet[2343]: W0714 22:10:35.612097 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jul 14 22:10:35.612246 kubelet[2343]: E0714 22:10:35.612177 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:10:36.131175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1797623156.mount: Deactivated successfully. Jul 14 22:10:36.142705 containerd[1574]: time="2025-07-14T22:10:36.142631201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:10:36.143788 containerd[1574]: time="2025-07-14T22:10:36.143723036Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:10:36.145079 containerd[1574]: time="2025-07-14T22:10:36.145026928Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 22:10:36.146222 containerd[1574]: time="2025-07-14T22:10:36.146179613Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:10:36.147023 containerd[1574]: time="2025-07-14T22:10:36.146986418Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 14 22:10:36.148110 containerd[1574]: time="2025-07-14T22:10:36.148074978Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 22:10:36.149495 containerd[1574]: time="2025-07-14T22:10:36.149461472Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:10:36.153475 containerd[1574]: time="2025-07-14T22:10:36.153436522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:10:36.154324 containerd[1574]: time="2025-07-14T22:10:36.154299378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 989.506466ms" Jul 14 22:10:36.155545 containerd[1574]: time="2025-07-14T22:10:36.155514405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 992.487972ms" Jul 14 22:10:36.156919 containerd[1574]: time="2025-07-14T22:10:36.156885068Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 989.572457ms" Jul 14 22:10:36.202164 kubelet[2343]: E0714 22:10:36.202112 2343 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:10:36.269906 kubelet[2343]: I0714 22:10:36.269844 2343 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:10:36.270385 kubelet[2343]: E0714 22:10:36.270332 2343 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Jul 14 22:10:36.270795 kubelet[2343]: W0714 22:10:36.270765 2343 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Jul 14 22:10:36.270840 kubelet[2343]: E0714 22:10:36.270806 2343 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:10:36.293418 containerd[1574]: time="2025-07-14T22:10:36.292983468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:10:36.293418 containerd[1574]: time="2025-07-14T22:10:36.293072193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:10:36.293418 containerd[1574]: time="2025-07-14T22:10:36.293087562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:10:36.294012 containerd[1574]: time="2025-07-14T22:10:36.293643766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:10:36.294135 containerd[1574]: time="2025-07-14T22:10:36.293880811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:10:36.294135 containerd[1574]: time="2025-07-14T22:10:36.293935188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:10:36.294135 containerd[1574]: time="2025-07-14T22:10:36.293948805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:10:36.294319 containerd[1574]: time="2025-07-14T22:10:36.294228675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:10:36.294574 containerd[1574]: time="2025-07-14T22:10:36.293327183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:10:36.294662 containerd[1574]: time="2025-07-14T22:10:36.294537762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:10:36.294767 containerd[1574]: time="2025-07-14T22:10:36.294706714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:10:36.295387 containerd[1574]: time="2025-07-14T22:10:36.295313476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:10:36.353534 containerd[1574]: time="2025-07-14T22:10:36.353408918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:356b63c0aaed3a51d99a081572db4a13,Namespace:kube-system,Attempt:0,} returns sandbox id \"9523112edd8e812ef792eb5de81c27e0b97c60f541171a13effd02b252b3d856\"" Jul 14 22:10:36.354588 kubelet[2343]: E0714 22:10:36.354549 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:36.356585 containerd[1574]: time="2025-07-14T22:10:36.356553727Z" level=info msg="CreateContainer within sandbox \"9523112edd8e812ef792eb5de81c27e0b97c60f541171a13effd02b252b3d856\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 22:10:36.361217 containerd[1574]: time="2025-07-14T22:10:36.361174146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b49a6291f241128dca49abc2d0327e608ee68575bb3739c950eb2d31469669d\"" Jul 14 22:10:36.361858 kubelet[2343]: E0714 22:10:36.361826 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:36.363610 containerd[1574]: time="2025-07-14T22:10:36.363556597Z" level=info msg="CreateContainer within sandbox \"7b49a6291f241128dca49abc2d0327e608ee68575bb3739c950eb2d31469669d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 22:10:36.366363 containerd[1574]: time="2025-07-14T22:10:36.366134332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"38c9d2552da0eb3fcbfa5cd2b86febca4ddbeeb936bf5f5ee567596a183c3c14\"" Jul 14 22:10:36.367062 kubelet[2343]: E0714 22:10:36.367028 2343 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:36.368654 containerd[1574]: time="2025-07-14T22:10:36.368628132Z" level=info msg="CreateContainer within sandbox \"38c9d2552da0eb3fcbfa5cd2b86febca4ddbeeb936bf5f5ee567596a183c3c14\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 22:10:36.543435 containerd[1574]: time="2025-07-14T22:10:36.543271369Z" level=info msg="CreateContainer within sandbox \"9523112edd8e812ef792eb5de81c27e0b97c60f541171a13effd02b252b3d856\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6dc46a29f9c64803516ebdbba473b479a0b7413d140b540e25bce254c42a6b92\"" Jul 14 22:10:36.544150 containerd[1574]: time="2025-07-14T22:10:36.544094025Z" level=info msg="StartContainer for \"6dc46a29f9c64803516ebdbba473b479a0b7413d140b540e25bce254c42a6b92\"" Jul 14 22:10:36.547030 containerd[1574]: time="2025-07-14T22:10:36.546981348Z" level=info msg="CreateContainer within sandbox \"38c9d2552da0eb3fcbfa5cd2b86febca4ddbeeb936bf5f5ee567596a183c3c14\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"295a371c45943d1bedb6f8037a34cd319aa8d920909b06e18e575e6011f29208\"" Jul 14 22:10:36.547443 containerd[1574]: time="2025-07-14T22:10:36.547414349Z" level=info msg="StartContainer for \"295a371c45943d1bedb6f8037a34cd319aa8d920909b06e18e575e6011f29208\"" Jul 14 22:10:36.548382 containerd[1574]: time="2025-07-14T22:10:36.548347262Z" level=info msg="CreateContainer within sandbox \"7b49a6291f241128dca49abc2d0327e608ee68575bb3739c950eb2d31469669d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b1c6c4d4765c97f82073e5b3c5c0be08b66eb0b910b30f237255c4937cdbc6bf\"" Jul 14 22:10:36.548914 containerd[1574]: time="2025-07-14T22:10:36.548893255Z" level=info msg="StartContainer for \"b1c6c4d4765c97f82073e5b3c5c0be08b66eb0b910b30f237255c4937cdbc6bf\"" Jul 14 22:10:36.624997 containerd[1574]: time="2025-07-14T22:10:36.624876815Z" level=info msg="StartContainer for \"6dc46a29f9c64803516ebdbba473b479a0b7413d140b540e25bce254c42a6b92\" returns successfully" Jul 14 22:10:36.625438 containerd[1574]: time="2025-07-14T22:10:36.625059985Z" level=info msg="StartContainer for \"295a371c45943d1bedb6f8037a34cd319aa8d920909b06e18e575e6011f29208\" returns successfully" Jul 14 22:10:36.625438 containerd[1574]: time="2025-07-14T22:10:36.625163488Z" level=info msg="StartContainer for \"b1c6c4d4765c97f82073e5b3c5c0be08b66eb0b910b30f237255c4937cdbc6bf\" returns successfully" Jul 14 22:10:36.764807 kubelet[2343]: E0714 22:10:36.764780 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:36.766415 kubelet[2343]: E0714 22:10:36.766397 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:36.770546 kubelet[2343]: E0714 22:10:36.770521 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:37.609063 kubelet[2343]: E0714 22:10:37.609031 2343 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" 
Jul 14 22:10:37.771684 kubelet[2343]: E0714 22:10:37.771643 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:37.872136 kubelet[2343]: I0714 22:10:37.872035 2343 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:10:37.877431 kubelet[2343]: I0714 22:10:37.877399 2343 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 22:10:37.877431 kubelet[2343]: E0714 22:10:37.877430 2343 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 22:10:38.151837 kubelet[2343]: I0714 22:10:38.151734 2343 apiserver.go:52] "Watching apiserver" Jul 14 22:10:38.162002 kubelet[2343]: I0714 22:10:38.161959 2343 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 22:10:40.105600 systemd[1]: Reloading requested from client PID 2623 ('systemctl') (unit session-7.scope)... Jul 14 22:10:40.105627 systemd[1]: Reloading... Jul 14 22:10:40.191546 zram_generator::config[2663]: No configuration found. Jul 14 22:10:40.285887 kubelet[2343]: E0714 22:10:40.285857 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:40.329993 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:10:40.416263 systemd[1]: Reloading finished in 310 ms. Jul 14 22:10:40.457208 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:10:40.478806 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:10:40.479221 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:10:40.497107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:10:40.663281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:10:40.669265 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:10:40.717619 kubelet[2717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:10:40.717619 kubelet[2717]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 22:10:40.717619 kubelet[2717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 22:10:40.718053 kubelet[2717]: I0714 22:10:40.717698 2717 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:10:40.723654 kubelet[2717]: I0714 22:10:40.723601 2717 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 22:10:40.723654 kubelet[2717]: I0714 22:10:40.723632 2717 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:10:40.725120 kubelet[2717]: I0714 22:10:40.724186 2717 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 22:10:40.726311 kubelet[2717]: I0714 22:10:40.726284 2717 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 14 22:10:40.728771 kubelet[2717]: I0714 22:10:40.728746 2717 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:10:40.731827 kubelet[2717]: E0714 22:10:40.731799 2717 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:10:40.731886 kubelet[2717]: I0714 22:10:40.731828 2717 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:10:40.736269 kubelet[2717]: I0714 22:10:40.736249 2717 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 22:10:40.737595 kubelet[2717]: I0714 22:10:40.737571 2717 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 22:10:40.737749 kubelet[2717]: I0714 22:10:40.737720 2717 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:10:40.737920 kubelet[2717]: I0714 22:10:40.737744 2717 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 14 22:10:40.737996 kubelet[2717]: I0714 22:10:40.737922 2717 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:10:40.737996 kubelet[2717]: I0714 22:10:40.737932 2717 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 22:10:40.737996 kubelet[2717]: I0714 22:10:40.737960 2717 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:10:40.738088 kubelet[2717]: I0714 22:10:40.738072 2717 kubelet.go:408] "Attempting to sync node with API server" Jul 14 22:10:40.738088 kubelet[2717]: I0714 22:10:40.738087 2717 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:10:40.738133 kubelet[2717]: I0714 22:10:40.738121 2717 kubelet.go:314] "Adding apiserver pod source" Jul 14 22:10:40.738133 kubelet[2717]: I0714 22:10:40.738133 2717 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:10:40.738884 kubelet[2717]: I0714 22:10:40.738858 2717 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:10:40.740578 kubelet[2717]: I0714 22:10:40.739386 2717 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:10:40.744517 kubelet[2717]: I0714 22:10:40.742241 2717 server.go:1274] "Started kubelet" Jul 14 22:10:40.744724 kubelet[2717]: I0714 22:10:40.744700 2717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:10:40.746377 kubelet[2717]: I0714 22:10:40.746337 2717 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:10:40.747008 kubelet[2717]: I0714 22:10:40.746969 2717 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:10:40.747814 kubelet[2717]: I0714 22:10:40.747780 2717 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:10:40.748466 kubelet[2717]: I0714 
22:10:40.747991 2717 server.go:449] "Adding debug handlers to kubelet server" Jul 14 22:10:40.750484 kubelet[2717]: I0714 22:10:40.749380 2717 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:10:40.752262 kubelet[2717]: I0714 22:10:40.751045 2717 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:10:40.752262 kubelet[2717]: E0714 22:10:40.751209 2717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:10:40.752262 kubelet[2717]: I0714 22:10:40.752009 2717 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:10:40.752262 kubelet[2717]: I0714 22:10:40.752127 2717 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:10:40.752262 kubelet[2717]: I0714 22:10:40.752153 2717 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:10:40.756640 kubelet[2717]: I0714 22:10:40.756610 2717 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:10:40.758613 kubelet[2717]: E0714 22:10:40.758579 2717 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:10:40.759114 kubelet[2717]: I0714 22:10:40.758693 2717 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:10:40.761883 kubelet[2717]: I0714 22:10:40.761851 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:10:40.763674 kubelet[2717]: I0714 22:10:40.763643 2717 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 22:10:40.763734 kubelet[2717]: I0714 22:10:40.763679 2717 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:10:40.763734 kubelet[2717]: I0714 22:10:40.763697 2717 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:10:40.763801 kubelet[2717]: E0714 22:10:40.763738 2717 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:10:40.806406 kubelet[2717]: I0714 22:10:40.806377 2717 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:10:40.806406 kubelet[2717]: I0714 22:10:40.806398 2717 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:10:40.806563 kubelet[2717]: I0714 22:10:40.806420 2717 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:10:40.806720 kubelet[2717]: I0714 22:10:40.806694 2717 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 22:10:40.806744 kubelet[2717]: I0714 22:10:40.806713 2717 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 22:10:40.806765 kubelet[2717]: I0714 22:10:40.806745 2717 policy_none.go:49] "None policy: Start" Jul 14 22:10:40.807556 kubelet[2717]: I0714 22:10:40.807526 2717 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:10:40.807595 kubelet[2717]: I0714 22:10:40.807559 2717 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:10:40.807731 kubelet[2717]: I0714 22:10:40.807713 2717 state_mem.go:75] "Updated machine memory state" Jul 14 22:10:40.809461 kubelet[2717]: I0714 22:10:40.809436 2717 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:10:40.810221 kubelet[2717]: I0714 22:10:40.809694 2717 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:10:40.810221 kubelet[2717]: I0714 22:10:40.809713 2717 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:10:40.810221 kubelet[2717]: I0714 22:10:40.810088 2717 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:10:40.976610 kubelet[2717]: E0714 22:10:40.871939 2717 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 22:10:40.982118 kubelet[2717]: I0714 22:10:40.981845 2717 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:10:40.988614 kubelet[2717]: I0714 22:10:40.988588 2717 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 14 22:10:40.988691 kubelet[2717]: I0714 22:10:40.988670 2717 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 22:10:41.074985 kubelet[2717]: I0714 22:10:41.074926 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:10:41.074985 kubelet[2717]: I0714 22:10:41.074975 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:10:41.074985 kubelet[2717]: I0714 22:10:41.074993 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/356b63c0aaed3a51d99a081572db4a13-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"356b63c0aaed3a51d99a081572db4a13\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:10:41.075209 kubelet[2717]: I0714 22:10:41.075014 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/356b63c0aaed3a51d99a081572db4a13-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"356b63c0aaed3a51d99a081572db4a13\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:10:41.075209 kubelet[2717]: I0714 22:10:41.075042 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:10:41.075209 kubelet[2717]: I0714 22:10:41.075111 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:10:41.075209 kubelet[2717]: I0714 22:10:41.075166 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:10:41.075209 kubelet[2717]: I0714 22:10:41.075189 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/356b63c0aaed3a51d99a081572db4a13-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"356b63c0aaed3a51d99a081572db4a13\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:10:41.075348 kubelet[2717]: I0714 22:10:41.075207 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:10:41.275536 kubelet[2717]: E0714 22:10:41.275398 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:41.275536 kubelet[2717]: E0714 22:10:41.275444 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:41.275662 kubelet[2717]: E0714 22:10:41.275550 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:41.739353 kubelet[2717]: I0714 22:10:41.739306 2717 apiserver.go:52] "Watching apiserver" Jul 14 22:10:41.752876 kubelet[2717]: I0714 22:10:41.752838 2717 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 22:10:41.773888 kubelet[2717]: E0714 22:10:41.773744 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:41.773888 kubelet[2717]: E0714 22:10:41.773771 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:41.774108 kubelet[2717]: E0714 22:10:41.774079 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:41.949994 kubelet[2717]: I0714 22:10:41.949912 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.949865696 podStartE2EDuration="1.949865696s" podCreationTimestamp="2025-07-14 22:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:10:41.949850917 +0000 UTC m=+1.267885193" watchObservedRunningTime="2025-07-14 22:10:41.949865696 +0000 UTC m=+1.267899932" Jul 14 22:10:42.061283 kubelet[2717]: I0714 22:10:42.061187 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.061168302 podStartE2EDuration="2.061168302s" podCreationTimestamp="2025-07-14 22:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:10:42.060998431 +0000 UTC m=+1.379032677" watchObservedRunningTime="2025-07-14 22:10:42.061168302 +0000 UTC m=+1.379202548" Jul 14 22:10:42.061427 kubelet[2717]: I0714 22:10:42.061324 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.061317052 podStartE2EDuration="2.061317052s" podCreationTimestamp="2025-07-14 22:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:10:42.052662996 +0000 UTC m=+1.370697242" watchObservedRunningTime="2025-07-14 22:10:42.061317052 +0000 UTC m=+1.379351308" Jul 14 22:10:42.775898 kubelet[2717]: E0714 22:10:42.775849 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:43.655536 kubelet[2717]: E0714 22:10:43.655484 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:44.371140 kubelet[2717]: I0714 22:10:44.371101 2717 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 22:10:44.371596 containerd[1574]: time="2025-07-14T22:10:44.371481874Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 14 22:10:44.371884 kubelet[2717]: I0714 22:10:44.371781 2717 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 22:10:44.789397 kubelet[2717]: E0714 22:10:44.789282 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:45.503847 kubelet[2717]: I0714 22:10:45.503799 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9dfc076-b0db-4a45-9236-9d688319f551-lib-modules\") pod \"kube-proxy-ql96b\" (UID: \"c9dfc076-b0db-4a45-9236-9d688319f551\") " pod="kube-system/kube-proxy-ql96b" Jul 14 22:10:45.503847 kubelet[2717]: I0714 22:10:45.503831 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9dfc076-b0db-4a45-9236-9d688319f551-kube-proxy\") pod \"kube-proxy-ql96b\" (UID: \"c9dfc076-b0db-4a45-9236-9d688319f551\") " pod="kube-system/kube-proxy-ql96b" Jul 14 22:10:45.503847 kubelet[2717]: I0714 22:10:45.503848 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9dfc076-b0db-4a45-9236-9d688319f551-xtables-lock\") pod \"kube-proxy-ql96b\" (UID: \"c9dfc076-b0db-4a45-9236-9d688319f551\") " pod="kube-system/kube-proxy-ql96b" Jul 14 22:10:45.504305 kubelet[2717]: I0714 22:10:45.503885 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq4qf\" (UniqueName: \"kubernetes.io/projected/c9dfc076-b0db-4a45-9236-9d688319f551-kube-api-access-kq4qf\") pod \"kube-proxy-ql96b\" (UID: \"c9dfc076-b0db-4a45-9236-9d688319f551\") " pod="kube-system/kube-proxy-ql96b" Jul 14 22:10:45.695427 kubelet[2717]: E0714 22:10:45.695399 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:45.696041 containerd[1574]: time="2025-07-14T22:10:45.696003192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ql96b,Uid:c9dfc076-b0db-4a45-9236-9d688319f551,Namespace:kube-system,Attempt:0,}" Jul 14 22:10:45.719624 containerd[1574]: time="2025-07-14T22:10:45.719463486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:10:45.720131 containerd[1574]: time="2025-07-14T22:10:45.720083609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:10:45.720131 containerd[1574]: time="2025-07-14T22:10:45.720105662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:10:45.720280 containerd[1574]: time="2025-07-14T22:10:45.720198091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:10:45.763687 containerd[1574]: time="2025-07-14T22:10:45.763585535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ql96b,Uid:c9dfc076-b0db-4a45-9236-9d688319f551,Namespace:kube-system,Attempt:0,} returns sandbox id \"0924a2fffa92be66982c2ec6e4dfb506d0a2755d5d69fed2cb64ecaa6f694a5c\"" Jul 14 22:10:45.764356 kubelet[2717]: E0714 22:10:45.764333 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:45.766069 containerd[1574]: time="2025-07-14T22:10:45.766039886Z" level=info msg="CreateContainer within sandbox \"0924a2fffa92be66982c2ec6e4dfb506d0a2755d5d69fed2cb64ecaa6f694a5c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 22:10:45.781043 containerd[1574]: time="2025-07-14T22:10:45.781006534Z" level=info msg="CreateContainer within sandbox \"0924a2fffa92be66982c2ec6e4dfb506d0a2755d5d69fed2cb64ecaa6f694a5c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"604ece400c97d88870dffb1ab4393107ddf4f3452473620ecebaadf244408737\"" Jul 14 22:10:45.782526 containerd[1574]: time="2025-07-14T22:10:45.781414545Z" level=info msg="StartContainer for \"604ece400c97d88870dffb1ab4393107ddf4f3452473620ecebaadf244408737\"" Jul 14 22:10:45.836874 containerd[1574]: time="2025-07-14T22:10:45.836825349Z" level=info msg="StartContainer for \"604ece400c97d88870dffb1ab4393107ddf4f3452473620ecebaadf244408737\" returns successfully" Jul 14 22:10:46.093710 kubelet[2717]: E0714 22:10:46.093678 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:46.782814 kubelet[2717]: E0714 22:10:46.782602 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:46.783372 kubelet[2717]: E0714 22:10:46.782959 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:46.802695 kubelet[2717]: I0714 22:10:46.802612 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ql96b" podStartSLOduration=1.802585077 podStartE2EDuration="1.802585077s" podCreationTimestamp="2025-07-14 22:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:10:46.791687522 +0000 UTC m=+6.109721778" watchObservedRunningTime="2025-07-14 22:10:46.802585077 +0000 UTC m=+6.120619333" Jul 14 22:10:47.783857 kubelet[2717]: E0714 22:10:47.783808 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:47.784324 kubelet[2717]: E0714 22:10:47.783823 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:52.247689 kubelet[2717]: I0714 22:10:52.247642 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/13e131d6-6202-499d-8606-26bc047a0b9d-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-46467\" (UID: \"13e131d6-6202-499d-8606-26bc047a0b9d\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-46467" Jul 14 22:10:52.248211 kubelet[2717]: I0714 22:10:52.247702 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26692\" (UniqueName: \"kubernetes.io/projected/13e131d6-6202-499d-8606-26bc047a0b9d-kube-api-access-26692\") pod \"tigera-operator-5bf8dfcb4-46467\" (UID: \"13e131d6-6202-499d-8606-26bc047a0b9d\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-46467" Jul 14 22:10:52.441324 containerd[1574]: time="2025-07-14T22:10:52.441264741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-46467,Uid:13e131d6-6202-499d-8606-26bc047a0b9d,Namespace:tigera-operator,Attempt:0,}" Jul 14 22:10:52.901300 containerd[1574]: time="2025-07-14T22:10:52.900588985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:10:52.901300 containerd[1574]: time="2025-07-14T22:10:52.901258416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:10:52.901300 containerd[1574]: time="2025-07-14T22:10:52.901274768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:10:52.901562 containerd[1574]: time="2025-07-14T22:10:52.901496606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:10:52.950055 containerd[1574]: time="2025-07-14T22:10:52.949990877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-46467,Uid:13e131d6-6202-499d-8606-26bc047a0b9d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"62dcd29ac951f2e93a9d7448a6f504ae5ff7c1d8d1e0e5626407404ff4cf1693\"" Jul 14 22:10:52.951488 containerd[1574]: time="2025-07-14T22:10:52.951461261Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 14 22:10:53.660238 kubelet[2717]: E0714 22:10:53.660205 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:10:54.188898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3727341721.mount: Deactivated successfully. 
Jul 14 22:10:54.601102 containerd[1574]: time="2025-07-14T22:10:54.601048120Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:10:54.602232 containerd[1574]: time="2025-07-14T22:10:54.602192584Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 14 22:10:54.603490 containerd[1574]: time="2025-07-14T22:10:54.603453821Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:10:54.605905 containerd[1574]: time="2025-07-14T22:10:54.605854013Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:10:54.606653 containerd[1574]: time="2025-07-14T22:10:54.606615309Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.65511921s"
Jul 14 22:10:54.606653 containerd[1574]: time="2025-07-14T22:10:54.606647981Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 14 22:10:54.611576 containerd[1574]: time="2025-07-14T22:10:54.611406153Z" level=info msg="CreateContainer within sandbox \"62dcd29ac951f2e93a9d7448a6f504ae5ff7c1d8d1e0e5626407404ff4cf1693\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 14 22:10:54.622368 containerd[1574]: time="2025-07-14T22:10:54.622331439Z" level=info msg="CreateContainer within sandbox \"62dcd29ac951f2e93a9d7448a6f504ae5ff7c1d8d1e0e5626407404ff4cf1693\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"81ba8c0fcdffe0ae5e7336ac5f08dad6511ba53133dd99d74b04625f1dc1475f\""
Jul 14 22:10:54.622969 containerd[1574]: time="2025-07-14T22:10:54.622930572Z" level=info msg="StartContainer for \"81ba8c0fcdffe0ae5e7336ac5f08dad6511ba53133dd99d74b04625f1dc1475f\""
Jul 14 22:10:54.679140 containerd[1574]: time="2025-07-14T22:10:54.679087988Z" level=info msg="StartContainer for \"81ba8c0fcdffe0ae5e7336ac5f08dad6511ba53133dd99d74b04625f1dc1475f\" returns successfully"
Jul 14 22:10:54.793684 kubelet[2717]: E0714 22:10:54.793649 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:10:54.817393 kubelet[2717]: I0714 22:10:54.817318 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-46467" podStartSLOduration=1.160941546 podStartE2EDuration="2.817296616s" podCreationTimestamp="2025-07-14 22:10:52 +0000 UTC" firstStartedPulling="2025-07-14 22:10:52.951038096 +0000 UTC m=+12.269072342" lastFinishedPulling="2025-07-14 22:10:54.607393166 +0000 UTC m=+13.925427412" observedRunningTime="2025-07-14 22:10:54.817143342 +0000 UTC m=+14.135177598" watchObservedRunningTime="2025-07-14 22:10:54.817296616 +0000 UTC m=+14.135330852"
Jul 14 22:11:00.136975 sudo[1765]: pam_unix(sudo:session): session closed for user root
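Two cross-checks fall out of the numbers in the tigera-operator entries above (my arithmetic on the logged values, not anything the log itself prints): the pull moved 25056543 bytes in 1.65511921 s, about 15.1 MB/s, and podStartSLOduration is exactly podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling, easiest to read off the m=+ monotonic offsets). A sketch:

    package main

    import "fmt"

    func main() {
        // Pull throughput implied by the containerd events above.
        const bytesRead = 25056543   // "bytes read" in the stop-pulling line
        const pullWall = 1.65511921  // seconds, from "Pulled image ... in 1.65511921s"
        fmt.Printf("throughput ~%.1f MB/s\n", bytesRead/pullWall/1e6) // ~15.1

        // SLO duration excludes the image pull: E2E - (pull end - pull start).
        const e2e = 2.817296616                  // podStartE2EDuration, seconds
        const pull = 13.925427412 - 12.269072342 // m=+ offsets of the pull window
        fmt.Printf("SLO duration %.9f s\n", e2e-pull) // 1.160941546, as logged
    }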
Jul 14 22:11:00.142018 sshd[1758]: pam_unix(sshd:session): session closed for user core
Jul 14 22:11:00.149659 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:38706.service: Deactivated successfully.
Jul 14 22:11:00.152894 systemd-logind[1545]: Session 7 logged out. Waiting for processes to exit.
Jul 14 22:11:00.153810 systemd[1]: session-7.scope: Deactivated successfully.
Jul 14 22:11:00.155260 systemd-logind[1545]: Removed session 7.
Jul 14 22:11:02.712432 kubelet[2717]: I0714 22:11:02.712358 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/52364a09-cae8-4dc5-bc8f-bc7531358147-typha-certs\") pod \"calico-typha-698b44b86d-4fqpk\" (UID: \"52364a09-cae8-4dc5-bc8f-bc7531358147\") " pod="calico-system/calico-typha-698b44b86d-4fqpk"
Jul 14 22:11:02.712432 kubelet[2717]: I0714 22:11:02.712425 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52364a09-cae8-4dc5-bc8f-bc7531358147-tigera-ca-bundle\") pod \"calico-typha-698b44b86d-4fqpk\" (UID: \"52364a09-cae8-4dc5-bc8f-bc7531358147\") " pod="calico-system/calico-typha-698b44b86d-4fqpk"
Jul 14 22:11:02.712964 kubelet[2717]: I0714 22:11:02.712451 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vphhb\" (UniqueName: \"kubernetes.io/projected/52364a09-cae8-4dc5-bc8f-bc7531358147-kube-api-access-vphhb\") pod \"calico-typha-698b44b86d-4fqpk\" (UID: \"52364a09-cae8-4dc5-bc8f-bc7531358147\") " pod="calico-system/calico-typha-698b44b86d-4fqpk"
Jul 14 22:11:03.215698 kubelet[2717]: I0714 22:11:03.215647 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a-node-certs\") pod \"calico-node-zkljv\" (UID: \"ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a\") " pod="calico-system/calico-node-zkljv"
Jul 14 22:11:03.215698 kubelet[2717]: I0714 22:11:03.215694 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a-cni-log-dir\") pod \"calico-node-zkljv\" (UID: \"ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a\") " pod="calico-system/calico-node-zkljv"
Jul 14 22:11:03.215698 kubelet[2717]: I0714 22:11:03.215709 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a-cni-bin-dir\") pod \"calico-node-zkljv\" (UID: \"ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a\") " pod="calico-system/calico-node-zkljv"
Jul 14 22:11:03.215891 kubelet[2717]: I0714 22:11:03.215722 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a-cni-net-dir\") pod \"calico-node-zkljv\" (UID: \"ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a\") " pod="calico-system/calico-node-zkljv"
Jul 14 22:11:03.215891 kubelet[2717]: I0714 22:11:03.215736 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a-var-run-calico\") pod \"calico-node-zkljv\" (UID: \"ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a\") " pod="calico-system/calico-node-zkljv"
pod="calico-system/calico-node-zkljv" Jul 14 22:11:03.215891 kubelet[2717]: I0714 22:11:03.215818 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a-tigera-ca-bundle\") pod \"calico-node-zkljv\" (UID: \"ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a\") " pod="calico-system/calico-node-zkljv" Jul 14 22:11:03.215891 kubelet[2717]: I0714 22:11:03.215873 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a-policysync\") pod \"calico-node-zkljv\" (UID: \"ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a\") " pod="calico-system/calico-node-zkljv" Jul 14 22:11:03.215987 kubelet[2717]: I0714 22:11:03.215894 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a-xtables-lock\") pod \"calico-node-zkljv\" (UID: \"ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a\") " pod="calico-system/calico-node-zkljv" Jul 14 22:11:03.215987 kubelet[2717]: I0714 22:11:03.215916 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a-flexvol-driver-host\") pod \"calico-node-zkljv\" (UID: \"ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a\") " pod="calico-system/calico-node-zkljv" Jul 14 22:11:03.215987 kubelet[2717]: I0714 22:11:03.215936 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a-lib-modules\") pod \"calico-node-zkljv\" (UID: \"ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a\") " pod="calico-system/calico-node-zkljv" Jul 14 22:11:03.215987 kubelet[2717]: I0714 22:11:03.215951 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g65t4\" (UniqueName: \"kubernetes.io/projected/ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a-kube-api-access-g65t4\") pod \"calico-node-zkljv\" (UID: \"ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a\") " pod="calico-system/calico-node-zkljv" Jul 14 22:11:03.215987 kubelet[2717]: I0714 22:11:03.215968 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a-var-lib-calico\") pod \"calico-node-zkljv\" (UID: \"ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a\") " pod="calico-system/calico-node-zkljv" Jul 14 22:11:03.223843 kubelet[2717]: E0714 22:11:03.223815 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:03.224612 containerd[1574]: time="2025-07-14T22:11:03.224563532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-698b44b86d-4fqpk,Uid:52364a09-cae8-4dc5-bc8f-bc7531358147,Namespace:calico-system,Attempt:0,}" Jul 14 22:11:03.249876 containerd[1574]: time="2025-07-14T22:11:03.249775586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
Jul 14 22:11:03.249876 containerd[1574]: time="2025-07-14T22:11:03.249835330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:11:03.249876 containerd[1574]: time="2025-07-14T22:11:03.249846111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:11:03.250039 containerd[1574]: time="2025-07-14T22:11:03.249938819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:11:03.282800 kubelet[2717]: E0714 22:11:03.282742 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhk87" podUID="7829a7f5-eec5-446a-bd2c-44244faa0a80"
Jul 14 22:11:03.316022 containerd[1574]: time="2025-07-14T22:11:03.315966275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-698b44b86d-4fqpk,Uid:52364a09-cae8-4dc5-bc8f-bc7531358147,Namespace:calico-system,Attempt:0,} returns sandbox id \"e99d9f7dcad8d04d05cf031a4f0195ef25c94dbc02b7e3b85f209ec70941d026\""
Jul 14 22:11:03.316436 kubelet[2717]: I0714 22:11:03.316392 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7829a7f5-eec5-446a-bd2c-44244faa0a80-kubelet-dir\") pod \"csi-node-driver-bhk87\" (UID: \"7829a7f5-eec5-446a-bd2c-44244faa0a80\") " pod="calico-system/csi-node-driver-bhk87"
Jul 14 22:11:03.316497 kubelet[2717]: I0714 22:11:03.316439 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7829a7f5-eec5-446a-bd2c-44244faa0a80-varrun\") pod \"csi-node-driver-bhk87\" (UID: \"7829a7f5-eec5-446a-bd2c-44244faa0a80\") " pod="calico-system/csi-node-driver-bhk87"
Jul 14 22:11:03.316562 kubelet[2717]: I0714 22:11:03.316532 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7829a7f5-eec5-446a-bd2c-44244faa0a80-socket-dir\") pod \"csi-node-driver-bhk87\" (UID: \"7829a7f5-eec5-446a-bd2c-44244faa0a80\") " pod="calico-system/csi-node-driver-bhk87"
Jul 14 22:11:03.316609 kubelet[2717]: I0714 22:11:03.316558 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s4pk\" (UniqueName: \"kubernetes.io/projected/7829a7f5-eec5-446a-bd2c-44244faa0a80-kube-api-access-7s4pk\") pod \"csi-node-driver-bhk87\" (UID: \"7829a7f5-eec5-446a-bd2c-44244faa0a80\") " pod="calico-system/csi-node-driver-bhk87"
Jul 14 22:11:03.316685 kubelet[2717]: I0714 22:11:03.316659 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7829a7f5-eec5-446a-bd2c-44244faa0a80-registration-dir\") pod \"csi-node-driver-bhk87\" (UID: \"7829a7f5-eec5-446a-bd2c-44244faa0a80\") " pod="calico-system/csi-node-driver-bhk87"
Jul 14 22:11:03.317241 kubelet[2717]: E0714 22:11:03.317197 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
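The three-line failure that first appears here and then floods the log is kubelet's FlexVolume prober: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument, the binary is missing on this image, so the call returns empty output, and unmarshalling an empty byte slice is what produces the literal "unexpected end of JSON input". A minimal Go reproduction of just that failure mode (DriverStatus here is a loose stand-in, not kubelet's exact struct):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Loose stand-in for the JSON status a FlexVolume driver is expected
    // to print in response to an init call.
    type DriverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message"`
    }

    func main() {
        output := []byte("") // executable not found => empty stdout
        var st DriverStatus
        if err := json.Unmarshal(output, &st); err != nil {
            // Prints: unexpected end of JSON input
            fmt.Println("Failed to unmarshal output for command: init, error:", err)
        }
    }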
Jul 14 22:11:03.317241 kubelet[2717]: W0714 22:11:03.317212 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:11:03.317322 kubelet[2717]: E0714 22:11:03.317259 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three-line FlexVolume probe failure repeats, near-verbatim apart from timestamps, about sixty times between 22:11:03.317 and 22:11:03.451; the only unrelated entries in that stretch are the two below]
Jul 14 22:11:03.324896 kubelet[2717]: E0714 22:11:03.324585 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:11:03.328706 containerd[1574]: time="2025-07-14T22:11:03.328676036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 14 22:11:03.478211 containerd[1574]: time="2025-07-14T22:11:03.478081488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zkljv,Uid:ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a,Namespace:calico-system,Attempt:0,}"
Jul 14 22:11:03.502219 containerd[1574]: time="2025-07-14T22:11:03.502122497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:11:03.502360 containerd[1574]: time="2025-07-14T22:11:03.502195116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:11:03.502360 containerd[1574]: time="2025-07-14T22:11:03.502245653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:11:03.502360 containerd[1574]: time="2025-07-14T22:11:03.502338100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:11:03.548538 containerd[1574]: time="2025-07-14T22:11:03.548413038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zkljv,Uid:ae78ae5e-fc33-46ca-8aa5-7e9c8c80068a,Namespace:calico-system,Attempt:0,} returns sandbox id \"62ee6d791afff9374582bf91248400ccce9088fbaa9f186042a35840ded20997\""
Jul 14 22:11:04.764437 kubelet[2717]: E0714 22:11:04.764380 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhk87" podUID="7829a7f5-eec5-446a-bd2c-44244faa0a80"
Jul 14 22:11:05.588044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241582301.mount: Deactivated successfully.
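The recurring pod_workers.go:1301 errors for csi-node-driver-bhk87 are downstream of the same bootstrap ordering: the CSI node driver cannot be synced until the runtime reports NetworkReady, which flips only after calico-node (whose sandbox is created above) installs a CNI network config. A hedged spot check, assuming the conventional /etc/cni/net.d config directory:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // The runtime reports NetworkReady once a CNI config exists;
        // /etc/cni/net.d is the conventional default location.
        confs, err := filepath.Glob("/etc/cni/net.d/*")
        if err != nil || len(confs) == 0 {
            fmt.Println("no CNI config yet: pods needing the pod network stay pending")
            return
        }
        fmt.Println("CNI config present:", confs)
    }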
Jul 14 22:11:06.474218 containerd[1574]: time="2025-07-14T22:11:06.474159921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:06.475443 containerd[1574]: time="2025-07-14T22:11:06.475377915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 14 22:11:06.477097 containerd[1574]: time="2025-07-14T22:11:06.477065147Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:06.480250 containerd[1574]: time="2025-07-14T22:11:06.480194883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:06.481152 containerd[1574]: time="2025-07-14T22:11:06.481114184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.152394114s"
Jul 14 22:11:06.481152 containerd[1574]: time="2025-07-14T22:11:06.481145243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 14 22:11:06.486252 containerd[1574]: time="2025-07-14T22:11:06.486215277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 14 22:11:06.501572 containerd[1574]: time="2025-07-14T22:11:06.501497352Z" level=info msg="CreateContainer within sandbox \"e99d9f7dcad8d04d05cf031a4f0195ef25c94dbc02b7e3b85f209ec70941d026\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 14 22:11:06.517211 containerd[1574]: time="2025-07-14T22:11:06.517150519Z" level=info msg="CreateContainer within sandbox \"e99d9f7dcad8d04d05cf031a4f0195ef25c94dbc02b7e3b85f209ec70941d026\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dc4b8ea8812deb12f36a91723f5d89bc95d2e5e96f30fa5eb6ea4c5262965938\""
Jul 14 22:11:06.517871 containerd[1574]: time="2025-07-14T22:11:06.517772681Z" level=info msg="StartContainer for \"dc4b8ea8812deb12f36a91723f5d89bc95d2e5e96f30fa5eb6ea4c5262965938\""
Jul 14 22:11:06.597348 containerd[1574]: time="2025-07-14T22:11:06.597293450Z" level=info msg="StartContainer for \"dc4b8ea8812deb12f36a91723f5d89bc95d2e5e96f30fa5eb6ea4c5262965938\" returns successfully"
Jul 14 22:11:06.769207 kubelet[2717]: E0714 22:11:06.769037 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhk87" podUID="7829a7f5-eec5-446a-bd2c-44244faa0a80"
Jul 14 22:11:06.833412 kubelet[2717]: E0714 22:11:06.833360 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:11:06.855926 kubelet[2717]: I0714 22:11:06.855248 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-698b44b86d-4fqpk" podStartSLOduration=1.697693096 podStartE2EDuration="4.855231104s" podCreationTimestamp="2025-07-14 22:11:02 +0000 UTC" firstStartedPulling="2025-07-14 22:11:03.32841721 +0000 UTC m=+22.646451456" lastFinishedPulling="2025-07-14 22:11:06.485955218 +0000 UTC m=+25.803989464" observedRunningTime="2025-07-14 22:11:06.854908696 +0000 UTC m=+26.172942942" watchObservedRunningTime="2025-07-14 22:11:06.855231104 +0000 UTC m=+26.173265350"
Jul 14 22:11:06.913793 kubelet[2717]: E0714 22:11:06.913747 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:11:06.913793 kubelet[2717]: W0714 22:11:06.913773 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:11:06.913793 kubelet[2717]: E0714 22:11:06.913797 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:11:07.834664 kubelet[2717]: I0714 22:11:07.834576 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 14 22:11:07.835087 kubelet[2717]: E0714 22:11:07.835036 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
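The recurring dns.go:153 warning reflects the glibc resolver's three-nameserver limit (MAXNS): kubelet applies the first three entries from the host resolv.conf and reports the rest as omitted, which is why exactly 1.1.1.1, 1.0.0.1, and 8.8.8.8 appear as the applied line. A minimal sketch of that capping policy; the fourth nameserver below is hypothetical, since the omitted entry is not shown in the log:

```go
// Sketch of the nameserver cap behind the "Nameserver limits exceeded" log.
package main

import "fmt"

const maxNameservers = 3 // glibc MAXNS; kubelet warns past this many

// capNameservers keeps the first maxNameservers entries and reports
// whether anything was dropped.
func capNameservers(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// Hypothetical host resolv.conf with one nameserver too many.
	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	if applied, truncated := capNameservers(configured); truncated {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %v\n", applied)
	}
}
```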
Jul 14 22:11:07.924062 kubelet[2717]: E0714 22:11:07.923994 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:11:07.924272 kubelet[2717]: W0714 22:11:07.924050 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:11:07.924272 kubelet[2717]: E0714 22:11:07.924170 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:11:08.008894 containerd[1574]: time="2025-07-14T22:11:08.008839321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:08.010070 containerd[1574]: time="2025-07-14T22:11:08.010025892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 14 22:11:08.011212 containerd[1574]: time="2025-07-14T22:11:08.011184240Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:08.013476 containerd[1574]: time="2025-07-14T22:11:08.013428167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:08.014194 containerd[1574]: time="2025-07-14T22:11:08.014156631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.52790232s"
Jul 14 22:11:08.014194 containerd[1574]: time="2025-07-14T22:11:08.014187551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 14 22:11:08.016907 containerd[1574]: time="2025-07-14T22:11:08.016873834Z" level=info msg="CreateContainer within sandbox \"62ee6d791afff9374582bf91248400ccce9088fbaa9f186042a35840ded20997\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 14 22:11:08.033754 containerd[1574]: time="2025-07-14T22:11:08.033693429Z" level=info msg="CreateContainer within sandbox \"62ee6d791afff9374582bf91248400ccce9088fbaa9f186042a35840ded20997\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c24b8b34b5c6a06daf21e320185776bacfb6e50943972ac679627d6fadd665c2\""
Jul 14 22:11:08.034473 containerd[1574]: time="2025-07-14T22:11:08.034424168Z" level=info msg="StartContainer for \"c24b8b34b5c6a06daf21e320185776bacfb6e50943972ac679627d6fadd665c2\""
Jul 14 22:11:08.419524 containerd[1574]: time="2025-07-14T22:11:08.419444520Z" level=info msg="StartContainer for \"c24b8b34b5c6a06daf21e320185776bacfb6e50943972ac679627d6fadd665c2\" returns successfully"
Jul 14 22:11:08.454442 containerd[1574]: time="2025-07-14T22:11:08.452727356Z" level=info msg="shim disconnected" id=c24b8b34b5c6a06daf21e320185776bacfb6e50943972ac679627d6fadd665c2 namespace=k8s.io
Jul 14 22:11:08.454442 containerd[1574]: time="2025-07-14T22:11:08.454442721Z" level=warning msg="cleaning up after shim disconnected" id=c24b8b34b5c6a06daf21e320185776bacfb6e50943972ac679627d6fadd665c2 namespace=k8s.io
Jul 14 22:11:08.454718 containerd[1574]: time="2025-07-14T22:11:08.454454202Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:11:08.764475 kubelet[2717]: E0714 22:11:08.764242 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhk87" podUID="7829a7f5-eec5-446a-bd2c-44244faa0a80"
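The flexvol-driver container started and exited above comes from the pod2daemon-flexvol image; in Calico's design it is an init container that installs the uds FlexVolume driver into the kubelet plugin directory probed earlier, which is what eventually ends the nodeagent~uds errors (the "shim disconnected" cleanup is just the container exiting after its one-shot job). A hedged sketch of that install step; the source path inside the image is an assumption, while the destination matches the path kubelet probes in the log:

```go
// Sketch: what a flexvol-driver-style init container does, install the
// driver binary as <pluginDir>/nodeagent~uds/uds with execute permission.
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

func installDriver(src, pluginDir string) error {
	dst := filepath.Join(pluginDir, "nodeagent~uds", "uds")
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// "/usr/local/bin/flexvol" is a placeholder for the binary shipped in
	// the image; the plugin dir is the one kubelet probes in the log above.
	if err := installDriver("/usr/local/bin/flexvol",
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```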
Jul 14 22:11:08.838617 containerd[1574]: time="2025-07-14T22:11:08.838583027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 14 22:11:09.029853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c24b8b34b5c6a06daf21e320185776bacfb6e50943972ac679627d6fadd665c2-rootfs.mount: Deactivated successfully.
Jul 14 22:11:10.764570 kubelet[2717]: E0714 22:11:10.764532 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhk87" podUID="7829a7f5-eec5-446a-bd2c-44244faa0a80"
Jul 14 22:11:12.764291 kubelet[2717]: E0714 22:11:12.764234 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhk87" podUID="7829a7f5-eec5-446a-bd2c-44244faa0a80"
Jul 14 22:11:13.665554 containerd[1574]: time="2025-07-14T22:11:13.665495664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:13.666560 containerd[1574]: time="2025-07-14T22:11:13.666525945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 14 22:11:13.668076 containerd[1574]: time="2025-07-14T22:11:13.668034119Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:13.670303 containerd[1574]: time="2025-07-14T22:11:13.670253163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:13.670938 containerd[1574]: time="2025-07-14T22:11:13.670896715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.832277628s"
Jul 14 22:11:13.670938 containerd[1574]: time="2025-07-14T22:11:13.670930940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 14 22:11:13.672737 containerd[1574]: time="2025-07-14T22:11:13.672698550Z" level=info msg="CreateContainer within sandbox \"62ee6d791afff9374582bf91248400ccce9088fbaa9f186042a35840ded20997\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 14 22:11:13.689783 containerd[1574]: time="2025-07-14T22:11:13.689742531Z" level=info msg="CreateContainer within sandbox \"62ee6d791afff9374582bf91248400ccce9088fbaa9f186042a35840ded20997\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e12b304e2fd01ee291db5ab7578b73e73fb03c702b71e64c523200b4fa66afb8\""
Jul 14 22:11:13.690446 containerd[1574]: time="2025-07-14T22:11:13.690342127Z" level=info msg="StartContainer for \"e12b304e2fd01ee291db5ab7578b73e73fb03c702b71e64c523200b4fa66afb8\""
msg="StartContainer for \"e12b304e2fd01ee291db5ab7578b73e73fb03c702b71e64c523200b4fa66afb8\"" Jul 14 22:11:13.756763 containerd[1574]: time="2025-07-14T22:11:13.756706883Z" level=info msg="StartContainer for \"e12b304e2fd01ee291db5ab7578b73e73fb03c702b71e64c523200b4fa66afb8\" returns successfully" Jul 14 22:11:14.695022 kubelet[2717]: I0714 22:11:14.694956 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:11:14.695657 kubelet[2717]: E0714 22:11:14.695519 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:14.764348 kubelet[2717]: E0714 22:11:14.764305 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhk87" podUID="7829a7f5-eec5-446a-bd2c-44244faa0a80" Jul 14 22:11:14.852903 kubelet[2717]: E0714 22:11:14.852870 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:15.325676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e12b304e2fd01ee291db5ab7578b73e73fb03c702b71e64c523200b4fa66afb8-rootfs.mount: Deactivated successfully. Jul 14 22:11:15.329671 containerd[1574]: time="2025-07-14T22:11:15.329580005Z" level=info msg="shim disconnected" id=e12b304e2fd01ee291db5ab7578b73e73fb03c702b71e64c523200b4fa66afb8 namespace=k8s.io Jul 14 22:11:15.329671 containerd[1574]: time="2025-07-14T22:11:15.329658135Z" level=warning msg="cleaning up after shim disconnected" id=e12b304e2fd01ee291db5ab7578b73e73fb03c702b71e64c523200b4fa66afb8 namespace=k8s.io Jul 14 22:11:15.329671 containerd[1574]: time="2025-07-14T22:11:15.329667232Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:11:15.368665 kubelet[2717]: I0714 22:11:15.368635 2717 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 14 22:11:15.521424 kubelet[2717]: I0714 22:11:15.521365 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/386dcad9-cbe7-41a3-a762-3963d0cad867-config-volume\") pod \"coredns-7c65d6cfc9-swnzl\" (UID: \"386dcad9-cbe7-41a3-a762-3963d0cad867\") " pod="kube-system/coredns-7c65d6cfc9-swnzl" Jul 14 22:11:15.521424 kubelet[2717]: I0714 22:11:15.521425 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ce018b19-dfa1-40d7-99a8-a9a4fa008b27-calico-apiserver-certs\") pod \"calico-apiserver-5b9b695b56-szjht\" (UID: \"ce018b19-dfa1-40d7-99a8-a9a4fa008b27\") " pod="calico-apiserver/calico-apiserver-5b9b695b56-szjht" Jul 14 22:11:15.521646 kubelet[2717]: I0714 22:11:15.521447 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlbd8\" (UniqueName: \"kubernetes.io/projected/2b64c079-46df-4d78-82e9-6807a889d4ac-kube-api-access-hlbd8\") pod \"calico-kube-controllers-b8d99f984-kmljb\" (UID: \"2b64c079-46df-4d78-82e9-6807a889d4ac\") " pod="calico-system/calico-kube-controllers-b8d99f984-kmljb" Jul 14 22:11:15.521646 kubelet[2717]: I0714 22:11:15.521489 2717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed-goldmane-key-pair\") pod \"goldmane-58fd7646b9-4trbc\" (UID: \"f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed\") " pod="calico-system/goldmane-58fd7646b9-4trbc" Jul 14 22:11:15.521646 kubelet[2717]: I0714 22:11:15.521604 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c73f2d5-835f-459b-8d41-d88723278569-config-volume\") pod \"coredns-7c65d6cfc9-57dnv\" (UID: \"9c73f2d5-835f-459b-8d41-d88723278569\") " pod="kube-system/coredns-7c65d6cfc9-57dnv" Jul 14 22:11:15.521719 kubelet[2717]: I0714 22:11:15.521652 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhk7b\" (UniqueName: \"kubernetes.io/projected/e58136f5-d596-401c-86d5-192b42224dd4-kube-api-access-zhk7b\") pod \"whisker-f4c76c779-rgtfh\" (UID: \"e58136f5-d596-401c-86d5-192b42224dd4\") " pod="calico-system/whisker-f4c76c779-rgtfh" Jul 14 22:11:15.521719 kubelet[2717]: I0714 22:11:15.521681 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqf26\" (UniqueName: \"kubernetes.io/projected/ce018b19-dfa1-40d7-99a8-a9a4fa008b27-kube-api-access-gqf26\") pod \"calico-apiserver-5b9b695b56-szjht\" (UID: \"ce018b19-dfa1-40d7-99a8-a9a4fa008b27\") " pod="calico-apiserver/calico-apiserver-5b9b695b56-szjht" Jul 14 22:11:15.521774 kubelet[2717]: I0714 22:11:15.521704 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-4trbc\" (UID: \"f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed\") " pod="calico-system/goldmane-58fd7646b9-4trbc" Jul 14 22:11:15.521774 kubelet[2717]: I0714 22:11:15.521760 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b64c079-46df-4d78-82e9-6807a889d4ac-tigera-ca-bundle\") pod \"calico-kube-controllers-b8d99f984-kmljb\" (UID: \"2b64c079-46df-4d78-82e9-6807a889d4ac\") " pod="calico-system/calico-kube-controllers-b8d99f984-kmljb" Jul 14 22:11:15.521825 kubelet[2717]: I0714 22:11:15.521797 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpp6c\" (UniqueName: \"kubernetes.io/projected/9836a141-78f0-47fd-89e8-6b74ea1f1f07-kube-api-access-hpp6c\") pod \"calico-apiserver-5b9b695b56-kzv98\" (UID: \"9836a141-78f0-47fd-89e8-6b74ea1f1f07\") " pod="calico-apiserver/calico-apiserver-5b9b695b56-kzv98" Jul 14 22:11:15.521825 kubelet[2717]: I0714 22:11:15.521817 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lc29\" (UniqueName: \"kubernetes.io/projected/f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed-kube-api-access-5lc29\") pod \"goldmane-58fd7646b9-4trbc\" (UID: \"f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed\") " pod="calico-system/goldmane-58fd7646b9-4trbc" Jul 14 22:11:15.521878 kubelet[2717]: I0714 22:11:15.521834 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9dhd\" (UniqueName: \"kubernetes.io/projected/386dcad9-cbe7-41a3-a762-3963d0cad867-kube-api-access-s9dhd\") pod 
\"coredns-7c65d6cfc9-swnzl\" (UID: \"386dcad9-cbe7-41a3-a762-3963d0cad867\") " pod="kube-system/coredns-7c65d6cfc9-swnzl" Jul 14 22:11:15.521901 kubelet[2717]: I0714 22:11:15.521873 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e58136f5-d596-401c-86d5-192b42224dd4-whisker-ca-bundle\") pod \"whisker-f4c76c779-rgtfh\" (UID: \"e58136f5-d596-401c-86d5-192b42224dd4\") " pod="calico-system/whisker-f4c76c779-rgtfh" Jul 14 22:11:15.521901 kubelet[2717]: I0714 22:11:15.521894 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed-config\") pod \"goldmane-58fd7646b9-4trbc\" (UID: \"f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed\") " pod="calico-system/goldmane-58fd7646b9-4trbc" Jul 14 22:11:15.521950 kubelet[2717]: I0714 22:11:15.521911 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hznjj\" (UniqueName: \"kubernetes.io/projected/9c73f2d5-835f-459b-8d41-d88723278569-kube-api-access-hznjj\") pod \"coredns-7c65d6cfc9-57dnv\" (UID: \"9c73f2d5-835f-459b-8d41-d88723278569\") " pod="kube-system/coredns-7c65d6cfc9-57dnv" Jul 14 22:11:15.521950 kubelet[2717]: I0714 22:11:15.521927 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e58136f5-d596-401c-86d5-192b42224dd4-whisker-backend-key-pair\") pod \"whisker-f4c76c779-rgtfh\" (UID: \"e58136f5-d596-401c-86d5-192b42224dd4\") " pod="calico-system/whisker-f4c76c779-rgtfh" Jul 14 22:11:15.521950 kubelet[2717]: I0714 22:11:15.521945 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9836a141-78f0-47fd-89e8-6b74ea1f1f07-calico-apiserver-certs\") pod \"calico-apiserver-5b9b695b56-kzv98\" (UID: \"9836a141-78f0-47fd-89e8-6b74ea1f1f07\") " pod="calico-apiserver/calico-apiserver-5b9b695b56-kzv98" Jul 14 22:11:15.695911 kubelet[2717]: E0714 22:11:15.695756 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:15.696580 containerd[1574]: time="2025-07-14T22:11:15.696422780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-swnzl,Uid:386dcad9-cbe7-41a3-a762-3963d0cad867,Namespace:kube-system,Attempt:0,}" Jul 14 22:11:15.712981 kubelet[2717]: E0714 22:11:15.712928 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:15.713989 containerd[1574]: time="2025-07-14T22:11:15.713847958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-57dnv,Uid:9c73f2d5-835f-459b-8d41-d88723278569,Namespace:kube-system,Attempt:0,}" Jul 14 22:11:15.724548 containerd[1574]: time="2025-07-14T22:11:15.724485266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9b695b56-szjht,Uid:ce018b19-dfa1-40d7-99a8-a9a4fa008b27,Namespace:calico-apiserver,Attempt:0,}" Jul 14 22:11:15.724713 containerd[1574]: time="2025-07-14T22:11:15.724692373Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-58fd7646b9-4trbc,Uid:f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed,Namespace:calico-system,Attempt:0,}" Jul 14 22:11:15.724850 containerd[1574]: time="2025-07-14T22:11:15.724810308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f4c76c779-rgtfh,Uid:e58136f5-d596-401c-86d5-192b42224dd4,Namespace:calico-system,Attempt:0,}" Jul 14 22:11:15.724987 containerd[1574]: time="2025-07-14T22:11:15.724922733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9b695b56-kzv98,Uid:9836a141-78f0-47fd-89e8-6b74ea1f1f07,Namespace:calico-apiserver,Attempt:0,}" Jul 14 22:11:15.726102 containerd[1574]: time="2025-07-14T22:11:15.726065018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b8d99f984-kmljb,Uid:2b64c079-46df-4d78-82e9-6807a889d4ac,Namespace:calico-system,Attempt:0,}" Jul 14 22:11:15.813175 containerd[1574]: time="2025-07-14T22:11:15.812258174Z" level=error msg="Failed to destroy network for sandbox \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.813175 containerd[1574]: time="2025-07-14T22:11:15.812726079Z" level=error msg="encountered an error cleaning up failed sandbox \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.813175 containerd[1574]: time="2025-07-14T22:11:15.812777346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-57dnv,Uid:9c73f2d5-835f-459b-8d41-d88723278569,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.813872 containerd[1574]: time="2025-07-14T22:11:15.813652840Z" level=error msg="Failed to destroy network for sandbox \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.815388 containerd[1574]: time="2025-07-14T22:11:15.814125273Z" level=error msg="encountered an error cleaning up failed sandbox \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.815388 containerd[1574]: time="2025-07-14T22:11:15.814183145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-swnzl,Uid:386dcad9-cbe7-41a3-a762-3963d0cad867,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.824546 kubelet[2717]: E0714 22:11:15.824481 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.824661 kubelet[2717]: E0714 22:11:15.824536 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.824661 kubelet[2717]: E0714 22:11:15.824572 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-swnzl" Jul 14 22:11:15.824661 kubelet[2717]: E0714 22:11:15.824594 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-swnzl" Jul 14 22:11:15.824746 kubelet[2717]: E0714 22:11:15.824635 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-swnzl_kube-system(386dcad9-cbe7-41a3-a762-3963d0cad867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-swnzl_kube-system(386dcad9-cbe7-41a3-a762-3963d0cad867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-swnzl" podUID="386dcad9-cbe7-41a3-a762-3963d0cad867" Jul 14 22:11:15.824746 kubelet[2717]: E0714 22:11:15.824715 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-57dnv" Jul 14 22:11:15.824856 kubelet[2717]: E0714 22:11:15.824739 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-57dnv" Jul 14 22:11:15.824960 kubelet[2717]: E0714 22:11:15.824885 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-57dnv_kube-system(9c73f2d5-835f-459b-8d41-d88723278569)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-57dnv_kube-system(9c73f2d5-835f-459b-8d41-d88723278569)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-57dnv" podUID="9c73f2d5-835f-459b-8d41-d88723278569" Jul 14 22:11:15.861879 kubelet[2717]: I0714 22:11:15.861819 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Jul 14 22:11:15.930277 containerd[1574]: time="2025-07-14T22:11:15.930207179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 14 22:11:15.932468 containerd[1574]: time="2025-07-14T22:11:15.932427765Z" level=info msg="StopPodSandbox for \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\"" Jul 14 22:11:15.934069 kubelet[2717]: I0714 22:11:15.934039 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Jul 14 22:11:15.934934 containerd[1574]: time="2025-07-14T22:11:15.934881186Z" level=info msg="StopPodSandbox for \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\"" Jul 14 22:11:15.935348 containerd[1574]: time="2025-07-14T22:11:15.935310277Z" level=info msg="Ensure that sandbox 4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c in task-service has been cleanup successfully" Jul 14 22:11:15.937166 containerd[1574]: time="2025-07-14T22:11:15.936877152Z" level=info msg="Ensure that sandbox 3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0 in task-service has been cleanup successfully" Jul 14 22:11:15.957478 containerd[1574]: time="2025-07-14T22:11:15.957324950Z" level=error msg="Failed to destroy network for sandbox \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.960421 containerd[1574]: time="2025-07-14T22:11:15.960380192Z" level=error msg="Failed to destroy network for sandbox \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.960653 containerd[1574]: time="2025-07-14T22:11:15.960599110Z" level=error msg="encountered an error cleaning up failed sandbox \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jul 14 22:11:15.960860 containerd[1574]: time="2025-07-14T22:11:15.960838278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9b695b56-szjht,Uid:ce018b19-dfa1-40d7-99a8-a9a4fa008b27,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.961067 containerd[1574]: time="2025-07-14T22:11:15.960856823Z" level=error msg="encountered an error cleaning up failed sandbox \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.961320 kubelet[2717]: E0714 22:11:15.961288 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.961492 kubelet[2717]: E0714 22:11:15.961473 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b9b695b56-szjht" Jul 14 22:11:15.961583 kubelet[2717]: E0714 22:11:15.961568 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b9b695b56-szjht" Jul 14 22:11:15.961709 kubelet[2717]: E0714 22:11:15.961673 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b9b695b56-szjht_calico-apiserver(ce018b19-dfa1-40d7-99a8-a9a4fa008b27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b9b695b56-szjht_calico-apiserver(ce018b19-dfa1-40d7-99a8-a9a4fa008b27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b9b695b56-szjht" podUID="ce018b19-dfa1-40d7-99a8-a9a4fa008b27" Jul 14 22:11:15.961917 containerd[1574]: time="2025-07-14T22:11:15.961893555Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-58fd7646b9-4trbc,Uid:f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.962049 containerd[1574]: time="2025-07-14T22:11:15.960894646Z" level=error msg="Failed to destroy network for sandbox \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.962207 kubelet[2717]: E0714 22:11:15.962190 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.962302 kubelet[2717]: E0714 22:11:15.962285 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-4trbc" Jul 14 22:11:15.962439 kubelet[2717]: E0714 22:11:15.962404 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-4trbc" Jul 14 22:11:15.962568 kubelet[2717]: E0714 22:11:15.962550 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-4trbc_calico-system(f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-4trbc_calico-system(f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-4trbc" podUID="f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed" Jul 14 22:11:15.962976 containerd[1574]: time="2025-07-14T22:11:15.962951228Z" level=error msg="encountered an error cleaning up failed sandbox \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.963081 containerd[1574]: 
time="2025-07-14T22:11:15.963060337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9b695b56-kzv98,Uid:9836a141-78f0-47fd-89e8-6b74ea1f1f07,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.963261 kubelet[2717]: E0714 22:11:15.963241 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.963399 kubelet[2717]: E0714 22:11:15.963344 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b9b695b56-kzv98" Jul 14 22:11:15.963476 kubelet[2717]: E0714 22:11:15.963463 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b9b695b56-kzv98" Jul 14 22:11:15.963626 kubelet[2717]: E0714 22:11:15.963607 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b9b695b56-kzv98_calico-apiserver(9836a141-78f0-47fd-89e8-6b74ea1f1f07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b9b695b56-kzv98_calico-apiserver(9836a141-78f0-47fd-89e8-6b74ea1f1f07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b9b695b56-kzv98" podUID="9836a141-78f0-47fd-89e8-6b74ea1f1f07" Jul 14 22:11:15.981722 containerd[1574]: time="2025-07-14T22:11:15.981659109Z" level=error msg="StopPodSandbox for \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\" failed" error="failed to destroy network for sandbox \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.982473 kubelet[2717]: E0714 22:11:15.982319 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Jul 14 22:11:15.982473 kubelet[2717]: E0714 22:11:15.982415 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c"} Jul 14 22:11:15.982595 kubelet[2717]: E0714 22:11:15.982556 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c73f2d5-835f-459b-8d41-d88723278569\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:11:15.982655 kubelet[2717]: E0714 22:11:15.982589 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c73f2d5-835f-459b-8d41-d88723278569\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-57dnv" podUID="9c73f2d5-835f-459b-8d41-d88723278569" Jul 14 22:11:15.982930 containerd[1574]: time="2025-07-14T22:11:15.982881565Z" level=error msg="Failed to destroy network for sandbox \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.983341 containerd[1574]: time="2025-07-14T22:11:15.983296128Z" level=error msg="encountered an error cleaning up failed sandbox \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.983378 containerd[1574]: time="2025-07-14T22:11:15.983343759Z" level=error msg="StopPodSandbox for \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\" failed" error="failed to destroy network for sandbox \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.983489 containerd[1574]: time="2025-07-14T22:11:15.983353928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f4c76c779-rgtfh,Uid:e58136f5-d596-401c-86d5-192b42224dd4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.983845 kubelet[2717]: E0714 22:11:15.983622 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Jul 14 22:11:15.983845 kubelet[2717]: E0714 22:11:15.983662 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0"} Jul 14 22:11:15.983845 kubelet[2717]: E0714 22:11:15.983657 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.983845 kubelet[2717]: E0714 22:11:15.983692 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"386dcad9-cbe7-41a3-a762-3963d0cad867\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:11:15.983992 kubelet[2717]: E0714 22:11:15.983710 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"386dcad9-cbe7-41a3-a762-3963d0cad867\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-swnzl" podUID="386dcad9-cbe7-41a3-a762-3963d0cad867" Jul 14 22:11:15.983992 kubelet[2717]: E0714 22:11:15.983713 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f4c76c779-rgtfh" Jul 14 22:11:15.983992 kubelet[2717]: E0714 22:11:15.983732 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f4c76c779-rgtfh" Jul 14 22:11:15.984100 kubelet[2717]: E0714 22:11:15.983780 2717 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f4c76c779-rgtfh_calico-system(e58136f5-d596-401c-86d5-192b42224dd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f4c76c779-rgtfh_calico-system(e58136f5-d596-401c-86d5-192b42224dd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f4c76c779-rgtfh" podUID="e58136f5-d596-401c-86d5-192b42224dd4" Jul 14 22:11:15.984658 containerd[1574]: time="2025-07-14T22:11:15.984612214Z" level=error msg="Failed to destroy network for sandbox \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.985000 containerd[1574]: time="2025-07-14T22:11:15.984976382Z" level=error msg="encountered an error cleaning up failed sandbox \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.985068 containerd[1574]: time="2025-07-14T22:11:15.985018492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b8d99f984-kmljb,Uid:2b64c079-46df-4d78-82e9-6807a889d4ac,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.985234 kubelet[2717]: E0714 22:11:15.985202 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:15.985234 kubelet[2717]: E0714 22:11:15.985233 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b8d99f984-kmljb" Jul 14 22:11:15.985383 kubelet[2717]: E0714 22:11:15.985247 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-b8d99f984-kmljb" Jul 14 22:11:15.985383 kubelet[2717]: E0714 22:11:15.985277 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b8d99f984-kmljb_calico-system(2b64c079-46df-4d78-82e9-6807a889d4ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b8d99f984-kmljb_calico-system(2b64c079-46df-4d78-82e9-6807a889d4ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b8d99f984-kmljb" podUID="2b64c079-46df-4d78-82e9-6807a889d4ac" Jul 14 22:11:16.767870 containerd[1574]: time="2025-07-14T22:11:16.767710983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhk87,Uid:7829a7f5-eec5-446a-bd2c-44244faa0a80,Namespace:calico-system,Attempt:0,}" Jul 14 22:11:16.826344 containerd[1574]: time="2025-07-14T22:11:16.826281997Z" level=error msg="Failed to destroy network for sandbox \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:16.826814 containerd[1574]: time="2025-07-14T22:11:16.826770559Z" level=error msg="encountered an error cleaning up failed sandbox \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:16.826882 containerd[1574]: time="2025-07-14T22:11:16.826833168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhk87,Uid:7829a7f5-eec5-446a-bd2c-44244faa0a80,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:16.827252 kubelet[2717]: E0714 22:11:16.827186 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:16.827826 kubelet[2717]: E0714 22:11:16.827278 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bhk87" Jul 14 22:11:16.827826 kubelet[2717]: E0714 22:11:16.827307 2717 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bhk87" Jul 14 22:11:16.827826 kubelet[2717]: E0714 22:11:16.827370 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bhk87_calico-system(7829a7f5-eec5-446a-bd2c-44244faa0a80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bhk87_calico-system(7829a7f5-eec5-446a-bd2c-44244faa0a80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bhk87" podUID="7829a7f5-eec5-446a-bd2c-44244faa0a80" Jul 14 22:11:16.829463 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802-shm.mount: Deactivated successfully. Jul 14 22:11:16.936655 kubelet[2717]: I0714 22:11:16.936609 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Jul 14 22:11:16.937252 containerd[1574]: time="2025-07-14T22:11:16.937215160Z" level=info msg="StopPodSandbox for \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\"" Jul 14 22:11:16.937405 containerd[1574]: time="2025-07-14T22:11:16.937384753Z" level=info msg="Ensure that sandbox 6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797 in task-service has been cleanup successfully" Jul 14 22:11:16.937641 kubelet[2717]: I0714 22:11:16.937620 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Jul 14 22:11:16.938105 containerd[1574]: time="2025-07-14T22:11:16.938073228Z" level=info msg="StopPodSandbox for \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\"" Jul 14 22:11:16.938286 containerd[1574]: time="2025-07-14T22:11:16.938265915Z" level=info msg="Ensure that sandbox 5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2 in task-service has been cleanup successfully" Jul 14 22:11:16.939921 kubelet[2717]: I0714 22:11:16.939889 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Jul 14 22:11:16.941090 containerd[1574]: time="2025-07-14T22:11:16.940449434Z" level=info msg="StopPodSandbox for \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\"" Jul 14 22:11:16.941265 kubelet[2717]: I0714 22:11:16.941226 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Jul 14 22:11:16.942114 containerd[1574]: time="2025-07-14T22:11:16.942092220Z" level=info msg="StopPodSandbox for \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\"" Jul 14 22:11:16.942286 containerd[1574]: time="2025-07-14T22:11:16.942265381Z" level=info msg="Ensure that sandbox 
6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f in task-service has been cleanup successfully" Jul 14 22:11:16.948066 kubelet[2717]: I0714 22:11:16.948024 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Jul 14 22:11:16.949194 containerd[1574]: time="2025-07-14T22:11:16.949060675Z" level=info msg="Ensure that sandbox 232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba in task-service has been cleanup successfully" Jul 14 22:11:16.949381 containerd[1574]: time="2025-07-14T22:11:16.949355127Z" level=info msg="StopPodSandbox for \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\"" Jul 14 22:11:16.949947 containerd[1574]: time="2025-07-14T22:11:16.949752786Z" level=info msg="Ensure that sandbox c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802 in task-service has been cleanup successfully" Jul 14 22:11:16.950877 kubelet[2717]: I0714 22:11:16.950859 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Jul 14 22:11:16.953285 containerd[1574]: time="2025-07-14T22:11:16.953259882Z" level=info msg="StopPodSandbox for \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\"" Jul 14 22:11:16.953576 containerd[1574]: time="2025-07-14T22:11:16.953556779Z" level=info msg="Ensure that sandbox 108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c in task-service has been cleanup successfully" Jul 14 22:11:16.976683 containerd[1574]: time="2025-07-14T22:11:16.976633006Z" level=error msg="StopPodSandbox for \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\" failed" error="failed to destroy network for sandbox \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:16.976939 kubelet[2717]: E0714 22:11:16.976885 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Jul 14 22:11:16.977010 kubelet[2717]: E0714 22:11:16.976957 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2"} Jul 14 22:11:16.977037 kubelet[2717]: E0714 22:11:16.977002 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce018b19-dfa1-40d7-99a8-a9a4fa008b27\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:11:16.977103 kubelet[2717]: E0714 22:11:16.977031 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"ce018b19-dfa1-40d7-99a8-a9a4fa008b27\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b9b695b56-szjht" podUID="ce018b19-dfa1-40d7-99a8-a9a4fa008b27" Jul 14 22:11:16.990650 containerd[1574]: time="2025-07-14T22:11:16.990477708Z" level=error msg="StopPodSandbox for \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\" failed" error="failed to destroy network for sandbox \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:16.990913 kubelet[2717]: E0714 22:11:16.990845 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Jul 14 22:11:16.990974 kubelet[2717]: E0714 22:11:16.990917 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797"} Jul 14 22:11:16.990974 kubelet[2717]: E0714 22:11:16.990961 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e58136f5-d596-401c-86d5-192b42224dd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:11:16.991066 kubelet[2717]: E0714 22:11:16.990989 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e58136f5-d596-401c-86d5-192b42224dd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f4c76c779-rgtfh" podUID="e58136f5-d596-401c-86d5-192b42224dd4" Jul 14 22:11:16.992802 containerd[1574]: time="2025-07-14T22:11:16.992647039Z" level=error msg="StopPodSandbox for \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\" failed" error="failed to destroy network for sandbox \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:16.992984 kubelet[2717]: E0714 22:11:16.992944 2717 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Jul 14 22:11:16.993165 kubelet[2717]: E0714 22:11:16.993077 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f"} Jul 14 22:11:16.993165 kubelet[2717]: E0714 22:11:16.993121 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:11:16.993165 kubelet[2717]: E0714 22:11:16.993142 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-4trbc" podUID="f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed" Jul 14 22:11:16.994547 containerd[1574]: time="2025-07-14T22:11:16.994463658Z" level=error msg="StopPodSandbox for \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\" failed" error="failed to destroy network for sandbox \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:16.994652 kubelet[2717]: E0714 22:11:16.994621 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Jul 14 22:11:16.994701 kubelet[2717]: E0714 22:11:16.994669 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba"} Jul 14 22:11:16.994701 kubelet[2717]: E0714 22:11:16.994694 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b64c079-46df-4d78-82e9-6807a889d4ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:11:16.994840 kubelet[2717]: E0714 22:11:16.994717 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b64c079-46df-4d78-82e9-6807a889d4ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b8d99f984-kmljb" podUID="2b64c079-46df-4d78-82e9-6807a889d4ac" Jul 14 22:11:16.995031 containerd[1574]: time="2025-07-14T22:11:16.995000883Z" level=error msg="StopPodSandbox for \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\" failed" error="failed to destroy network for sandbox \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:16.995226 kubelet[2717]: E0714 22:11:16.995188 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Jul 14 22:11:16.995226 kubelet[2717]: E0714 22:11:16.995218 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c"} Jul 14 22:11:16.995289 kubelet[2717]: E0714 22:11:16.995240 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9836a141-78f0-47fd-89e8-6b74ea1f1f07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:11:16.995289 kubelet[2717]: E0714 22:11:16.995258 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9836a141-78f0-47fd-89e8-6b74ea1f1f07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b9b695b56-kzv98" podUID="9836a141-78f0-47fd-89e8-6b74ea1f1f07" Jul 14 22:11:16.995856 containerd[1574]: time="2025-07-14T22:11:16.995821250Z" level=error msg="StopPodSandbox for \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\" failed" error="failed to destroy network for sandbox \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:11:16.995997 kubelet[2717]: E0714 22:11:16.995966 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Jul 14 22:11:16.995997 kubelet[2717]: E0714 22:11:16.995992 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802"} Jul 14 22:11:16.996071 kubelet[2717]: E0714 22:11:16.996014 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7829a7f5-eec5-446a-bd2c-44244faa0a80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:11:16.996071 kubelet[2717]: E0714 22:11:16.996043 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7829a7f5-eec5-446a-bd2c-44244faa0a80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bhk87" podUID="7829a7f5-eec5-446a-bd2c-44244faa0a80" Jul 14 22:11:22.422745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3371597731.mount: Deactivated successfully. 
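[Annotation] Every KillPodSandbox retry above fails on the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, before it will tear down a sandbox's network. A minimal Go sketch of that gate, inferred only from the error text in these entries (the real check lives in Calico's cni-plugin; the path and message are the only details taken from the log):

```go
package main

import (
	"fmt"
	"os"
)

// nodenameFile is written by the calico/node container once it is up;
// the CNI plugin treats its absence as "networking is not ready yet".
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	// The delete hook stats this file before tearing down a sandbox.
	// While calico/node is still starting, the stat fails and kubelet
	// requeues KillPodSandbox, producing the repeated errors above.
	if _, err := os.Stat(nodenameFile); err != nil {
		fmt.Printf("plugin type=%q failed (delete): %v: "+
			"check that the calico/node container is running and has mounted /var/lib/calico/\n",
			"calico", err)
		os.Exit(1)
	}
	fmt.Println("nodename present; teardown can proceed")
}
```

Until calico-node starts (the image pull that unblocks it completes in the entries just below), every delete attempt hits this stat and the kubelet keeps requeueing the affected pods.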
Jul 14 22:11:24.805856 containerd[1574]: time="2025-07-14T22:11:24.805776978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:24.819831 containerd[1574]: time="2025-07-14T22:11:24.819726700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 14 22:11:24.852478 containerd[1574]: time="2025-07-14T22:11:24.852392174Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:24.855415 containerd[1574]: time="2025-07-14T22:11:24.855364927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:24.856100 containerd[1574]: time="2025-07-14T22:11:24.856065691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 8.925797265s" Jul 14 22:11:24.856151 containerd[1574]: time="2025-07-14T22:11:24.856104465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 14 22:11:24.866720 containerd[1574]: time="2025-07-14T22:11:24.866664425Z" level=info msg="CreateContainer within sandbox \"62ee6d791afff9374582bf91248400ccce9088fbaa9f186042a35840ded20997\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 14 22:11:24.895487 containerd[1574]: time="2025-07-14T22:11:24.892923762Z" level=info msg="CreateContainer within sandbox \"62ee6d791afff9374582bf91248400ccce9088fbaa9f186042a35840ded20997\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4b2d8320fe882e6040e64d7a2e6f977c5920852c5a853e8ec8e1831137345bb8\"" Jul 14 22:11:24.896180 containerd[1574]: time="2025-07-14T22:11:24.896144574Z" level=info msg="StartContainer for \"4b2d8320fe882e6040e64d7a2e6f977c5920852c5a853e8ec8e1831137345bb8\"" Jul 14 22:11:24.989255 containerd[1574]: time="2025-07-14T22:11:24.989197051Z" level=info msg="StartContainer for \"4b2d8320fe882e6040e64d7a2e6f977c5920852c5a853e8ec8e1831137345bb8\" returns successfully" Jul 14 22:11:25.075089 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 14 22:11:25.075810 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 14 22:11:25.173578 containerd[1574]: time="2025-07-14T22:11:25.173532563Z" level=info msg="StopPodSandbox for \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\"" Jul 14 22:11:25.349228 containerd[1574]: 2025-07-14 22:11:25.270 [INFO][4017] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Jul 14 22:11:25.349228 containerd[1574]: 2025-07-14 22:11:25.270 [INFO][4017] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" iface="eth0" netns="/var/run/netns/cni-398acb6d-a8e1-7d52-3496-1e32e68b8c1d" Jul 14 22:11:25.349228 containerd[1574]: 2025-07-14 22:11:25.271 [INFO][4017] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" iface="eth0" netns="/var/run/netns/cni-398acb6d-a8e1-7d52-3496-1e32e68b8c1d" Jul 14 22:11:25.349228 containerd[1574]: 2025-07-14 22:11:25.272 [INFO][4017] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" iface="eth0" netns="/var/run/netns/cni-398acb6d-a8e1-7d52-3496-1e32e68b8c1d" Jul 14 22:11:25.349228 containerd[1574]: 2025-07-14 22:11:25.272 [INFO][4017] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Jul 14 22:11:25.349228 containerd[1574]: 2025-07-14 22:11:25.272 [INFO][4017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Jul 14 22:11:25.349228 containerd[1574]: 2025-07-14 22:11:25.332 [INFO][4028] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" HandleID="k8s-pod-network.6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Workload="localhost-k8s-whisker--f4c76c779--rgtfh-eth0" Jul 14 22:11:25.349228 containerd[1574]: 2025-07-14 22:11:25.332 [INFO][4028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:25.349228 containerd[1574]: 2025-07-14 22:11:25.333 [INFO][4028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:25.349228 containerd[1574]: 2025-07-14 22:11:25.341 [WARNING][4028] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" HandleID="k8s-pod-network.6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Workload="localhost-k8s-whisker--f4c76c779--rgtfh-eth0" Jul 14 22:11:25.349228 containerd[1574]: 2025-07-14 22:11:25.341 [INFO][4028] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" HandleID="k8s-pod-network.6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Workload="localhost-k8s-whisker--f4c76c779--rgtfh-eth0" Jul 14 22:11:25.349228 containerd[1574]: 2025-07-14 22:11:25.343 [INFO][4028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:25.349228 containerd[1574]: 2025-07-14 22:11:25.346 [INFO][4017] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Jul 14 22:11:25.349781 containerd[1574]: time="2025-07-14T22:11:25.349331526Z" level=info msg="TearDown network for sandbox \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\" successfully" Jul 14 22:11:25.349781 containerd[1574]: time="2025-07-14T22:11:25.349369167Z" level=info msg="StopPodSandbox for \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\" returns successfully" Jul 14 22:11:25.484255 kubelet[2717]: I0714 22:11:25.484187 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhk7b\" (UniqueName: \"kubernetes.io/projected/e58136f5-d596-401c-86d5-192b42224dd4-kube-api-access-zhk7b\") pod \"e58136f5-d596-401c-86d5-192b42224dd4\" (UID: \"e58136f5-d596-401c-86d5-192b42224dd4\") " Jul 14 22:11:25.484255 kubelet[2717]: I0714 22:11:25.484245 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e58136f5-d596-401c-86d5-192b42224dd4-whisker-ca-bundle\") pod \"e58136f5-d596-401c-86d5-192b42224dd4\" (UID: \"e58136f5-d596-401c-86d5-192b42224dd4\") " Jul 14 22:11:25.484803 kubelet[2717]: I0714 22:11:25.484275 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e58136f5-d596-401c-86d5-192b42224dd4-whisker-backend-key-pair\") pod \"e58136f5-d596-401c-86d5-192b42224dd4\" (UID: \"e58136f5-d596-401c-86d5-192b42224dd4\") " Jul 14 22:11:25.484885 kubelet[2717]: I0714 22:11:25.484838 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e58136f5-d596-401c-86d5-192b42224dd4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e58136f5-d596-401c-86d5-192b42224dd4" (UID: "e58136f5-d596-401c-86d5-192b42224dd4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 22:11:25.488089 kubelet[2717]: I0714 22:11:25.488035 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e58136f5-d596-401c-86d5-192b42224dd4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e58136f5-d596-401c-86d5-192b42224dd4" (UID: "e58136f5-d596-401c-86d5-192b42224dd4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 22:11:25.488089 kubelet[2717]: I0714 22:11:25.488045 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e58136f5-d596-401c-86d5-192b42224dd4-kube-api-access-zhk7b" (OuterVolumeSpecName: "kube-api-access-zhk7b") pod "e58136f5-d596-401c-86d5-192b42224dd4" (UID: "e58136f5-d596-401c-86d5-192b42224dd4"). InnerVolumeSpecName "kube-api-access-zhk7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:11:25.585516 kubelet[2717]: I0714 22:11:25.585450 2717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhk7b\" (UniqueName: \"kubernetes.io/projected/e58136f5-d596-401c-86d5-192b42224dd4-kube-api-access-zhk7b\") on node \"localhost\" DevicePath \"\"" Jul 14 22:11:25.585516 kubelet[2717]: I0714 22:11:25.585485 2717 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e58136f5-d596-401c-86d5-192b42224dd4-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 14 22:11:25.585516 kubelet[2717]: I0714 22:11:25.585495 2717 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e58136f5-d596-401c-86d5-192b42224dd4-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 14 22:11:25.863543 systemd[1]: run-netns-cni\x2d398acb6d\x2da8e1\x2d7d52\x2d3496\x2d1e32e68b8c1d.mount: Deactivated successfully. Jul 14 22:11:25.863757 systemd[1]: var-lib-kubelet-pods-e58136f5\x2dd596\x2d401c\x2d86d5\x2d192b42224dd4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzhk7b.mount: Deactivated successfully. Jul 14 22:11:25.863971 systemd[1]: var-lib-kubelet-pods-e58136f5\x2dd596\x2d401c\x2d86d5\x2d192b42224dd4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 14 22:11:25.986935 kubelet[2717]: I0714 22:11:25.986867 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zkljv" podStartSLOduration=1.679448847 podStartE2EDuration="22.986847216s" podCreationTimestamp="2025-07-14 22:11:03 +0000 UTC" firstStartedPulling="2025-07-14 22:11:03.549591788 +0000 UTC m=+22.867626034" lastFinishedPulling="2025-07-14 22:11:24.856990157 +0000 UTC m=+44.175024403" observedRunningTime="2025-07-14 22:11:25.985279405 +0000 UTC m=+45.303313651" watchObservedRunningTime="2025-07-14 22:11:25.986847216 +0000 UTC m=+45.304881462" Jul 14 22:11:26.088802 kubelet[2717]: I0714 22:11:26.088754 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7590d62a-2596-4304-a0ca-452c325546ae-whisker-backend-key-pair\") pod \"whisker-85b59d4bd5-p6dr7\" (UID: \"7590d62a-2596-4304-a0ca-452c325546ae\") " pod="calico-system/whisker-85b59d4bd5-p6dr7" Jul 14 22:11:26.088802 kubelet[2717]: I0714 22:11:26.088806 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56sjv\" (UniqueName: \"kubernetes.io/projected/7590d62a-2596-4304-a0ca-452c325546ae-kube-api-access-56sjv\") pod \"whisker-85b59d4bd5-p6dr7\" (UID: \"7590d62a-2596-4304-a0ca-452c325546ae\") " pod="calico-system/whisker-85b59d4bd5-p6dr7" Jul 14 22:11:26.089039 kubelet[2717]: I0714 22:11:26.088858 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7590d62a-2596-4304-a0ca-452c325546ae-whisker-ca-bundle\") pod \"whisker-85b59d4bd5-p6dr7\" (UID: \"7590d62a-2596-4304-a0ca-452c325546ae\") " pod="calico-system/whisker-85b59d4bd5-p6dr7" Jul 14 22:11:26.328278 containerd[1574]: time="2025-07-14T22:11:26.328231804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85b59d4bd5-p6dr7,Uid:7590d62a-2596-4304-a0ca-452c325546ae,Namespace:calico-system,Attempt:0,}" Jul 14 
22:11:26.495080 systemd-networkd[1245]: cali96bd94874f5: Link UP Jul 14 22:11:26.495294 systemd-networkd[1245]: cali96bd94874f5: Gained carrier Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.361 [INFO][4053] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.377 [INFO][4053] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--85b59d4bd5--p6dr7-eth0 whisker-85b59d4bd5- calico-system 7590d62a-2596-4304-a0ca-452c325546ae 953 0 2025-07-14 22:11:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:85b59d4bd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-85b59d4bd5-p6dr7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali96bd94874f5 [] [] }} ContainerID="86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" Namespace="calico-system" Pod="whisker-85b59d4bd5-p6dr7" WorkloadEndpoint="localhost-k8s-whisker--85b59d4bd5--p6dr7-" Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.377 [INFO][4053] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" Namespace="calico-system" Pod="whisker-85b59d4bd5-p6dr7" WorkloadEndpoint="localhost-k8s-whisker--85b59d4bd5--p6dr7-eth0" Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.424 [INFO][4092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" HandleID="k8s-pod-network.86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" Workload="localhost-k8s-whisker--85b59d4bd5--p6dr7-eth0" Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.425 [INFO][4092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" HandleID="k8s-pod-network.86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" Workload="localhost-k8s-whisker--85b59d4bd5--p6dr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001302d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-85b59d4bd5-p6dr7", "timestamp":"2025-07-14 22:11:26.424963467 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.425 [INFO][4092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.425 [INFO][4092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.425 [INFO][4092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.433 [INFO][4092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" host="localhost" Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.443 [INFO][4092] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.457 [INFO][4092] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.463 [INFO][4092] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.465 [INFO][4092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.465 [INFO][4092] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" host="localhost" Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.467 [INFO][4092] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.471 [INFO][4092] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" host="localhost" Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.477 [INFO][4092] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" host="localhost" Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.478 [INFO][4092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" host="localhost" Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.478 [INFO][4092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
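[Annotation] The ipam walkthrough above (look up host affinity, load block 192.168.88.128/26, assign one address, write the block back under the host-wide lock) ends with 192.168.88.129 claimed for the whisker pod. An illustrative reduction of the assignment step, assuming a simple in-memory used-set; Calico's real allocator also persists claims to the datastore and handles contention, which this sketch omits:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks a block in address order and returns the first address
// not yet claimed. This collapses the logged sequence (confirm affinity,
// load block, assign 1 address) into one in-memory step.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	// Assume the block's first address is already in use, as the log
	// implies: the first workload received .129, not .128.
	used := map[netip.Addr]bool{netip.MustParseAddr("192.168.88.128"): true}
	if a, ok := nextFree(block, used); ok {
		fmt.Printf("claimed %s from block %s\n", a, block) // 192.168.88.129
	}
}
```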
Jul 14 22:11:26.524096 containerd[1574]: 2025-07-14 22:11:26.478 [INFO][4092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" HandleID="k8s-pod-network.86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" Workload="localhost-k8s-whisker--85b59d4bd5--p6dr7-eth0" Jul 14 22:11:26.524730 containerd[1574]: 2025-07-14 22:11:26.485 [INFO][4053] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" Namespace="calico-system" Pod="whisker-85b59d4bd5-p6dr7" WorkloadEndpoint="localhost-k8s-whisker--85b59d4bd5--p6dr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85b59d4bd5--p6dr7-eth0", GenerateName:"whisker-85b59d4bd5-", Namespace:"calico-system", SelfLink:"", UID:"7590d62a-2596-4304-a0ca-452c325546ae", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85b59d4bd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-85b59d4bd5-p6dr7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali96bd94874f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:26.524730 containerd[1574]: 2025-07-14 22:11:26.485 [INFO][4053] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" Namespace="calico-system" Pod="whisker-85b59d4bd5-p6dr7" WorkloadEndpoint="localhost-k8s-whisker--85b59d4bd5--p6dr7-eth0" Jul 14 22:11:26.524730 containerd[1574]: 2025-07-14 22:11:26.485 [INFO][4053] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96bd94874f5 ContainerID="86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" Namespace="calico-system" Pod="whisker-85b59d4bd5-p6dr7" WorkloadEndpoint="localhost-k8s-whisker--85b59d4bd5--p6dr7-eth0" Jul 14 22:11:26.524730 containerd[1574]: 2025-07-14 22:11:26.497 [INFO][4053] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" Namespace="calico-system" Pod="whisker-85b59d4bd5-p6dr7" WorkloadEndpoint="localhost-k8s-whisker--85b59d4bd5--p6dr7-eth0" Jul 14 22:11:26.524730 containerd[1574]: 2025-07-14 22:11:26.498 [INFO][4053] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" Namespace="calico-system" Pod="whisker-85b59d4bd5-p6dr7" WorkloadEndpoint="localhost-k8s-whisker--85b59d4bd5--p6dr7-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85b59d4bd5--p6dr7-eth0", GenerateName:"whisker-85b59d4bd5-", Namespace:"calico-system", SelfLink:"", UID:"7590d62a-2596-4304-a0ca-452c325546ae", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85b59d4bd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec", Pod:"whisker-85b59d4bd5-p6dr7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali96bd94874f5", MAC:"96:b9:2f:6a:26:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:26.524730 containerd[1574]: 2025-07-14 22:11:26.511 [INFO][4053] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec" Namespace="calico-system" Pod="whisker-85b59d4bd5-p6dr7" WorkloadEndpoint="localhost-k8s-whisker--85b59d4bd5--p6dr7-eth0" Jul 14 22:11:26.581543 kernel: bpftool[4189]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 14 22:11:26.656693 containerd[1574]: time="2025-07-14T22:11:26.656589446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:11:26.656693 containerd[1574]: time="2025-07-14T22:11:26.656665790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:11:26.657281 containerd[1574]: time="2025-07-14T22:11:26.656681701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:26.657281 containerd[1574]: time="2025-07-14T22:11:26.656812607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:26.696164 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:11:26.728961 containerd[1574]: time="2025-07-14T22:11:26.728894187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85b59d4bd5-p6dr7,Uid:7590d62a-2596-4304-a0ca-452c325546ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec\"" Jul 14 22:11:26.731690 containerd[1574]: time="2025-07-14T22:11:26.731402417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 14 22:11:26.766742 kubelet[2717]: I0714 22:11:26.766694 2717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e58136f5-d596-401c-86d5-192b42224dd4" path="/var/lib/kubelet/pods/e58136f5-d596-401c-86d5-192b42224dd4/volumes" Jul 14 22:11:26.873137 systemd-networkd[1245]: vxlan.calico: Link UP Jul 14 22:11:26.873148 systemd-networkd[1245]: vxlan.calico: Gained carrier Jul 14 22:11:27.555677 systemd-networkd[1245]: cali96bd94874f5: Gained IPv6LL Jul 14 22:11:27.765380 containerd[1574]: time="2025-07-14T22:11:27.765329045Z" level=info msg="StopPodSandbox for \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\"" Jul 14 22:11:27.850541 containerd[1574]: 2025-07-14 22:11:27.810 [INFO][4335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Jul 14 22:11:27.850541 containerd[1574]: 2025-07-14 22:11:27.810 [INFO][4335] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" iface="eth0" netns="/var/run/netns/cni-54ea76e0-35a7-9697-cbcc-34524f223227" Jul 14 22:11:27.850541 containerd[1574]: 2025-07-14 22:11:27.810 [INFO][4335] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" iface="eth0" netns="/var/run/netns/cni-54ea76e0-35a7-9697-cbcc-34524f223227" Jul 14 22:11:27.850541 containerd[1574]: 2025-07-14 22:11:27.811 [INFO][4335] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" iface="eth0" netns="/var/run/netns/cni-54ea76e0-35a7-9697-cbcc-34524f223227" Jul 14 22:11:27.850541 containerd[1574]: 2025-07-14 22:11:27.811 [INFO][4335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Jul 14 22:11:27.850541 containerd[1574]: 2025-07-14 22:11:27.811 [INFO][4335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Jul 14 22:11:27.850541 containerd[1574]: 2025-07-14 22:11:27.837 [INFO][4343] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" HandleID="k8s-pod-network.4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Workload="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:27.850541 containerd[1574]: 2025-07-14 22:11:27.837 [INFO][4343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:27.850541 containerd[1574]: 2025-07-14 22:11:27.837 [INFO][4343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:11:27.850541 containerd[1574]: 2025-07-14 22:11:27.842 [WARNING][4343] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" HandleID="k8s-pod-network.4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Workload="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:27.850541 containerd[1574]: 2025-07-14 22:11:27.842 [INFO][4343] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" HandleID="k8s-pod-network.4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Workload="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:27.850541 containerd[1574]: 2025-07-14 22:11:27.844 [INFO][4343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:27.850541 containerd[1574]: 2025-07-14 22:11:27.847 [INFO][4335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Jul 14 22:11:27.851114 containerd[1574]: time="2025-07-14T22:11:27.850762032Z" level=info msg="TearDown network for sandbox \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\" successfully" Jul 14 22:11:27.851114 containerd[1574]: time="2025-07-14T22:11:27.850797719Z" level=info msg="StopPodSandbox for \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\" returns successfully" Jul 14 22:11:27.851692 kubelet[2717]: E0714 22:11:27.851224 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:27.852057 containerd[1574]: time="2025-07-14T22:11:27.851859285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-57dnv,Uid:9c73f2d5-835f-459b-8d41-d88723278569,Namespace:kube-system,Attempt:1,}" Jul 14 22:11:27.853670 systemd[1]: run-netns-cni\x2d54ea76e0\x2d35a7\x2d9697\x2dcbcc\x2d34524f223227.mount: Deactivated successfully. 
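[Annotation] The teardown above shows why these deletes are safe to repeat: when the IPAM plugin is asked to release a handle that was never written (or was already released), it logs the WARNING "Asked to release address but it doesn't exist. Ignoring" and moves on instead of failing the delete. A sketch of that idempotent behaviour, with a hypothetical in-memory store standing in for the datastore:

```go
package main

import "fmt"

// store stands in for the IPAM datastore, keyed by the same kind of
// HandleID seen in the entries above (one handle per container).
var store = map[string][]string{}

// release warns and returns instead of failing when the handle is absent,
// so a repeated or racing StopPodSandbox stays harmless.
func release(handleID string) {
	if _, ok := store[handleID]; !ok {
		fmt.Printf("[WARNING] Asked to release address but it doesn't exist. Ignoring HandleID=%q\n", handleID)
		return
	}
	delete(store, handleID)
	fmt.Printf("released addresses for HandleID=%q\n", handleID)
}

func main() {
	release("k8s-pod-network.deadbeef") // hypothetical handle, never claimed: warn and ignore
	store["h1"] = []string{"192.168.88.130"}
	release("h1") // claimed once: actually released
	release("h1") // releasing again is a warning, not an error
}
```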
Jul 14 22:11:27.970828 systemd-networkd[1245]: cali0eaa16e2ed4: Link UP Jul 14 22:11:27.971447 systemd-networkd[1245]: cali0eaa16e2ed4: Gained carrier Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.906 [INFO][4355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0 coredns-7c65d6cfc9- kube-system 9c73f2d5-835f-459b-8d41-d88723278569 963 0 2025-07-14 22:10:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-57dnv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0eaa16e2ed4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-57dnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--57dnv-" Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.906 [INFO][4355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-57dnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.934 [INFO][4366] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" HandleID="k8s-pod-network.472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" Workload="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.934 [INFO][4366] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" HandleID="k8s-pod-network.472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" Workload="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ae180), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-57dnv", "timestamp":"2025-07-14 22:11:27.934415615 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.934 [INFO][4366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.934 [INFO][4366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.934 [INFO][4366] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.940 [INFO][4366] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" host="localhost" Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.944 [INFO][4366] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.948 [INFO][4366] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.950 [INFO][4366] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.952 [INFO][4366] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.952 [INFO][4366] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" host="localhost" Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.953 [INFO][4366] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2 Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.959 [INFO][4366] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" host="localhost" Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.964 [INFO][4366] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" host="localhost" Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.964 [INFO][4366] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" host="localhost" Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.964 [INFO][4366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
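[Annotation] The recurring kubelet dns.go entries ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") mean the node's resolv.conf lists more nameservers than a pod can use, so kubelet truncates the list it applies. A sketch of that clamp, assuming the conventional limit of three nameservers honoured by glibc resolvers; the constant is this sketch's assumption, not taken from the log:

```go
package main

import "fmt"

// maxNameservers is the conventional resolver limit (glibc reads at most
// three nameserver lines); kubelet clamps a pod's resolv.conf to it and
// emits the "Nameserver limits exceeded" event seen above.
const maxNameservers = 3

func clampNameservers(ns []string) (applied []string, truncated bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// Hypothetical node resolv.conf with one nameserver too many.
	node := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	applied, truncated := clampNameservers(node)
	fmt.Println("applied:", applied, "truncated:", truncated)
	// applied: [1.1.1.1 1.0.0.1 8.8.8.8], matching the applied line in the log
}
```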
Jul 14 22:11:27.986776 containerd[1574]: 2025-07-14 22:11:27.964 [INFO][4366] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" HandleID="k8s-pod-network.472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" Workload="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:27.987447 containerd[1574]: 2025-07-14 22:11:27.968 [INFO][4355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-57dnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9c73f2d5-835f-459b-8d41-d88723278569", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 10, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-57dnv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0eaa16e2ed4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:27.987447 containerd[1574]: 2025-07-14 22:11:27.968 [INFO][4355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-57dnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:27.987447 containerd[1574]: 2025-07-14 22:11:27.968 [INFO][4355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0eaa16e2ed4 ContainerID="472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-57dnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:27.987447 containerd[1574]: 2025-07-14 22:11:27.971 [INFO][4355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-57dnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:27.987447 
containerd[1574]: 2025-07-14 22:11:27.972 [INFO][4355] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-57dnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9c73f2d5-835f-459b-8d41-d88723278569", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 10, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2", Pod:"coredns-7c65d6cfc9-57dnv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0eaa16e2ed4", MAC:"12:fd:d9:b5:76:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:27.987447 containerd[1574]: 2025-07-14 22:11:27.982 [INFO][4355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-57dnv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:28.005530 containerd[1574]: time="2025-07-14T22:11:28.005412622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:11:28.005530 containerd[1574]: time="2025-07-14T22:11:28.005482865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:11:28.005530 containerd[1574]: time="2025-07-14T22:11:28.005514254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:28.005752 containerd[1574]: time="2025-07-14T22:11:28.005622539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:28.034480 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:11:28.063700 containerd[1574]: time="2025-07-14T22:11:28.063646043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-57dnv,Uid:9c73f2d5-835f-459b-8d41-d88723278569,Namespace:kube-system,Attempt:1,} returns sandbox id \"472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2\"" Jul 14 22:11:28.064581 kubelet[2717]: E0714 22:11:28.064550 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:28.067221 containerd[1574]: time="2025-07-14T22:11:28.067190543Z" level=info msg="CreateContainer within sandbox \"472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:11:28.092221 containerd[1574]: time="2025-07-14T22:11:28.092157137Z" level=info msg="CreateContainer within sandbox \"472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a06cfcf6bd41e9bb151a337f928a3079984cbfddc0b5bdaa3a15eadfb44b9b6\"" Jul 14 22:11:28.092936 containerd[1574]: time="2025-07-14T22:11:28.092867451Z" level=info msg="StartContainer for \"9a06cfcf6bd41e9bb151a337f928a3079984cbfddc0b5bdaa3a15eadfb44b9b6\"" Jul 14 22:11:28.131718 systemd-networkd[1245]: vxlan.calico: Gained IPv6LL Jul 14 22:11:28.195475 containerd[1574]: time="2025-07-14T22:11:28.195424907Z" level=info msg="StartContainer for \"9a06cfcf6bd41e9bb151a337f928a3079984cbfddc0b5bdaa3a15eadfb44b9b6\" returns successfully" Jul 14 22:11:28.761563 containerd[1574]: time="2025-07-14T22:11:28.761478706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:28.764693 containerd[1574]: time="2025-07-14T22:11:28.764655141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 14 22:11:28.765206 containerd[1574]: time="2025-07-14T22:11:28.765171346Z" level=info msg="StopPodSandbox for \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\"" Jul 14 22:11:28.767808 containerd[1574]: time="2025-07-14T22:11:28.767657607Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:28.772362 containerd[1574]: time="2025-07-14T22:11:28.772267320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:28.773180 containerd[1574]: time="2025-07-14T22:11:28.773130051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.041582732s" Jul 14 22:11:28.773233 containerd[1574]: time="2025-07-14T22:11:28.773177882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" 
returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 14 22:11:28.775684 containerd[1574]: time="2025-07-14T22:11:28.775632061Z" level=info msg="CreateContainer within sandbox \"86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 14 22:11:28.794471 containerd[1574]: time="2025-07-14T22:11:28.794420017Z" level=info msg="CreateContainer within sandbox \"86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f6dcf59db0d47a8ff9bbac179ad21a4ccdfeec64665868832ec99a99f88140b8\"" Jul 14 22:11:28.795140 containerd[1574]: time="2025-07-14T22:11:28.794938847Z" level=info msg="StartContainer for \"f6dcf59db0d47a8ff9bbac179ad21a4ccdfeec64665868832ec99a99f88140b8\"" Jul 14 22:11:28.859064 containerd[1574]: 2025-07-14 22:11:28.817 [INFO][4479] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Jul 14 22:11:28.859064 containerd[1574]: 2025-07-14 22:11:28.817 [INFO][4479] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" iface="eth0" netns="/var/run/netns/cni-07064345-03f7-1a93-f58a-e79711a91711" Jul 14 22:11:28.859064 containerd[1574]: 2025-07-14 22:11:28.817 [INFO][4479] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" iface="eth0" netns="/var/run/netns/cni-07064345-03f7-1a93-f58a-e79711a91711" Jul 14 22:11:28.859064 containerd[1574]: 2025-07-14 22:11:28.818 [INFO][4479] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" iface="eth0" netns="/var/run/netns/cni-07064345-03f7-1a93-f58a-e79711a91711" Jul 14 22:11:28.859064 containerd[1574]: 2025-07-14 22:11:28.818 [INFO][4479] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Jul 14 22:11:28.859064 containerd[1574]: 2025-07-14 22:11:28.818 [INFO][4479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Jul 14 22:11:28.859064 containerd[1574]: 2025-07-14 22:11:28.844 [INFO][4501] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" HandleID="k8s-pod-network.232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Workload="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:28.859064 containerd[1574]: 2025-07-14 22:11:28.844 [INFO][4501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:28.859064 containerd[1574]: 2025-07-14 22:11:28.844 [INFO][4501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:28.859064 containerd[1574]: 2025-07-14 22:11:28.850 [WARNING][4501] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" HandleID="k8s-pod-network.232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Workload="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:28.859064 containerd[1574]: 2025-07-14 22:11:28.850 [INFO][4501] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" HandleID="k8s-pod-network.232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Workload="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:28.859064 containerd[1574]: 2025-07-14 22:11:28.852 [INFO][4501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:28.859064 containerd[1574]: 2025-07-14 22:11:28.856 [INFO][4479] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Jul 14 22:11:28.859592 containerd[1574]: time="2025-07-14T22:11:28.859250838Z" level=info msg="TearDown network for sandbox \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\" successfully" Jul 14 22:11:28.859592 containerd[1574]: time="2025-07-14T22:11:28.859299079Z" level=info msg="StopPodSandbox for \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\" returns successfully" Jul 14 22:11:28.860283 containerd[1574]: time="2025-07-14T22:11:28.860230700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b8d99f984-kmljb,Uid:2b64c079-46df-4d78-82e9-6807a889d4ac,Namespace:calico-system,Attempt:1,}" Jul 14 22:11:28.872212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1114284974.mount: Deactivated successfully. Jul 14 22:11:28.872946 systemd[1]: run-netns-cni\x2d07064345\x2d03f7\x2d1a93\x2df58a\x2de79711a91711.mount: Deactivated successfully. 
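[Annotation] The mount units deactivated above carry \x2d sequences because systemd escapes unit names: '/' becomes '-', and any byte outside [a-zA-Z0-9:_.], including a literal '-', becomes a \xXX escape. A simplified sketch of that encoding; systemd-escape(1) additionally strips a leading '/' and special-cases leading dots, which this omits:

```go
package main

import "fmt"

// unitEscape is a simplified version of systemd's unit-name escaping.
func unitEscape(path string) string {
	out := make([]byte, 0, len(path))
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.', c == ':':
			out = append(out, c)
		default:
			out = append(out, fmt.Sprintf(`\x%02x`, c)...)
		}
	}
	return string(out)
}

func main() {
	fmt.Println(unitEscape("run/netns/cni-07064345-03f7-1a93-f58a-e79711a91711") + ".mount")
	// -> run-netns-cni\x2d07064345\x2d03f7\x2d1a93\x2df58a\x2de79711a91711.mount,
	// matching the unit name deactivated in the entry above
}
```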
Jul 14 22:11:28.876187 containerd[1574]: time="2025-07-14T22:11:28.876152398Z" level=info msg="StartContainer for \"f6dcf59db0d47a8ff9bbac179ad21a4ccdfeec64665868832ec99a99f88140b8\" returns successfully" Jul 14 22:11:28.877751 containerd[1574]: time="2025-07-14T22:11:28.877712566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 14 22:11:28.987566 kubelet[2717]: E0714 22:11:28.987531 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:28.989140 systemd-networkd[1245]: caliede15d390e4: Link UP Jul 14 22:11:28.991637 systemd-networkd[1245]: caliede15d390e4: Gained carrier Jul 14 22:11:29.004088 kubelet[2717]: I0714 22:11:29.004019 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-57dnv" podStartSLOduration=44.003994282 podStartE2EDuration="44.003994282s" podCreationTimestamp="2025-07-14 22:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:11:29.002023476 +0000 UTC m=+48.320057722" watchObservedRunningTime="2025-07-14 22:11:29.003994282 +0000 UTC m=+48.322028528" Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.922 [INFO][4533] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0 calico-kube-controllers-b8d99f984- calico-system 2b64c079-46df-4d78-82e9-6807a889d4ac 978 0 2025-07-14 22:11:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b8d99f984 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-b8d99f984-kmljb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliede15d390e4 [] [] }} ContainerID="5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" Namespace="calico-system" Pod="calico-kube-controllers-b8d99f984-kmljb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-" Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.923 [INFO][4533] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" Namespace="calico-system" Pod="calico-kube-controllers-b8d99f984-kmljb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.948 [INFO][4547] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" HandleID="k8s-pod-network.5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" Workload="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.948 [INFO][4547] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" HandleID="k8s-pod-network.5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" Workload="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0001356f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-b8d99f984-kmljb", "timestamp":"2025-07-14 22:11:28.948697432 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.949 [INFO][4547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.949 [INFO][4547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.949 [INFO][4547] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.955 [INFO][4547] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" host="localhost" Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.960 [INFO][4547] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.964 [INFO][4547] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.967 [INFO][4547] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.969 [INFO][4547] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.969 [INFO][4547] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" host="localhost" Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.971 [INFO][4547] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332 Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.975 [INFO][4547] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" host="localhost" Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.981 [INFO][4547] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" host="localhost" Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.982 [INFO][4547] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" host="localhost" Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.982 [INFO][4547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
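[Annotation] The pod_startup_latency_tracker entries in this log (calico-node-zkljv earlier, coredns-7c65d6cfc9-57dnv just above) report two durations whose relationship can be checked by hand: podStartE2EDuration is observed-running minus creation, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). The formula is inferred from the logged values, which it reproduces exactly:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps for calico-node-zkljv, copied from the tracker entry.
	created := mustParse("2025-07-14 22:11:03 +0000 UTC")
	firstPull := mustParse("2025-07-14 22:11:03.549591788 +0000 UTC")
	lastPull := mustParse("2025-07-14 22:11:24.856990157 +0000 UTC")
	running := mustParse("2025-07-14 22:11:25.986847216 +0000 UTC")

	e2e := running.Sub(created)          // wall-clock start-up
	slo := e2e - lastPull.Sub(firstPull) // minus the image-pull window
	fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
	// podStartE2EDuration=22.986847216s podStartSLOduration=1.679448847s
}
```

For coredns above, both pull timestamps are the zero time (the image was already present), which is why its SLO duration equals its E2E duration at 44.003994282s.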
Jul 14 22:11:29.008797 containerd[1574]: 2025-07-14 22:11:28.982 [INFO][4547] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" HandleID="k8s-pod-network.5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" Workload="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:29.010032 containerd[1574]: 2025-07-14 22:11:28.985 [INFO][4533] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" Namespace="calico-system" Pod="calico-kube-controllers-b8d99f984-kmljb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0", GenerateName:"calico-kube-controllers-b8d99f984-", Namespace:"calico-system", SelfLink:"", UID:"2b64c079-46df-4d78-82e9-6807a889d4ac", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b8d99f984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-b8d99f984-kmljb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliede15d390e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:29.010032 containerd[1574]: 2025-07-14 22:11:28.985 [INFO][4533] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" Namespace="calico-system" Pod="calico-kube-controllers-b8d99f984-kmljb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:29.010032 containerd[1574]: 2025-07-14 22:11:28.985 [INFO][4533] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliede15d390e4 ContainerID="5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" Namespace="calico-system" Pod="calico-kube-controllers-b8d99f984-kmljb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:29.010032 containerd[1574]: 2025-07-14 22:11:28.992 [INFO][4533] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" Namespace="calico-system" Pod="calico-kube-controllers-b8d99f984-kmljb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:29.010032 containerd[1574]: 2025-07-14 22:11:28.993 [INFO][4533] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" Namespace="calico-system" Pod="calico-kube-controllers-b8d99f984-kmljb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0", GenerateName:"calico-kube-controllers-b8d99f984-", Namespace:"calico-system", SelfLink:"", UID:"2b64c079-46df-4d78-82e9-6807a889d4ac", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b8d99f984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332", Pod:"calico-kube-controllers-b8d99f984-kmljb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliede15d390e4", MAC:"72:60:93:9a:da:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:29.010032 containerd[1574]: 2025-07-14 22:11:29.005 [INFO][4533] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332" Namespace="calico-system" Pod="calico-kube-controllers-b8d99f984-kmljb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:29.041235 containerd[1574]: time="2025-07-14T22:11:29.039968572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:11:29.041235 containerd[1574]: time="2025-07-14T22:11:29.040040097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:11:29.041235 containerd[1574]: time="2025-07-14T22:11:29.040054645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:29.041235 containerd[1574]: time="2025-07-14T22:11:29.040164933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:29.068646 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:11:29.094421 containerd[1574]: time="2025-07-14T22:11:29.094377378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b8d99f984-kmljb,Uid:2b64c079-46df-4d78-82e9-6807a889d4ac,Namespace:calico-system,Attempt:1,} returns sandbox id \"5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332\"" Jul 14 22:11:29.411687 systemd-networkd[1245]: cali0eaa16e2ed4: Gained IPv6LL Jul 14 22:11:29.765568 containerd[1574]: time="2025-07-14T22:11:29.765360541Z" level=info msg="StopPodSandbox for \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\"" Jul 14 22:11:29.766247 containerd[1574]: time="2025-07-14T22:11:29.765967368Z" level=info msg="StopPodSandbox for \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\"" Jul 14 22:11:29.766866 containerd[1574]: time="2025-07-14T22:11:29.766798860Z" level=info msg="StopPodSandbox for \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\"" Jul 14 22:11:29.870582 containerd[1574]: 2025-07-14 22:11:29.818 [INFO][4639] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Jul 14 22:11:29.870582 containerd[1574]: 2025-07-14 22:11:29.819 [INFO][4639] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" iface="eth0" netns="/var/run/netns/cni-1a9d8584-b1b7-d0dd-8eac-9431f70c9b3b" Jul 14 22:11:29.870582 containerd[1574]: 2025-07-14 22:11:29.819 [INFO][4639] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" iface="eth0" netns="/var/run/netns/cni-1a9d8584-b1b7-d0dd-8eac-9431f70c9b3b" Jul 14 22:11:29.870582 containerd[1574]: 2025-07-14 22:11:29.819 [INFO][4639] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" iface="eth0" netns="/var/run/netns/cni-1a9d8584-b1b7-d0dd-8eac-9431f70c9b3b" Jul 14 22:11:29.870582 containerd[1574]: 2025-07-14 22:11:29.819 [INFO][4639] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Jul 14 22:11:29.870582 containerd[1574]: 2025-07-14 22:11:29.819 [INFO][4639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Jul 14 22:11:29.870582 containerd[1574]: 2025-07-14 22:11:29.851 [INFO][4666] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" HandleID="k8s-pod-network.5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Workload="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:29.870582 containerd[1574]: 2025-07-14 22:11:29.851 [INFO][4666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:29.870582 containerd[1574]: 2025-07-14 22:11:29.852 [INFO][4666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:29.870582 containerd[1574]: 2025-07-14 22:11:29.860 [WARNING][4666] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" HandleID="k8s-pod-network.5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Workload="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:29.870582 containerd[1574]: 2025-07-14 22:11:29.860 [INFO][4666] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" HandleID="k8s-pod-network.5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Workload="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:29.870582 containerd[1574]: 2025-07-14 22:11:29.861 [INFO][4666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:29.870582 containerd[1574]: 2025-07-14 22:11:29.865 [INFO][4639] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Jul 14 22:11:29.872757 containerd[1574]: time="2025-07-14T22:11:29.872708222Z" level=info msg="TearDown network for sandbox \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\" successfully" Jul 14 22:11:29.872920 containerd[1574]: time="2025-07-14T22:11:29.872805356Z" level=info msg="StopPodSandbox for \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\" returns successfully" Jul 14 22:11:29.873915 containerd[1574]: time="2025-07-14T22:11:29.873864739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9b695b56-szjht,Uid:ce018b19-dfa1-40d7-99a8-a9a4fa008b27,Namespace:calico-apiserver,Attempt:1,}" Jul 14 22:11:29.875629 systemd[1]: run-netns-cni\x2d1a9d8584\x2db1b7\x2dd0dd\x2d8eac\x2d9431f70c9b3b.mount: Deactivated successfully. Jul 14 22:11:29.880103 containerd[1574]: 2025-07-14 22:11:29.838 [INFO][4645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Jul 14 22:11:29.880103 containerd[1574]: 2025-07-14 22:11:29.838 [INFO][4645] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" iface="eth0" netns="/var/run/netns/cni-172172a5-3a2f-274d-f5db-65f3c4dc0fd2" Jul 14 22:11:29.880103 containerd[1574]: 2025-07-14 22:11:29.838 [INFO][4645] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" iface="eth0" netns="/var/run/netns/cni-172172a5-3a2f-274d-f5db-65f3c4dc0fd2" Jul 14 22:11:29.880103 containerd[1574]: 2025-07-14 22:11:29.839 [INFO][4645] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" iface="eth0" netns="/var/run/netns/cni-172172a5-3a2f-274d-f5db-65f3c4dc0fd2" Jul 14 22:11:29.880103 containerd[1574]: 2025-07-14 22:11:29.839 [INFO][4645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Jul 14 22:11:29.880103 containerd[1574]: 2025-07-14 22:11:29.839 [INFO][4645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Jul 14 22:11:29.880103 containerd[1574]: 2025-07-14 22:11:29.864 [INFO][4683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" HandleID="k8s-pod-network.3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Workload="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0" Jul 14 22:11:29.880103 containerd[1574]: 2025-07-14 22:11:29.864 [INFO][4683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:29.880103 containerd[1574]: 2025-07-14 22:11:29.864 [INFO][4683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:29.880103 containerd[1574]: 2025-07-14 22:11:29.872 [WARNING][4683] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" HandleID="k8s-pod-network.3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Workload="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0" Jul 14 22:11:29.880103 containerd[1574]: 2025-07-14 22:11:29.872 [INFO][4683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" HandleID="k8s-pod-network.3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Workload="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0" Jul 14 22:11:29.880103 containerd[1574]: 2025-07-14 22:11:29.873 [INFO][4683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:29.880103 containerd[1574]: 2025-07-14 22:11:29.877 [INFO][4645] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Jul 14 22:11:29.881122 containerd[1574]: time="2025-07-14T22:11:29.880183165Z" level=info msg="TearDown network for sandbox \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\" successfully" Jul 14 22:11:29.881122 containerd[1574]: time="2025-07-14T22:11:29.880206900Z" level=info msg="StopPodSandbox for \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\" returns successfully" Jul 14 22:11:29.881864 kubelet[2717]: E0714 22:11:29.881606 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:29.883196 containerd[1574]: time="2025-07-14T22:11:29.882568255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-swnzl,Uid:386dcad9-cbe7-41a3-a762-3963d0cad867,Namespace:kube-system,Attempt:1,}" Jul 14 22:11:29.883389 systemd[1]: run-netns-cni\x2d172172a5\x2d3a2f\x2d274d\x2df5db\x2d65f3c4dc0fd2.mount: Deactivated successfully. 
Jul 14 22:11:29.894984 containerd[1574]: 2025-07-14 22:11:29.835 [INFO][4643] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Jul 14 22:11:29.894984 containerd[1574]: 2025-07-14 22:11:29.835 [INFO][4643] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" iface="eth0" netns="/var/run/netns/cni-91e68867-2afb-c221-d765-e7174c91884c" Jul 14 22:11:29.894984 containerd[1574]: 2025-07-14 22:11:29.835 [INFO][4643] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" iface="eth0" netns="/var/run/netns/cni-91e68867-2afb-c221-d765-e7174c91884c" Jul 14 22:11:29.894984 containerd[1574]: 2025-07-14 22:11:29.836 [INFO][4643] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" iface="eth0" netns="/var/run/netns/cni-91e68867-2afb-c221-d765-e7174c91884c" Jul 14 22:11:29.894984 containerd[1574]: 2025-07-14 22:11:29.836 [INFO][4643] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Jul 14 22:11:29.894984 containerd[1574]: 2025-07-14 22:11:29.836 [INFO][4643] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Jul 14 22:11:29.894984 containerd[1574]: 2025-07-14 22:11:29.870 [INFO][4675] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" HandleID="k8s-pod-network.108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Workload="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:29.894984 containerd[1574]: 2025-07-14 22:11:29.871 [INFO][4675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:29.894984 containerd[1574]: 2025-07-14 22:11:29.873 [INFO][4675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:29.894984 containerd[1574]: 2025-07-14 22:11:29.883 [WARNING][4675] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" HandleID="k8s-pod-network.108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Workload="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:29.894984 containerd[1574]: 2025-07-14 22:11:29.884 [INFO][4675] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" HandleID="k8s-pod-network.108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Workload="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:29.894984 containerd[1574]: 2025-07-14 22:11:29.887 [INFO][4675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:29.894984 containerd[1574]: 2025-07-14 22:11:29.891 [INFO][4643] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Jul 14 22:11:29.895575 containerd[1574]: time="2025-07-14T22:11:29.895135808Z" level=info msg="TearDown network for sandbox \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\" successfully" Jul 14 22:11:29.895575 containerd[1574]: time="2025-07-14T22:11:29.895167688Z" level=info msg="StopPodSandbox for \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\" returns successfully" Jul 14 22:11:29.895989 containerd[1574]: time="2025-07-14T22:11:29.895946591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9b695b56-kzv98,Uid:9836a141-78f0-47fd-89e8-6b74ea1f1f07,Namespace:calico-apiserver,Attempt:1,}" Jul 14 22:11:29.899011 systemd[1]: run-netns-cni\x2d91e68867\x2d2afb\x2dc221\x2dd765\x2de7174c91884c.mount: Deactivated successfully. Jul 14 22:11:30.000913 kubelet[2717]: E0714 22:11:30.000880 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:30.023323 systemd-networkd[1245]: calia267a7c017a: Link UP Jul 14 22:11:30.026717 systemd-networkd[1245]: calia267a7c017a: Gained carrier Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:29.949 [INFO][4706] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0 coredns-7c65d6cfc9- kube-system 386dcad9-cbe7-41a3-a762-3963d0cad867 1000 0 2025-07-14 22:10:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-swnzl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia267a7c017a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-swnzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--swnzl-" Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:29.949 [INFO][4706] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-swnzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0" Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:29.982 [INFO][4740] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" HandleID="k8s-pod-network.d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" Workload="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0" Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:29.982 [INFO][4740] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" HandleID="k8s-pod-network.d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" Workload="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b9760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-swnzl", "timestamp":"2025-07-14 22:11:29.982113443 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:29.982 [INFO][4740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:29.982 [INFO][4740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:29.982 [INFO][4740] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:29.989 [INFO][4740] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" host="localhost" Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:29.995 [INFO][4740] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:29.999 [INFO][4740] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:30.001 [INFO][4740] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:30.003 [INFO][4740] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:30.003 [INFO][4740] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" host="localhost" Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:30.005 [INFO][4740] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2 Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:30.010 [INFO][4740] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" host="localhost" Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:30.017 [INFO][4740] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" host="localhost" Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:30.017 [INFO][4740] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" host="localhost" Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:30.017 [INFO][4740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:11:30.040390 containerd[1574]: 2025-07-14 22:11:30.017 [INFO][4740] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" HandleID="k8s-pod-network.d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" Workload="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0" Jul 14 22:11:30.041402 containerd[1574]: 2025-07-14 22:11:30.020 [INFO][4706] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-swnzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"386dcad9-cbe7-41a3-a762-3963d0cad867", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 10, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-swnzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia267a7c017a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:30.041402 containerd[1574]: 2025-07-14 22:11:30.021 [INFO][4706] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-swnzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0" Jul 14 22:11:30.041402 containerd[1574]: 2025-07-14 22:11:30.021 [INFO][4706] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia267a7c017a ContainerID="d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-swnzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0" Jul 14 22:11:30.041402 containerd[1574]: 2025-07-14 22:11:30.024 [INFO][4706] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-swnzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0" Jul 14 22:11:30.041402 
containerd[1574]: 2025-07-14 22:11:30.024 [INFO][4706] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-swnzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"386dcad9-cbe7-41a3-a762-3963d0cad867", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 10, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2", Pod:"coredns-7c65d6cfc9-swnzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia267a7c017a", MAC:"3e:1e:d4:75:57:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:30.041402 containerd[1574]: 2025-07-14 22:11:30.036 [INFO][4706] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-swnzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0" Jul 14 22:11:30.070190 containerd[1574]: time="2025-07-14T22:11:30.069941218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:11:30.070190 containerd[1574]: time="2025-07-14T22:11:30.070020708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:11:30.070190 containerd[1574]: time="2025-07-14T22:11:30.070045144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:30.070419 containerd[1574]: time="2025-07-14T22:11:30.070137048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:30.100610 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:11:30.128250 systemd-networkd[1245]: calia14962a5d99: Link UP Jul 14 22:11:30.129413 systemd-networkd[1245]: calia14962a5d99: Gained carrier Jul 14 22:11:30.137939 containerd[1574]: time="2025-07-14T22:11:30.137887479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-swnzl,Uid:386dcad9-cbe7-41a3-a762-3963d0cad867,Namespace:kube-system,Attempt:1,} returns sandbox id \"d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2\"" Jul 14 22:11:30.139071 kubelet[2717]: E0714 22:11:30.139041 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:30.141470 containerd[1574]: time="2025-07-14T22:11:30.141430239Z" level=info msg="CreateContainer within sandbox \"d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:29.952 [INFO][4696] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0 calico-apiserver-5b9b695b56- calico-apiserver ce018b19-dfa1-40d7-99a8-a9a4fa008b27 999 0 2025-07-14 22:11:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b9b695b56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b9b695b56-szjht eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia14962a5d99 [] [] }} ContainerID="5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-szjht" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--szjht-" Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:29.953 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-szjht" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:29.990 [INFO][4745] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" HandleID="k8s-pod-network.5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" Workload="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:29.990 [INFO][4745] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" HandleID="k8s-pod-network.5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" Workload="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139740), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b9b695b56-szjht", "timestamp":"2025-07-14 22:11:29.990025975 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:29.990 [INFO][4745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:30.017 [INFO][4745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:30.017 [INFO][4745] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:30.090 [INFO][4745] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" host="localhost" Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:30.096 [INFO][4745] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:30.103 [INFO][4745] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:30.104 [INFO][4745] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:30.106 [INFO][4745] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:30.106 [INFO][4745] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" host="localhost" Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:30.108 [INFO][4745] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:30.111 [INFO][4745] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" host="localhost" Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:30.118 [INFO][4745] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" host="localhost" Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:30.118 [INFO][4745] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" host="localhost" Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:30.119 [INFO][4745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:11:30.146909 containerd[1574]: 2025-07-14 22:11:30.119 [INFO][4745] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" HandleID="k8s-pod-network.5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" Workload="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:30.147449 containerd[1574]: 2025-07-14 22:11:30.122 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-szjht" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0", GenerateName:"calico-apiserver-5b9b695b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce018b19-dfa1-40d7-99a8-a9a4fa008b27", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9b695b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b9b695b56-szjht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia14962a5d99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:30.147449 containerd[1574]: 2025-07-14 22:11:30.122 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-szjht" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:30.147449 containerd[1574]: 2025-07-14 22:11:30.122 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia14962a5d99 ContainerID="5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-szjht" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:30.147449 containerd[1574]: 2025-07-14 22:11:30.130 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-szjht" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:30.147449 containerd[1574]: 2025-07-14 22:11:30.130 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-szjht" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0", GenerateName:"calico-apiserver-5b9b695b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce018b19-dfa1-40d7-99a8-a9a4fa008b27", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9b695b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c", Pod:"calico-apiserver-5b9b695b56-szjht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia14962a5d99", MAC:"ce:25:9a:34:77:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:30.147449 containerd[1574]: 2025-07-14 22:11:30.138 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-szjht" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:30.164115 containerd[1574]: time="2025-07-14T22:11:30.164082599Z" level=info msg="CreateContainer within sandbox \"d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ecd74d33cee86b405ace91d89f51353bc6f7718809ebc610d4ec83208400e8e0\"" Jul 14 22:11:30.166031 containerd[1574]: time="2025-07-14T22:11:30.165026184Z" level=info msg="StartContainer for \"ecd74d33cee86b405ace91d89f51353bc6f7718809ebc610d4ec83208400e8e0\"" Jul 14 22:11:30.170524 containerd[1574]: time="2025-07-14T22:11:30.170429042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:11:30.170627 containerd[1574]: time="2025-07-14T22:11:30.170494216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:11:30.170627 containerd[1574]: time="2025-07-14T22:11:30.170522129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:30.170627 containerd[1574]: time="2025-07-14T22:11:30.170611207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:30.213115 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:11:30.236002 systemd-networkd[1245]: cali0d1a90f837c: Link UP Jul 14 22:11:30.237041 systemd-networkd[1245]: cali0d1a90f837c: Gained carrier Jul 14 22:11:30.241423 containerd[1574]: time="2025-07-14T22:11:30.241386299Z" level=info msg="StartContainer for \"ecd74d33cee86b405ace91d89f51353bc6f7718809ebc610d4ec83208400e8e0\" returns successfully" Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:29.968 [INFO][4719] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0 calico-apiserver-5b9b695b56- calico-apiserver 9836a141-78f0-47fd-89e8-6b74ea1f1f07 1001 0 2025-07-14 22:11:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b9b695b56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b9b695b56-kzv98 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0d1a90f837c [] [] }} ContainerID="23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-kzv98" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-" Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:29.968 [INFO][4719] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-kzv98" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:29.996 [INFO][4756] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" HandleID="k8s-pod-network.23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" Workload="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:29.996 [INFO][4756] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" HandleID="k8s-pod-network.23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" Workload="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002def20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b9b695b56-kzv98", "timestamp":"2025-07-14 22:11:29.996155945 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:29.996 [INFO][4756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:30.119 [INFO][4756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:30.119 [INFO][4756] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:30.191 [INFO][4756] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" host="localhost" Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:30.199 [INFO][4756] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:30.204 [INFO][4756] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:30.206 [INFO][4756] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:30.209 [INFO][4756] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:30.209 [INFO][4756] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" host="localhost" Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:30.212 [INFO][4756] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8 Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:30.216 [INFO][4756] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" host="localhost" Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:30.224 [INFO][4756] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" host="localhost" Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:30.224 [INFO][4756] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" host="localhost" Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:30.224 [INFO][4756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:11:30.256229 containerd[1574]: 2025-07-14 22:11:30.224 [INFO][4756] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" HandleID="k8s-pod-network.23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" Workload="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:30.258855 containerd[1574]: 2025-07-14 22:11:30.227 [INFO][4719] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-kzv98" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0", GenerateName:"calico-apiserver-5b9b695b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"9836a141-78f0-47fd-89e8-6b74ea1f1f07", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9b695b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b9b695b56-kzv98", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d1a90f837c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:30.258855 containerd[1574]: 2025-07-14 22:11:30.227 [INFO][4719] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-kzv98" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:30.258855 containerd[1574]: 2025-07-14 22:11:30.228 [INFO][4719] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d1a90f837c ContainerID="23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-kzv98" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:30.258855 containerd[1574]: 2025-07-14 22:11:30.237 [INFO][4719] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-kzv98" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:30.258855 containerd[1574]: 2025-07-14 22:11:30.238 [INFO][4719] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-kzv98" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0", GenerateName:"calico-apiserver-5b9b695b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"9836a141-78f0-47fd-89e8-6b74ea1f1f07", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9b695b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8", Pod:"calico-apiserver-5b9b695b56-kzv98", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d1a90f837c", MAC:"76:df:d7:93:3f:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:30.258855 containerd[1574]: 2025-07-14 22:11:30.251 [INFO][4719] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8" Namespace="calico-apiserver" Pod="calico-apiserver-5b9b695b56-kzv98" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:30.258855 containerd[1574]: time="2025-07-14T22:11:30.257969871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9b695b56-szjht,Uid:ce018b19-dfa1-40d7-99a8-a9a4fa008b27,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c\"" Jul 14 22:11:30.287558 containerd[1574]: time="2025-07-14T22:11:30.286707932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:11:30.287558 containerd[1574]: time="2025-07-14T22:11:30.286781681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:11:30.287558 containerd[1574]: time="2025-07-14T22:11:30.286794054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:30.287558 containerd[1574]: time="2025-07-14T22:11:30.286896448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:30.319336 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:11:30.348034 containerd[1574]: time="2025-07-14T22:11:30.347990591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9b695b56-kzv98,Uid:9836a141-78f0-47fd-89e8-6b74ea1f1f07,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8\"" Jul 14 22:11:30.479422 kubelet[2717]: I0714 22:11:30.479377 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:11:31.006423 kubelet[2717]: E0714 22:11:31.006241 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:31.008656 kubelet[2717]: E0714 22:11:31.008623 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:31.011797 systemd-networkd[1245]: caliede15d390e4: Gained IPv6LL Jul 14 22:11:31.040069 kubelet[2717]: I0714 22:11:31.039835 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-swnzl" podStartSLOduration=46.039811785 podStartE2EDuration="46.039811785s" podCreationTimestamp="2025-07-14 22:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:11:31.023167531 +0000 UTC m=+50.341201777" watchObservedRunningTime="2025-07-14 22:11:31.039811785 +0000 UTC m=+50.357846031" Jul 14 22:11:31.391824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078936995.mount: Deactivated successfully. 
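[Editorial note] The repeated kubelet "Nameserver limits exceeded" errors above come from the resolver cap: the classic glibc stub resolver honors at most three nameserver entries (MAXNS = 3), so when the node's resolv.conf plus cluster DNS yields more, the kubelet keeps the first three and logs the applied line, as seen here with 1.1.1.1 1.0.0.1 8.8.8.8. A minimal sketch of that check, in Go — illustrative only, not the kubelet's actual dns.go code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the conventional glibc resolver limit (MAXNS = 3),
// the same cap behind the kubelet warning in the log above.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if len(servers) > maxNameservers {
		// Same shape as the kubelet message: keep the first three, report the rest.
		fmt.Printf("nameserver limits exceeded: applying %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Printf("nameservers within limit: %v\n", servers)
	}
}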
Jul 14 22:11:31.459688 systemd-networkd[1245]: calia267a7c017a: Gained IPv6LL Jul 14 22:11:31.498068 containerd[1574]: time="2025-07-14T22:11:31.498024313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:31.498827 containerd[1574]: time="2025-07-14T22:11:31.498762770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 14 22:11:31.499871 containerd[1574]: time="2025-07-14T22:11:31.499840168Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:31.501900 containerd[1574]: time="2025-07-14T22:11:31.501869477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:31.502713 containerd[1574]: time="2025-07-14T22:11:31.502684428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.624946814s" Jul 14 22:11:31.502749 containerd[1574]: time="2025-07-14T22:11:31.502715017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 14 22:11:31.503753 containerd[1574]: time="2025-07-14T22:11:31.503706562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 14 22:11:31.504662 containerd[1574]: time="2025-07-14T22:11:31.504635579Z" level=info msg="CreateContainer within sandbox \"86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 14 22:11:31.520083 containerd[1574]: time="2025-07-14T22:11:31.520038847Z" level=info msg="CreateContainer within sandbox \"86b814240f1b3ac421d5f258adacf06dc7c744861d25dca7247186c3807df6ec\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"62d1a3fcb22031fbe7c5e391249d9d7a63565e5fad8913b17768e48e3880b347\"" Jul 14 22:11:31.520596 containerd[1574]: time="2025-07-14T22:11:31.520567918Z" level=info msg="StartContainer for \"62d1a3fcb22031fbe7c5e391249d9d7a63565e5fad8913b17768e48e3880b347\"" Jul 14 22:11:31.597040 containerd[1574]: time="2025-07-14T22:11:31.596989932Z" level=info msg="StartContainer for \"62d1a3fcb22031fbe7c5e391249d9d7a63565e5fad8913b17768e48e3880b347\" returns successfully" Jul 14 22:11:31.715699 systemd-networkd[1245]: calia14962a5d99: Gained IPv6LL Jul 14 22:11:31.764694 containerd[1574]: time="2025-07-14T22:11:31.764618095Z" level=info msg="StopPodSandbox for \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\"" Jul 14 22:11:31.842978 containerd[1574]: 2025-07-14 22:11:31.802 [INFO][5064] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Jul 14 22:11:31.842978 containerd[1574]: 2025-07-14 22:11:31.803 [INFO][5064] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" iface="eth0" netns="/var/run/netns/cni-994cbdf2-0778-0343-cf87-b51dcff5d185" Jul 14 22:11:31.842978 containerd[1574]: 2025-07-14 22:11:31.803 [INFO][5064] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" iface="eth0" netns="/var/run/netns/cni-994cbdf2-0778-0343-cf87-b51dcff5d185" Jul 14 22:11:31.842978 containerd[1574]: 2025-07-14 22:11:31.803 [INFO][5064] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" iface="eth0" netns="/var/run/netns/cni-994cbdf2-0778-0343-cf87-b51dcff5d185" Jul 14 22:11:31.842978 containerd[1574]: 2025-07-14 22:11:31.803 [INFO][5064] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Jul 14 22:11:31.842978 containerd[1574]: 2025-07-14 22:11:31.803 [INFO][5064] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Jul 14 22:11:31.842978 containerd[1574]: 2025-07-14 22:11:31.826 [INFO][5073] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" HandleID="k8s-pod-network.6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Workload="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0" Jul 14 22:11:31.842978 containerd[1574]: 2025-07-14 22:11:31.826 [INFO][5073] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:31.842978 containerd[1574]: 2025-07-14 22:11:31.826 [INFO][5073] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:31.842978 containerd[1574]: 2025-07-14 22:11:31.834 [WARNING][5073] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" HandleID="k8s-pod-network.6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Workload="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0" Jul 14 22:11:31.842978 containerd[1574]: 2025-07-14 22:11:31.834 [INFO][5073] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" HandleID="k8s-pod-network.6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Workload="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0" Jul 14 22:11:31.842978 containerd[1574]: 2025-07-14 22:11:31.836 [INFO][5073] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:31.842978 containerd[1574]: 2025-07-14 22:11:31.839 [INFO][5064] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Jul 14 22:11:31.843806 containerd[1574]: time="2025-07-14T22:11:31.843664543Z" level=info msg="TearDown network for sandbox \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\" successfully" Jul 14 22:11:31.843806 containerd[1574]: time="2025-07-14T22:11:31.843700310Z" level=info msg="StopPodSandbox for \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\" returns successfully" Jul 14 22:11:31.844681 containerd[1574]: time="2025-07-14T22:11:31.844636111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-4trbc,Uid:f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed,Namespace:calico-system,Attempt:1,}" Jul 14 22:11:31.878868 systemd[1]: run-netns-cni\x2d994cbdf2\x2d0778\x2d0343\x2dcf87\x2db51dcff5d185.mount: Deactivated successfully. Jul 14 22:11:31.907703 systemd-networkd[1245]: cali0d1a90f837c: Gained IPv6LL Jul 14 22:11:31.959205 systemd-networkd[1245]: cali4f7640eaf01: Link UP Jul 14 22:11:31.959864 systemd-networkd[1245]: cali4f7640eaf01: Gained carrier Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.896 [INFO][5081] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--4trbc-eth0 goldmane-58fd7646b9- calico-system f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed 1039 0 2025-07-14 22:11:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-4trbc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4f7640eaf01 [] [] }} ContainerID="5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" Namespace="calico-system" Pod="goldmane-58fd7646b9-4trbc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4trbc-" Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.896 [INFO][5081] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" Namespace="calico-system" Pod="goldmane-58fd7646b9-4trbc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0" Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.924 [INFO][5096] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" HandleID="k8s-pod-network.5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" Workload="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0" Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.924 [INFO][5096] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" HandleID="k8s-pod-network.5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" Workload="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-4trbc", "timestamp":"2025-07-14 22:11:31.924456784 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:11:31.974119 
containerd[1574]: 2025-07-14 22:11:31.924 [INFO][5096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.924 [INFO][5096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.924 [INFO][5096] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.930 [INFO][5096] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" host="localhost" Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.935 [INFO][5096] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.939 [INFO][5096] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.940 [INFO][5096] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.943 [INFO][5096] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.943 [INFO][5096] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" host="localhost" Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.944 [INFO][5096] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830 Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.948 [INFO][5096] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" host="localhost" Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.954 [INFO][5096] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" host="localhost" Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.954 [INFO][5096] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" host="localhost" Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.954 [INFO][5096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
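[Editorial note] The allocation walk above (trying the host's affinity for 192.168.88.128/26, loading the block, then claiming 192.168.88.135) is ordinary CIDR arithmetic: a /26 block spans 64 addresses, and membership in the affine block decides whether it can serve the request. A small sketch of that math with Go's net/netip — illustrative, not Calico's IPAM implementation:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block this host holds an affinity for, per the IPAM log above.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// A /26 leaves 32-26 = 6 host bits, i.e. 64 addresses in the block.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size)

	// The address the plugin claimed for goldmane-58fd7646b9-4trbc.
	ip := netip.MustParseAddr("192.168.88.135")
	fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip)) // true

	// Pod addresses are recorded as host routes, which is why the endpoint
	// lists IPNetworks:["192.168.88.135/32"] rather than the block CIDR.
	fmt.Println(netip.PrefixFrom(ip, 32)) // 192.168.88.135/32
}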
Jul 14 22:11:31.974119 containerd[1574]: 2025-07-14 22:11:31.954 [INFO][5096] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" HandleID="k8s-pod-network.5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" Workload="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0" Jul 14 22:11:31.975570 containerd[1574]: 2025-07-14 22:11:31.957 [INFO][5081] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" Namespace="calico-system" Pod="goldmane-58fd7646b9-4trbc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--4trbc-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-4trbc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f7640eaf01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:31.975570 containerd[1574]: 2025-07-14 22:11:31.957 [INFO][5081] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" Namespace="calico-system" Pod="goldmane-58fd7646b9-4trbc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0" Jul 14 22:11:31.975570 containerd[1574]: 2025-07-14 22:11:31.957 [INFO][5081] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f7640eaf01 ContainerID="5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" Namespace="calico-system" Pod="goldmane-58fd7646b9-4trbc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0" Jul 14 22:11:31.975570 containerd[1574]: 2025-07-14 22:11:31.959 [INFO][5081] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" Namespace="calico-system" Pod="goldmane-58fd7646b9-4trbc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0" Jul 14 22:11:31.975570 containerd[1574]: 2025-07-14 22:11:31.960 [INFO][5081] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" Namespace="calico-system" Pod="goldmane-58fd7646b9-4trbc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--4trbc-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830", Pod:"goldmane-58fd7646b9-4trbc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f7640eaf01", MAC:"da:2f:d6:9a:04:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:31.975570 containerd[1574]: 2025-07-14 22:11:31.970 [INFO][5081] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830" Namespace="calico-system" Pod="goldmane-58fd7646b9-4trbc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0" Jul 14 22:11:31.994864 containerd[1574]: time="2025-07-14T22:11:31.994759852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:11:31.994864 containerd[1574]: time="2025-07-14T22:11:31.994822341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:11:31.994864 containerd[1574]: time="2025-07-14T22:11:31.994842669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:31.995046 containerd[1574]: time="2025-07-14T22:11:31.994937829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:32.015741 kubelet[2717]: E0714 22:11:32.015374 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:32.021809 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:11:32.054240 containerd[1574]: time="2025-07-14T22:11:32.054149699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-4trbc,Uid:f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed,Namespace:calico-system,Attempt:1,} returns sandbox id \"5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830\"" Jul 14 22:11:32.765276 containerd[1574]: time="2025-07-14T22:11:32.765159463Z" level=info msg="StopPodSandbox for \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\"" Jul 14 22:11:33.017814 kubelet[2717]: E0714 22:11:33.017688 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:11:33.253012 kubelet[2717]: I0714 22:11:33.252557 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-85b59d4bd5-p6dr7" podStartSLOduration=2.479388923 podStartE2EDuration="7.252539167s" podCreationTimestamp="2025-07-14 22:11:26 +0000 UTC" firstStartedPulling="2025-07-14 22:11:26.730432163 +0000 UTC m=+46.048466409" lastFinishedPulling="2025-07-14 22:11:31.503582407 +0000 UTC m=+50.821616653" observedRunningTime="2025-07-14 22:11:32.027391341 +0000 UTC m=+51.345425587" watchObservedRunningTime="2025-07-14 22:11:33.252539167 +0000 UTC m=+52.570573413" Jul 14 22:11:33.338890 containerd[1574]: 2025-07-14 22:11:33.253 [INFO][5174] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Jul 14 22:11:33.338890 containerd[1574]: 2025-07-14 22:11:33.253 [INFO][5174] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" iface="eth0" netns="/var/run/netns/cni-7b68efc8-125d-ed60-9cf9-0eecb032fb20" Jul 14 22:11:33.338890 containerd[1574]: 2025-07-14 22:11:33.253 [INFO][5174] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" iface="eth0" netns="/var/run/netns/cni-7b68efc8-125d-ed60-9cf9-0eecb032fb20" Jul 14 22:11:33.338890 containerd[1574]: 2025-07-14 22:11:33.254 [INFO][5174] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" iface="eth0" netns="/var/run/netns/cni-7b68efc8-125d-ed60-9cf9-0eecb032fb20" Jul 14 22:11:33.338890 containerd[1574]: 2025-07-14 22:11:33.254 [INFO][5174] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Jul 14 22:11:33.338890 containerd[1574]: 2025-07-14 22:11:33.254 [INFO][5174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Jul 14 22:11:33.338890 containerd[1574]: 2025-07-14 22:11:33.284 [INFO][5184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" HandleID="k8s-pod-network.c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Workload="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:33.338890 containerd[1574]: 2025-07-14 22:11:33.284 [INFO][5184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:33.338890 containerd[1574]: 2025-07-14 22:11:33.284 [INFO][5184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:33.338890 containerd[1574]: 2025-07-14 22:11:33.315 [WARNING][5184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" HandleID="k8s-pod-network.c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Workload="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:33.338890 containerd[1574]: 2025-07-14 22:11:33.315 [INFO][5184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" HandleID="k8s-pod-network.c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Workload="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:33.338890 containerd[1574]: 2025-07-14 22:11:33.322 [INFO][5184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:33.338890 containerd[1574]: 2025-07-14 22:11:33.335 [INFO][5174] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Jul 14 22:11:33.341972 containerd[1574]: time="2025-07-14T22:11:33.339086885Z" level=info msg="TearDown network for sandbox \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\" successfully" Jul 14 22:11:33.341972 containerd[1574]: time="2025-07-14T22:11:33.339125639Z" level=info msg="StopPodSandbox for \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\" returns successfully" Jul 14 22:11:33.341972 containerd[1574]: time="2025-07-14T22:11:33.339992359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhk87,Uid:7829a7f5-eec5-446a-bd2c-44244faa0a80,Namespace:calico-system,Attempt:1,}" Jul 14 22:11:33.342896 systemd[1]: run-netns-cni\x2d7b68efc8\x2d125d\x2ded60\x2d9cf9\x2d0eecb032fb20.mount: Deactivated successfully. 
Jul 14 22:11:33.507729 systemd-networkd[1245]: cali4f7640eaf01: Gained IPv6LL Jul 14 22:11:34.255974 systemd-networkd[1245]: calic0c36317d83: Link UP Jul 14 22:11:34.257277 systemd-networkd[1245]: calic0c36317d83: Gained carrier Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.189 [INFO][5193] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bhk87-eth0 csi-node-driver- calico-system 7829a7f5-eec5-446a-bd2c-44244faa0a80 1057 0 2025-07-14 22:11:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bhk87 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic0c36317d83 [] [] }} ContainerID="3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" Namespace="calico-system" Pod="csi-node-driver-bhk87" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhk87-" Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.189 [INFO][5193] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" Namespace="calico-system" Pod="csi-node-driver-bhk87" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.214 [INFO][5205] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" HandleID="k8s-pod-network.3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" Workload="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.214 [INFO][5205] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" HandleID="k8s-pod-network.3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" Workload="localhost-k8s-csi--node--driver--bhk87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bhk87", "timestamp":"2025-07-14 22:11:34.214003884 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.214 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.214 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.214 [INFO][5205] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.219 [INFO][5205] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" host="localhost" Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.224 [INFO][5205] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.229 [INFO][5205] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.233 [INFO][5205] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.235 [INFO][5205] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.235 [INFO][5205] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" host="localhost" Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.236 [INFO][5205] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016 Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.241 [INFO][5205] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" host="localhost" Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.248 [INFO][5205] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" host="localhost" Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.248 [INFO][5205] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" host="localhost" Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.248 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
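[Editorial note] The host-side interface names systemd-networkd keeps reporting (cali0d1a90f837c, cali4f7640eaf01, calic0c36317d83) are deterministic: Calico derives them by hashing the workload endpoint identity and truncating, because Linux caps interface names at 15 bytes (IFNAMSIZ minus the trailing NUL). The sketch below assumes a "cali" prefix plus the first 11 hex characters of a SHA-1 digest; the exact hash and input are an assumption for illustration, not taken from these logs:

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName derives a stable host-side interface name from a workload ID.
// "cali" (4 bytes) + 11 hex characters fits the 15-byte Linux limit exactly.
// SHA-1 over the workload ID is an assumed input, used here for illustration.
func vethName(workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	// Hypothetical workload ID; the real input depends on the orchestrator.
	name := vethName("calico-system/csi-node-driver-bhk87")
	fmt.Println(name, "len:", len(name)) // always 15 bytes
}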
Jul 14 22:11:34.277571 containerd[1574]: 2025-07-14 22:11:34.248 [INFO][5205] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" HandleID="k8s-pod-network.3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" Workload="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:34.278485 containerd[1574]: 2025-07-14 22:11:34.253 [INFO][5193] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" Namespace="calico-system" Pod="csi-node-driver-bhk87" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhk87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bhk87-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7829a7f5-eec5-446a-bd2c-44244faa0a80", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bhk87", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic0c36317d83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:34.278485 containerd[1574]: 2025-07-14 22:11:34.253 [INFO][5193] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" Namespace="calico-system" Pod="csi-node-driver-bhk87" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:34.278485 containerd[1574]: 2025-07-14 22:11:34.253 [INFO][5193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0c36317d83 ContainerID="3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" Namespace="calico-system" Pod="csi-node-driver-bhk87" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:34.278485 containerd[1574]: 2025-07-14 22:11:34.257 [INFO][5193] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" Namespace="calico-system" Pod="csi-node-driver-bhk87" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:34.278485 containerd[1574]: 2025-07-14 22:11:34.258 [INFO][5193] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" Namespace="calico-system" Pod="csi-node-driver-bhk87" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--bhk87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bhk87-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7829a7f5-eec5-446a-bd2c-44244faa0a80", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016", Pod:"csi-node-driver-bhk87", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic0c36317d83", MAC:"16:48:0e:e3:f8:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:34.278485 containerd[1574]: 2025-07-14 22:11:34.270 [INFO][5193] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016" Namespace="calico-system" Pod="csi-node-driver-bhk87" WorkloadEndpoint="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:34.297665 containerd[1574]: time="2025-07-14T22:11:34.297065847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:11:34.297665 containerd[1574]: time="2025-07-14T22:11:34.297140709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:11:34.297665 containerd[1574]: time="2025-07-14T22:11:34.297155257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:34.297665 containerd[1574]: time="2025-07-14T22:11:34.297266407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:11:34.325891 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:11:34.338980 containerd[1574]: time="2025-07-14T22:11:34.338942313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhk87,Uid:7829a7f5-eec5-446a-bd2c-44244faa0a80,Namespace:calico-system,Attempt:1,} returns sandbox id \"3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016\"" Jul 14 22:11:35.491663 systemd-networkd[1245]: calic0c36317d83: Gained IPv6LL Jul 14 22:11:37.139918 containerd[1574]: time="2025-07-14T22:11:37.139445304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:37.151583 containerd[1574]: time="2025-07-14T22:11:37.151454189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 14 22:11:37.178176 containerd[1574]: time="2025-07-14T22:11:37.178107678Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:37.205686 containerd[1574]: time="2025-07-14T22:11:37.205601951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:37.206320 containerd[1574]: time="2025-07-14T22:11:37.206271078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.702534769s" Jul 14 22:11:37.206360 containerd[1574]: time="2025-07-14T22:11:37.206322595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 14 22:11:37.207909 containerd[1574]: time="2025-07-14T22:11:37.207860588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 22:11:37.214397 containerd[1574]: time="2025-07-14T22:11:37.214363475Z" level=info msg="CreateContainer within sandbox \"5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 14 22:11:37.265916 containerd[1574]: time="2025-07-14T22:11:37.265858843Z" level=info msg="CreateContainer within sandbox \"5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2676b880f12192875ebb1757aa21a079bd89e95bc9eed8bb1f0d75239f9cc2d0\"" Jul 14 22:11:37.266791 containerd[1574]: time="2025-07-14T22:11:37.266735834Z" level=info msg="StartContainer for \"2676b880f12192875ebb1757aa21a079bd89e95bc9eed8bb1f0d75239f9cc2d0\"" Jul 14 22:11:37.419751 containerd[1574]: time="2025-07-14T22:11:37.419565074Z" level=info msg="StartContainer for \"2676b880f12192875ebb1757aa21a079bd89e95bc9eed8bb1f0d75239f9cc2d0\" returns successfully" Jul 14 22:11:38.096450 kubelet[2717]: I0714 22:11:38.095827 2717 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b8d99f984-kmljb" podStartSLOduration=26.984636741 podStartE2EDuration="35.095802839s" podCreationTimestamp="2025-07-14 22:11:03 +0000 UTC" firstStartedPulling="2025-07-14 22:11:29.095930424 +0000 UTC m=+48.413964670" lastFinishedPulling="2025-07-14 22:11:37.207096522 +0000 UTC m=+56.525130768" observedRunningTime="2025-07-14 22:11:38.047558851 +0000 UTC m=+57.365593097" watchObservedRunningTime="2025-07-14 22:11:38.095802839 +0000 UTC m=+57.413837085" Jul 14 22:11:40.759853 containerd[1574]: time="2025-07-14T22:11:40.759809474Z" level=info msg="StopPodSandbox for \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\"" Jul 14 22:11:40.940972 containerd[1574]: 2025-07-14 22:11:40.899 [WARNING][5363] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0", GenerateName:"calico-apiserver-5b9b695b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce018b19-dfa1-40d7-99a8-a9a4fa008b27", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9b695b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c", Pod:"calico-apiserver-5b9b695b56-szjht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia14962a5d99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:40.940972 containerd[1574]: 2025-07-14 22:11:40.900 [INFO][5363] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Jul 14 22:11:40.940972 containerd[1574]: 2025-07-14 22:11:40.900 [INFO][5363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" iface="eth0" netns="" Jul 14 22:11:40.940972 containerd[1574]: 2025-07-14 22:11:40.900 [INFO][5363] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Jul 14 22:11:40.940972 containerd[1574]: 2025-07-14 22:11:40.900 [INFO][5363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Jul 14 22:11:40.940972 containerd[1574]: 2025-07-14 22:11:40.925 [INFO][5373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" HandleID="k8s-pod-network.5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Workload="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:40.940972 containerd[1574]: 2025-07-14 22:11:40.925 [INFO][5373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:40.940972 containerd[1574]: 2025-07-14 22:11:40.925 [INFO][5373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:40.940972 containerd[1574]: 2025-07-14 22:11:40.931 [WARNING][5373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" HandleID="k8s-pod-network.5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Workload="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:40.940972 containerd[1574]: 2025-07-14 22:11:40.931 [INFO][5373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" HandleID="k8s-pod-network.5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Workload="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:40.940972 containerd[1574]: 2025-07-14 22:11:40.932 [INFO][5373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:40.940972 containerd[1574]: 2025-07-14 22:11:40.937 [INFO][5363] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Jul 14 22:11:40.941579 containerd[1574]: time="2025-07-14T22:11:40.941035343Z" level=info msg="TearDown network for sandbox \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\" successfully" Jul 14 22:11:40.941579 containerd[1574]: time="2025-07-14T22:11:40.941071031Z" level=info msg="StopPodSandbox for \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\" returns successfully" Jul 14 22:11:40.941979 containerd[1574]: time="2025-07-14T22:11:40.941936482Z" level=info msg="RemovePodSandbox for \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\"" Jul 14 22:11:40.944265 containerd[1574]: time="2025-07-14T22:11:40.944235668Z" level=info msg="Forcibly stopping sandbox \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\"" Jul 14 22:11:41.141620 containerd[1574]: 2025-07-14 22:11:41.001 [WARNING][5391] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0", GenerateName:"calico-apiserver-5b9b695b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce018b19-dfa1-40d7-99a8-a9a4fa008b27", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9b695b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c", Pod:"calico-apiserver-5b9b695b56-szjht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia14962a5d99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:41.141620 containerd[1574]: 2025-07-14 22:11:41.003 [INFO][5391] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Jul 14 22:11:41.141620 containerd[1574]: 2025-07-14 22:11:41.003 [INFO][5391] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" iface="eth0" netns="" Jul 14 22:11:41.141620 containerd[1574]: 2025-07-14 22:11:41.003 [INFO][5391] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Jul 14 22:11:41.141620 containerd[1574]: 2025-07-14 22:11:41.003 [INFO][5391] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Jul 14 22:11:41.141620 containerd[1574]: 2025-07-14 22:11:41.053 [INFO][5400] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" HandleID="k8s-pod-network.5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Workload="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:41.141620 containerd[1574]: 2025-07-14 22:11:41.053 [INFO][5400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:41.141620 containerd[1574]: 2025-07-14 22:11:41.053 [INFO][5400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:41.141620 containerd[1574]: 2025-07-14 22:11:41.058 [WARNING][5400] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" HandleID="k8s-pod-network.5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Workload="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:41.141620 containerd[1574]: 2025-07-14 22:11:41.058 [INFO][5400] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" HandleID="k8s-pod-network.5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Workload="localhost-k8s-calico--apiserver--5b9b695b56--szjht-eth0" Jul 14 22:11:41.141620 containerd[1574]: 2025-07-14 22:11:41.136 [INFO][5400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:41.141620 containerd[1574]: 2025-07-14 22:11:41.138 [INFO][5391] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2" Jul 14 22:11:41.142108 containerd[1574]: time="2025-07-14T22:11:41.141659717Z" level=info msg="TearDown network for sandbox \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\" successfully" Jul 14 22:11:41.573573 containerd[1574]: time="2025-07-14T22:11:41.573432319Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:11:41.573573 containerd[1574]: time="2025-07-14T22:11:41.573562816Z" level=info msg="RemovePodSandbox \"5e1ad0f3550d23e7e751178561254c0776752c21173c7c46d3207125e77be4a2\" returns successfully" Jul 14 22:11:41.575240 containerd[1574]: time="2025-07-14T22:11:41.575183949Z" level=info msg="StopPodSandbox for \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\"" Jul 14 22:11:41.598302 containerd[1574]: time="2025-07-14T22:11:41.598245816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:41.599345 containerd[1574]: time="2025-07-14T22:11:41.599275658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 14 22:11:41.601651 containerd[1574]: time="2025-07-14T22:11:41.601555028Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:41.608181 containerd[1574]: time="2025-07-14T22:11:41.608122505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:41.610487 containerd[1574]: time="2025-07-14T22:11:41.610315752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.40241682s" Jul 14 22:11:41.610625 containerd[1574]: time="2025-07-14T22:11:41.610484872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference 
\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 22:11:41.614008 containerd[1574]: time="2025-07-14T22:11:41.613967002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 22:11:41.615095 containerd[1574]: time="2025-07-14T22:11:41.615060154Z" level=info msg="CreateContainer within sandbox \"5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:11:41.685793 containerd[1574]: 2025-07-14 22:11:41.642 [WARNING][5418] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0", GenerateName:"calico-kube-controllers-b8d99f984-", Namespace:"calico-system", SelfLink:"", UID:"2b64c079-46df-4d78-82e9-6807a889d4ac", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b8d99f984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332", Pod:"calico-kube-controllers-b8d99f984-kmljb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliede15d390e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:41.685793 containerd[1574]: 2025-07-14 22:11:41.642 [INFO][5418] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Jul 14 22:11:41.685793 containerd[1574]: 2025-07-14 22:11:41.642 [INFO][5418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" iface="eth0" netns="" Jul 14 22:11:41.685793 containerd[1574]: 2025-07-14 22:11:41.642 [INFO][5418] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Jul 14 22:11:41.685793 containerd[1574]: 2025-07-14 22:11:41.642 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Jul 14 22:11:41.685793 containerd[1574]: 2025-07-14 22:11:41.671 [INFO][5430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" HandleID="k8s-pod-network.232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Workload="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:41.685793 containerd[1574]: 2025-07-14 22:11:41.671 [INFO][5430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:41.685793 containerd[1574]: 2025-07-14 22:11:41.671 [INFO][5430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:41.685793 containerd[1574]: 2025-07-14 22:11:41.676 [WARNING][5430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" HandleID="k8s-pod-network.232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Workload="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:41.685793 containerd[1574]: 2025-07-14 22:11:41.676 [INFO][5430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" HandleID="k8s-pod-network.232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Workload="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:41.685793 containerd[1574]: 2025-07-14 22:11:41.678 [INFO][5430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:41.685793 containerd[1574]: 2025-07-14 22:11:41.681 [INFO][5418] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Jul 14 22:11:41.686964 containerd[1574]: time="2025-07-14T22:11:41.685829789Z" level=info msg="TearDown network for sandbox \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\" successfully" Jul 14 22:11:41.686964 containerd[1574]: time="2025-07-14T22:11:41.685860808Z" level=info msg="StopPodSandbox for \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\" returns successfully" Jul 14 22:11:41.686964 containerd[1574]: time="2025-07-14T22:11:41.686487956Z" level=info msg="RemovePodSandbox for \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\"" Jul 14 22:11:41.686964 containerd[1574]: time="2025-07-14T22:11:41.686551807Z" level=info msg="Forcibly stopping sandbox \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\"" Jul 14 22:11:41.811864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount648022058.mount: Deactivated successfully. Jul 14 22:11:41.820983 containerd[1574]: 2025-07-14 22:11:41.774 [WARNING][5450] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0", GenerateName:"calico-kube-controllers-b8d99f984-", Namespace:"calico-system", SelfLink:"", UID:"2b64c079-46df-4d78-82e9-6807a889d4ac", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b8d99f984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b0b47633a5179be4bc9db04750d7c0c728abed50968559bdb259b3a94f93332", Pod:"calico-kube-controllers-b8d99f984-kmljb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliede15d390e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:41.820983 containerd[1574]: 2025-07-14 22:11:41.774 [INFO][5450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Jul 14 22:11:41.820983 containerd[1574]: 2025-07-14 22:11:41.774 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" iface="eth0" netns="" Jul 14 22:11:41.820983 containerd[1574]: 2025-07-14 22:11:41.774 [INFO][5450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Jul 14 22:11:41.820983 containerd[1574]: 2025-07-14 22:11:41.774 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Jul 14 22:11:41.820983 containerd[1574]: 2025-07-14 22:11:41.805 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" HandleID="k8s-pod-network.232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Workload="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:41.820983 containerd[1574]: 2025-07-14 22:11:41.805 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:41.820983 containerd[1574]: 2025-07-14 22:11:41.805 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:41.820983 containerd[1574]: 2025-07-14 22:11:41.813 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" HandleID="k8s-pod-network.232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Workload="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:41.820983 containerd[1574]: 2025-07-14 22:11:41.813 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" HandleID="k8s-pod-network.232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Workload="localhost-k8s-calico--kube--controllers--b8d99f984--kmljb-eth0" Jul 14 22:11:41.820983 containerd[1574]: 2025-07-14 22:11:41.815 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:41.820983 containerd[1574]: 2025-07-14 22:11:41.818 [INFO][5450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba" Jul 14 22:11:41.821779 containerd[1574]: time="2025-07-14T22:11:41.821037410Z" level=info msg="TearDown network for sandbox \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\" successfully" Jul 14 22:11:42.294811 containerd[1574]: time="2025-07-14T22:11:42.294620981Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:11:42.294811 containerd[1574]: time="2025-07-14T22:11:42.294705842Z" level=info msg="RemovePodSandbox \"232a8fd839e575883c2db7e625eff7739ec04f0733054829e51cc210eeb3fdba\" returns successfully" Jul 14 22:11:42.294811 containerd[1574]: time="2025-07-14T22:11:42.294758281Z" level=info msg="CreateContainer within sandbox \"5f58e4bbf5cbb6c5ff1f4b890f32ef7207f16206a8a6727692e6f525b723077c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"614f392d96ad515dde269f104df3d116b55f3856252bb3a66b01e053bedb6dc0\"" Jul 14 22:11:42.295600 containerd[1574]: time="2025-07-14T22:11:42.295555512Z" level=info msg="StartContainer for \"614f392d96ad515dde269f104df3d116b55f3856252bb3a66b01e053bedb6dc0\"" Jul 14 22:11:42.296533 containerd[1574]: time="2025-07-14T22:11:42.296492148Z" level=info msg="StopPodSandbox for \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\"" Jul 14 22:11:42.387360 systemd[1]: run-containerd-runc-k8s.io-614f392d96ad515dde269f104df3d116b55f3856252bb3a66b01e053bedb6dc0-runc.fJHib2.mount: Deactivated successfully. Jul 14 22:11:42.568328 containerd[1574]: time="2025-07-14T22:11:42.568269463Z" level=info msg="StartContainer for \"614f392d96ad515dde269f104df3d116b55f3856252bb3a66b01e053bedb6dc0\" returns successfully" Jul 14 22:11:42.688029 containerd[1574]: 2025-07-14 22:11:42.636 [WARNING][5479] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0", GenerateName:"calico-apiserver-5b9b695b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"9836a141-78f0-47fd-89e8-6b74ea1f1f07", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9b695b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8", Pod:"calico-apiserver-5b9b695b56-kzv98", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d1a90f837c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:42.688029 containerd[1574]: 2025-07-14 22:11:42.636 [INFO][5479] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Jul 14 22:11:42.688029 containerd[1574]: 2025-07-14 22:11:42.636 [INFO][5479] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" iface="eth0" netns="" Jul 14 22:11:42.688029 containerd[1574]: 2025-07-14 22:11:42.636 [INFO][5479] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Jul 14 22:11:42.688029 containerd[1574]: 2025-07-14 22:11:42.636 [INFO][5479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Jul 14 22:11:42.688029 containerd[1574]: 2025-07-14 22:11:42.667 [INFO][5527] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" HandleID="k8s-pod-network.108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Workload="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:42.688029 containerd[1574]: 2025-07-14 22:11:42.667 [INFO][5527] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:42.688029 containerd[1574]: 2025-07-14 22:11:42.668 [INFO][5527] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:42.688029 containerd[1574]: 2025-07-14 22:11:42.674 [WARNING][5527] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" HandleID="k8s-pod-network.108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Workload="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:42.688029 containerd[1574]: 2025-07-14 22:11:42.674 [INFO][5527] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" HandleID="k8s-pod-network.108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Workload="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:42.688029 containerd[1574]: 2025-07-14 22:11:42.676 [INFO][5527] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:42.688029 containerd[1574]: 2025-07-14 22:11:42.680 [INFO][5479] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Jul 14 22:11:42.691272 containerd[1574]: time="2025-07-14T22:11:42.688080339Z" level=info msg="TearDown network for sandbox \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\" successfully" Jul 14 22:11:42.691272 containerd[1574]: time="2025-07-14T22:11:42.688106368Z" level=info msg="StopPodSandbox for \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\" returns successfully" Jul 14 22:11:42.691272 containerd[1574]: time="2025-07-14T22:11:42.689538553Z" level=info msg="RemovePodSandbox for \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\"" Jul 14 22:11:42.691272 containerd[1574]: time="2025-07-14T22:11:42.689795701Z" level=info msg="Forcibly stopping sandbox \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\"" Jul 14 22:11:42.865459 containerd[1574]: 2025-07-14 22:11:42.826 [WARNING][5546] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0", GenerateName:"calico-apiserver-5b9b695b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"9836a141-78f0-47fd-89e8-6b74ea1f1f07", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9b695b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8", Pod:"calico-apiserver-5b9b695b56-kzv98", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d1a90f837c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:42.865459 containerd[1574]: 2025-07-14 22:11:42.826 [INFO][5546] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Jul 14 22:11:42.865459 containerd[1574]: 2025-07-14 22:11:42.826 [INFO][5546] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" iface="eth0" netns="" Jul 14 22:11:42.865459 containerd[1574]: 2025-07-14 22:11:42.826 [INFO][5546] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Jul 14 22:11:42.865459 containerd[1574]: 2025-07-14 22:11:42.826 [INFO][5546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Jul 14 22:11:42.865459 containerd[1574]: 2025-07-14 22:11:42.851 [INFO][5555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" HandleID="k8s-pod-network.108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Workload="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:42.865459 containerd[1574]: 2025-07-14 22:11:42.851 [INFO][5555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:42.865459 containerd[1574]: 2025-07-14 22:11:42.851 [INFO][5555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:42.865459 containerd[1574]: 2025-07-14 22:11:42.856 [WARNING][5555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" HandleID="k8s-pod-network.108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Workload="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:42.865459 containerd[1574]: 2025-07-14 22:11:42.856 [INFO][5555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" HandleID="k8s-pod-network.108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Workload="localhost-k8s-calico--apiserver--5b9b695b56--kzv98-eth0" Jul 14 22:11:42.865459 containerd[1574]: 2025-07-14 22:11:42.857 [INFO][5555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:42.865459 containerd[1574]: 2025-07-14 22:11:42.861 [INFO][5546] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c" Jul 14 22:11:42.869055 containerd[1574]: time="2025-07-14T22:11:42.865483263Z" level=info msg="TearDown network for sandbox \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\" successfully" Jul 14 22:11:42.999414 containerd[1574]: time="2025-07-14T22:11:42.999338992Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:11:42.999626 containerd[1574]: time="2025-07-14T22:11:42.999576943Z" level=info msg="RemovePodSandbox \"108525e0d6353e1ad89add285de5b087d69722ddb941218197453d229417856c\" returns successfully" Jul 14 22:11:43.000316 containerd[1574]: time="2025-07-14T22:11:43.000280116Z" level=info msg="StopPodSandbox for \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\"" Jul 14 22:11:43.038580 containerd[1574]: time="2025-07-14T22:11:43.038522707Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:11:43.042212 containerd[1574]: time="2025-07-14T22:11:43.042125177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 14 22:11:43.046278 containerd[1574]: time="2025-07-14T22:11:43.046234788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.432091071s" Jul 14 22:11:43.046327 containerd[1574]: time="2025-07-14T22:11:43.046280815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 22:11:43.048708 containerd[1574]: time="2025-07-14T22:11:43.048672920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 14 22:11:43.049811 containerd[1574]: time="2025-07-14T22:11:43.049786421Z" level=info msg="CreateContainer within sandbox \"23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:11:43.083382 containerd[1574]: 2025-07-14 22:11:43.038 [WARNING][5572] cni-plugin/k8s.go 604: CNI_CONTAINERID does 
not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9c73f2d5-835f-459b-8d41-d88723278569", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 10, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2", Pod:"coredns-7c65d6cfc9-57dnv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0eaa16e2ed4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:43.083382 containerd[1574]: 2025-07-14 22:11:43.038 [INFO][5572] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Jul 14 22:11:43.083382 containerd[1574]: 2025-07-14 22:11:43.038 [INFO][5572] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" iface="eth0" netns="" Jul 14 22:11:43.083382 containerd[1574]: 2025-07-14 22:11:43.038 [INFO][5572] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Jul 14 22:11:43.083382 containerd[1574]: 2025-07-14 22:11:43.038 [INFO][5572] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Jul 14 22:11:43.083382 containerd[1574]: 2025-07-14 22:11:43.066 [INFO][5581] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" HandleID="k8s-pod-network.4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Workload="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:43.083382 containerd[1574]: 2025-07-14 22:11:43.067 [INFO][5581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:43.083382 containerd[1574]: 2025-07-14 22:11:43.067 [INFO][5581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
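The teardown entries around this point all follow one Calico IPAM shape: serialize on the host-wide lock ("About to acquire" / "Acquired" / "Released host-wide IPAM lock"), try to release the allocation by handleID, and treat a missing allocation as success ("Asked to release address but it doesn't exist. Ignoring"). That tolerance is what keeps the later forced RemovePodSandbox passes in this trace harmless. A minimal Go sketch of the idempotent-release shape, with an in-memory map standing in for Calico's datastore (the names here are illustrative assumptions, not Calico's actual code):

package main

import (
	"fmt"
	"sync"
)

// ipamStore is an illustrative stand-in for Calico's datastore; the
// mutex plays the role of the "host-wide IPAM lock" seen in the log.
type ipamStore struct {
	mu          sync.Mutex
	allocations map[string]string // handleID -> address
}

// releaseByHandle mirrors the logged behaviour: take the host-wide
// lock, release the address recorded under handleID, and treat a
// missing allocation as success so repeated teardowns stay idempotent.
func (s *ipamStore) releaseByHandle(handleID string) error {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	addr, ok := s.allocations[handleID]
	if !ok {
		// "Asked to release address but it doesn't exist. Ignoring"
		fmt.Printf("warning: no allocation for %s, ignoring\n", handleID)
		return nil
	}
	delete(s.allocations, handleID)
	fmt.Printf("released %s (handle %s)\n", addr, handleID)
	return nil
}

func main() {
	s := &ipamStore{allocations: map[string]string{
		"k8s-pod-network.example": "192.168.88.130", // hypothetical handle
	}}
	s.releaseByHandle("k8s-pod-network.example") // first DEL releases the IP
	s.releaseByHandle("k8s-pod-network.example") // forced second DEL is a no-op
}

Because not-found returns nil, the second DEL for the same sandbox, which happens for every sandbox in this section, logs a warning and still reports success.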
Jul 14 22:11:43.083382 containerd[1574]: 2025-07-14 22:11:43.073 [WARNING][5581] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" HandleID="k8s-pod-network.4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Workload="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:43.083382 containerd[1574]: 2025-07-14 22:11:43.073 [INFO][5581] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" HandleID="k8s-pod-network.4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Workload="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:43.083382 containerd[1574]: 2025-07-14 22:11:43.076 [INFO][5581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:43.083382 containerd[1574]: 2025-07-14 22:11:43.080 [INFO][5572] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Jul 14 22:11:43.083912 containerd[1574]: time="2025-07-14T22:11:43.083482914Z" level=info msg="TearDown network for sandbox \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\" successfully" Jul 14 22:11:43.083912 containerd[1574]: time="2025-07-14T22:11:43.083657425Z" level=info msg="StopPodSandbox for \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\" returns successfully" Jul 14 22:11:43.085537 containerd[1574]: time="2025-07-14T22:11:43.084278403Z" level=info msg="RemovePodSandbox for \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\"" Jul 14 22:11:43.085537 containerd[1574]: time="2025-07-14T22:11:43.084314270Z" level=info msg="Forcibly stopping sandbox \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\"" Jul 14 22:11:43.245179 kubelet[2717]: I0714 22:11:43.244248 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b9b695b56-szjht" podStartSLOduration=31.891836487 podStartE2EDuration="43.244218732s" podCreationTimestamp="2025-07-14 22:11:00 +0000 UTC" firstStartedPulling="2025-07-14 22:11:30.25934381 +0000 UTC m=+49.577378056" lastFinishedPulling="2025-07-14 22:11:41.611726055 +0000 UTC m=+60.929760301" observedRunningTime="2025-07-14 22:11:43.243555515 +0000 UTC m=+62.561589781" watchObservedRunningTime="2025-07-14 22:11:43.244218732 +0000 UTC m=+62.562252978" Jul 14 22:11:43.420277 containerd[1574]: 2025-07-14 22:11:43.379 [WARNING][5598] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9c73f2d5-835f-459b-8d41-d88723278569", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 10, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"472b6959c8ff7a0e3c9c2c6ea04e78ddce9512f9f70fe4d0f71cc22a138704c2", Pod:"coredns-7c65d6cfc9-57dnv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0eaa16e2ed4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:43.420277 containerd[1574]: 2025-07-14 22:11:43.380 [INFO][5598] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Jul 14 22:11:43.420277 containerd[1574]: 2025-07-14 22:11:43.380 [INFO][5598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" iface="eth0" netns="" Jul 14 22:11:43.420277 containerd[1574]: 2025-07-14 22:11:43.380 [INFO][5598] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Jul 14 22:11:43.420277 containerd[1574]: 2025-07-14 22:11:43.380 [INFO][5598] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Jul 14 22:11:43.420277 containerd[1574]: 2025-07-14 22:11:43.404 [INFO][5608] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" HandleID="k8s-pod-network.4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Workload="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:43.420277 containerd[1574]: 2025-07-14 22:11:43.404 [INFO][5608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:43.420277 containerd[1574]: 2025-07-14 22:11:43.404 [INFO][5608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:11:43.420277 containerd[1574]: 2025-07-14 22:11:43.413 [WARNING][5608] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" HandleID="k8s-pod-network.4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Workload="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:43.420277 containerd[1574]: 2025-07-14 22:11:43.413 [INFO][5608] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" HandleID="k8s-pod-network.4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Workload="localhost-k8s-coredns--7c65d6cfc9--57dnv-eth0" Jul 14 22:11:43.420277 containerd[1574]: 2025-07-14 22:11:43.414 [INFO][5608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:43.420277 containerd[1574]: 2025-07-14 22:11:43.417 [INFO][5598] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c" Jul 14 22:11:43.420702 containerd[1574]: time="2025-07-14T22:11:43.420284322Z" level=info msg="TearDown network for sandbox \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\" successfully" Jul 14 22:11:43.602912 containerd[1574]: time="2025-07-14T22:11:43.602858121Z" level=info msg="CreateContainer within sandbox \"23aa065068485ee36f453882e481854ea6a9d8e413d0b45be80dc474573094c8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9cc3feb568ec2347aebd78919c2474dbf0cf7e7709157171954bfd379afd3862\"" Jul 14 22:11:43.604156 containerd[1574]: time="2025-07-14T22:11:43.604122939Z" level=info msg="StartContainer for \"9cc3feb568ec2347aebd78919c2474dbf0cf7e7709157171954bfd379afd3862\"" Jul 14 22:11:43.712458 containerd[1574]: time="2025-07-14T22:11:43.712406827Z" level=info msg="StartContainer for \"9cc3feb568ec2347aebd78919c2474dbf0cf7e7709157171954bfd379afd3862\" returns successfully" Jul 14 22:11:43.733225 containerd[1574]: time="2025-07-14T22:11:43.733142357Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:11:43.733358 containerd[1574]: time="2025-07-14T22:11:43.733257186Z" level=info msg="RemovePodSandbox \"4f692078fd425d16b4adc7f0e33cc559a7c19e12780bf5a0a9579aaae531344c\" returns successfully" Jul 14 22:11:43.734060 containerd[1574]: time="2025-07-14T22:11:43.734027135Z" level=info msg="StopPodSandbox for \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\"" Jul 14 22:11:43.837173 containerd[1574]: 2025-07-14 22:11:43.776 [WARNING][5666] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bhk87-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7829a7f5-eec5-446a-bd2c-44244faa0a80", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016", Pod:"csi-node-driver-bhk87", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic0c36317d83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:43.837173 containerd[1574]: 2025-07-14 22:11:43.777 [INFO][5666] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Jul 14 22:11:43.837173 containerd[1574]: 2025-07-14 22:11:43.777 [INFO][5666] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" iface="eth0" netns="" Jul 14 22:11:43.837173 containerd[1574]: 2025-07-14 22:11:43.777 [INFO][5666] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Jul 14 22:11:43.837173 containerd[1574]: 2025-07-14 22:11:43.777 [INFO][5666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Jul 14 22:11:43.837173 containerd[1574]: 2025-07-14 22:11:43.813 [INFO][5674] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" HandleID="k8s-pod-network.c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Workload="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:43.837173 containerd[1574]: 2025-07-14 22:11:43.813 [INFO][5674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:43.837173 containerd[1574]: 2025-07-14 22:11:43.814 [INFO][5674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:43.837173 containerd[1574]: 2025-07-14 22:11:43.821 [WARNING][5674] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" HandleID="k8s-pod-network.c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Workload="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:43.837173 containerd[1574]: 2025-07-14 22:11:43.821 [INFO][5674] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" HandleID="k8s-pod-network.c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Workload="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:43.837173 containerd[1574]: 2025-07-14 22:11:43.826 [INFO][5674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:43.837173 containerd[1574]: 2025-07-14 22:11:43.834 [INFO][5666] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Jul 14 22:11:43.838252 containerd[1574]: time="2025-07-14T22:11:43.837847642Z" level=info msg="TearDown network for sandbox \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\" successfully" Jul 14 22:11:43.838252 containerd[1574]: time="2025-07-14T22:11:43.837879351Z" level=info msg="StopPodSandbox for \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\" returns successfully" Jul 14 22:11:43.842576 containerd[1574]: time="2025-07-14T22:11:43.839817115Z" level=info msg="RemovePodSandbox for \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\"" Jul 14 22:11:43.842576 containerd[1574]: time="2025-07-14T22:11:43.839849346Z" level=info msg="Forcibly stopping sandbox \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\"" Jul 14 22:11:44.003879 containerd[1574]: 2025-07-14 22:11:43.918 [WARNING][5691] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bhk87-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7829a7f5-eec5-446a-bd2c-44244faa0a80", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016", Pod:"csi-node-driver-bhk87", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic0c36317d83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:44.003879 containerd[1574]: 2025-07-14 22:11:43.918 [INFO][5691] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Jul 14 22:11:44.003879 containerd[1574]: 2025-07-14 22:11:43.918 [INFO][5691] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" iface="eth0" netns="" Jul 14 22:11:44.003879 containerd[1574]: 2025-07-14 22:11:43.918 [INFO][5691] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Jul 14 22:11:44.003879 containerd[1574]: 2025-07-14 22:11:43.918 [INFO][5691] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Jul 14 22:11:44.003879 containerd[1574]: 2025-07-14 22:11:43.976 [INFO][5701] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" HandleID="k8s-pod-network.c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Workload="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:44.003879 containerd[1574]: 2025-07-14 22:11:43.977 [INFO][5701] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:44.003879 containerd[1574]: 2025-07-14 22:11:43.977 [INFO][5701] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:44.003879 containerd[1574]: 2025-07-14 22:11:43.994 [WARNING][5701] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" HandleID="k8s-pod-network.c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Workload="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:44.003879 containerd[1574]: 2025-07-14 22:11:43.995 [INFO][5701] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" HandleID="k8s-pod-network.c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Workload="localhost-k8s-csi--node--driver--bhk87-eth0" Jul 14 22:11:44.003879 containerd[1574]: 2025-07-14 22:11:43.997 [INFO][5701] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:44.003879 containerd[1574]: 2025-07-14 22:11:44.000 [INFO][5691] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802" Jul 14 22:11:44.003879 containerd[1574]: time="2025-07-14T22:11:44.003722472Z" level=info msg="TearDown network for sandbox \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\" successfully" Jul 14 22:11:44.466742 containerd[1574]: time="2025-07-14T22:11:44.466636328Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:11:44.466742 containerd[1574]: time="2025-07-14T22:11:44.466734324Z" level=info msg="RemovePodSandbox \"c586c0a2c02edbbeb1ab92223f54d0adb04ae161357b7f5580349af11b752802\" returns successfully" Jul 14 22:11:44.467612 containerd[1574]: time="2025-07-14T22:11:44.467581171Z" level=info msg="StopPodSandbox for \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\"" Jul 14 22:11:44.566995 containerd[1574]: 2025-07-14 22:11:44.514 [WARNING][5723] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" WorkloadEndpoint="localhost-k8s-whisker--f4c76c779--rgtfh-eth0" Jul 14 22:11:44.566995 containerd[1574]: 2025-07-14 22:11:44.514 [INFO][5723] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Jul 14 22:11:44.566995 containerd[1574]: 2025-07-14 22:11:44.514 [INFO][5723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" iface="eth0" netns="" Jul 14 22:11:44.566995 containerd[1574]: 2025-07-14 22:11:44.514 [INFO][5723] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Jul 14 22:11:44.566995 containerd[1574]: 2025-07-14 22:11:44.514 [INFO][5723] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Jul 14 22:11:44.566995 containerd[1574]: 2025-07-14 22:11:44.544 [INFO][5731] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" HandleID="k8s-pod-network.6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Workload="localhost-k8s-whisker--f4c76c779--rgtfh-eth0" Jul 14 22:11:44.566995 containerd[1574]: 2025-07-14 22:11:44.545 [INFO][5731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:44.566995 containerd[1574]: 2025-07-14 22:11:44.545 [INFO][5731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:44.566995 containerd[1574]: 2025-07-14 22:11:44.553 [WARNING][5731] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" HandleID="k8s-pod-network.6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Workload="localhost-k8s-whisker--f4c76c779--rgtfh-eth0" Jul 14 22:11:44.566995 containerd[1574]: 2025-07-14 22:11:44.554 [INFO][5731] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" HandleID="k8s-pod-network.6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Workload="localhost-k8s-whisker--f4c76c779--rgtfh-eth0" Jul 14 22:11:44.566995 containerd[1574]: 2025-07-14 22:11:44.558 [INFO][5731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:44.566995 containerd[1574]: 2025-07-14 22:11:44.563 [INFO][5723] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Jul 14 22:11:44.567550 containerd[1574]: time="2025-07-14T22:11:44.567050857Z" level=info msg="TearDown network for sandbox \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\" successfully" Jul 14 22:11:44.567550 containerd[1574]: time="2025-07-14T22:11:44.567089780Z" level=info msg="StopPodSandbox for \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\" returns successfully" Jul 14 22:11:44.567762 containerd[1574]: time="2025-07-14T22:11:44.567717009Z" level=info msg="RemovePodSandbox for \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\"" Jul 14 22:11:44.567812 containerd[1574]: time="2025-07-14T22:11:44.567765571Z" level=info msg="Forcibly stopping sandbox \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\"" Jul 14 22:11:44.666775 containerd[1574]: 2025-07-14 22:11:44.616 [WARNING][5749] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" WorkloadEndpoint="localhost-k8s-whisker--f4c76c779--rgtfh-eth0" Jul 14 22:11:44.666775 containerd[1574]: 2025-07-14 22:11:44.616 [INFO][5749] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Jul 14 22:11:44.666775 containerd[1574]: 2025-07-14 22:11:44.617 [INFO][5749] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" iface="eth0" netns="" Jul 14 22:11:44.666775 containerd[1574]: 2025-07-14 22:11:44.617 [INFO][5749] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Jul 14 22:11:44.666775 containerd[1574]: 2025-07-14 22:11:44.617 [INFO][5749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Jul 14 22:11:44.666775 containerd[1574]: 2025-07-14 22:11:44.651 [INFO][5757] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" HandleID="k8s-pod-network.6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Workload="localhost-k8s-whisker--f4c76c779--rgtfh-eth0" Jul 14 22:11:44.666775 containerd[1574]: 2025-07-14 22:11:44.651 [INFO][5757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:44.666775 containerd[1574]: 2025-07-14 22:11:44.651 [INFO][5757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:11:44.666775 containerd[1574]: 2025-07-14 22:11:44.657 [WARNING][5757] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" HandleID="k8s-pod-network.6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Workload="localhost-k8s-whisker--f4c76c779--rgtfh-eth0" Jul 14 22:11:44.666775 containerd[1574]: 2025-07-14 22:11:44.657 [INFO][5757] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" HandleID="k8s-pod-network.6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Workload="localhost-k8s-whisker--f4c76c779--rgtfh-eth0" Jul 14 22:11:44.666775 containerd[1574]: 2025-07-14 22:11:44.659 [INFO][5757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:11:44.666775 containerd[1574]: 2025-07-14 22:11:44.663 [INFO][5749] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797" Jul 14 22:11:44.667494 containerd[1574]: time="2025-07-14T22:11:44.666835539Z" level=info msg="TearDown network for sandbox \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\" successfully" Jul 14 22:11:44.671903 containerd[1574]: time="2025-07-14T22:11:44.671850167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:11:44.671974 containerd[1574]: time="2025-07-14T22:11:44.671933886Z" level=info msg="RemovePodSandbox \"6a7074fbee4af3c5ef7cfbfeb59df9f50c51ab2a8746380f2a12b15d7c6e3797\" returns successfully" Jul 14 22:11:44.672597 containerd[1574]: time="2025-07-14T22:11:44.672553581Z" level=info msg="StopPodSandbox for \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\"" Jul 14 22:11:44.849802 containerd[1574]: 2025-07-14 22:11:44.752 [WARNING][5775] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"386dcad9-cbe7-41a3-a762-3963d0cad867", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 10, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2", Pod:"coredns-7c65d6cfc9-swnzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia267a7c017a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:11:44.849802 containerd[1574]: 2025-07-14 22:11:44.753 [INFO][5775] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Jul 14 22:11:44.849802 containerd[1574]: 2025-07-14 22:11:44.753 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" iface="eth0" netns="" Jul 14 22:11:44.849802 containerd[1574]: 2025-07-14 22:11:44.753 [INFO][5775] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Jul 14 22:11:44.849802 containerd[1574]: 2025-07-14 22:11:44.753 [INFO][5775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Jul 14 22:11:44.849802 containerd[1574]: 2025-07-14 22:11:44.827 [INFO][5784] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" HandleID="k8s-pod-network.3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Workload="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0" Jul 14 22:11:44.849802 containerd[1574]: 2025-07-14 22:11:44.828 [INFO][5784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:11:44.849802 containerd[1574]: 2025-07-14 22:11:44.828 [INFO][5784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:11:44.849802 containerd[1574]: 2025-07-14 22:11:44.837 [WARNING][5784] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" HandleID="k8s-pod-network.3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Workload="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0"
Jul 14 22:11:44.849802 containerd[1574]: 2025-07-14 22:11:44.837 [INFO][5784] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" HandleID="k8s-pod-network.3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Workload="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0"
Jul 14 22:11:44.849802 containerd[1574]: 2025-07-14 22:11:44.839 [INFO][5784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:11:44.849802 containerd[1574]: 2025-07-14 22:11:44.844 [INFO][5775] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0"
Jul 14 22:11:44.851569 containerd[1574]: time="2025-07-14T22:11:44.850309616Z" level=info msg="TearDown network for sandbox \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\" successfully"
Jul 14 22:11:44.851569 containerd[1574]: time="2025-07-14T22:11:44.850344753Z" level=info msg="StopPodSandbox for \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\" returns successfully"
Jul 14 22:11:44.855121 containerd[1574]: time="2025-07-14T22:11:44.855091353Z" level=info msg="RemovePodSandbox for \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\""
Jul 14 22:11:44.856187 containerd[1574]: time="2025-07-14T22:11:44.855678697Z" level=info msg="Forcibly stopping sandbox \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\""
Jul 14 22:11:44.926683 kubelet[2717]: I0714 22:11:44.926605 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b9b695b56-kzv98" podStartSLOduration=32.228955124 podStartE2EDuration="44.926562821s" podCreationTimestamp="2025-07-14 22:11:00 +0000 UTC" firstStartedPulling="2025-07-14 22:11:30.349457776 +0000 UTC m=+49.667492022" lastFinishedPulling="2025-07-14 22:11:43.047065483 +0000 UTC m=+62.365099719" observedRunningTime="2025-07-14 22:11:44.235547068 +0000 UTC m=+63.553581314" watchObservedRunningTime="2025-07-14 22:11:44.926562821 +0000 UTC m=+64.244597067"
Jul 14 22:11:44.974061 containerd[1574]: 2025-07-14 22:11:44.900 [WARNING][5802] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"386dcad9-cbe7-41a3-a762-3963d0cad867", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 10, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d067eee1f43ddac1601183bf91d4def883ab9e1d065f842218214de7880f53a2", Pod:"coredns-7c65d6cfc9-swnzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia267a7c017a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 22:11:44.974061 containerd[1574]: 2025-07-14 22:11:44.900 [INFO][5802] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0"
Jul 14 22:11:44.974061 containerd[1574]: 2025-07-14 22:11:44.901 [INFO][5802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" iface="eth0" netns=""
Jul 14 22:11:44.974061 containerd[1574]: 2025-07-14 22:11:44.901 [INFO][5802] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0"
Jul 14 22:11:44.974061 containerd[1574]: 2025-07-14 22:11:44.901 [INFO][5802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0"
Jul 14 22:11:44.974061 containerd[1574]: 2025-07-14 22:11:44.956 [INFO][5810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" HandleID="k8s-pod-network.3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Workload="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0"
Jul 14 22:11:44.974061 containerd[1574]: 2025-07-14 22:11:44.956 [INFO][5810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:11:44.974061 containerd[1574]: 2025-07-14 22:11:44.958 [INFO][5810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:11:44.974061 containerd[1574]: 2025-07-14 22:11:44.964 [WARNING][5810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" HandleID="k8s-pod-network.3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Workload="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0"
Jul 14 22:11:44.974061 containerd[1574]: 2025-07-14 22:11:44.964 [INFO][5810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" HandleID="k8s-pod-network.3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0" Workload="localhost-k8s-coredns--7c65d6cfc9--swnzl-eth0"
Jul 14 22:11:44.974061 containerd[1574]: 2025-07-14 22:11:44.967 [INFO][5810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:11:44.974061 containerd[1574]: 2025-07-14 22:11:44.970 [INFO][5802] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0"
Jul 14 22:11:44.974657 containerd[1574]: time="2025-07-14T22:11:44.974110276Z" level=info msg="TearDown network for sandbox \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\" successfully"
Jul 14 22:11:44.980566 containerd[1574]: time="2025-07-14T22:11:44.980490725Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 14 22:11:44.980652 containerd[1574]: time="2025-07-14T22:11:44.980614128Z" level=info msg="RemovePodSandbox \"3bb48f8ca8ea9ac8fb9fed7316afad7a7df9b266d5ffb9c03e51c125bddcf2f0\" returns successfully"
Jul 14 22:11:44.981242 containerd[1574]: time="2025-07-14T22:11:44.981204167Z" level=info msg="StopPodSandbox for \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\""
Jul 14 22:11:45.070975 containerd[1574]: 2025-07-14 22:11:45.026 [WARNING][5829] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--4trbc-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830", Pod:"goldmane-58fd7646b9-4trbc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f7640eaf01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 22:11:45.070975 containerd[1574]: 2025-07-14 22:11:45.026 [INFO][5829] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f"
Jul 14 22:11:45.070975 containerd[1574]: 2025-07-14 22:11:45.026 [INFO][5829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" iface="eth0" netns=""
Jul 14 22:11:45.070975 containerd[1574]: 2025-07-14 22:11:45.026 [INFO][5829] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f"
Jul 14 22:11:45.070975 containerd[1574]: 2025-07-14 22:11:45.026 [INFO][5829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f"
Jul 14 22:11:45.070975 containerd[1574]: 2025-07-14 22:11:45.052 [INFO][5838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" HandleID="k8s-pod-network.6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Workload="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0"
Jul 14 22:11:45.070975 containerd[1574]: 2025-07-14 22:11:45.052 [INFO][5838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:11:45.070975 containerd[1574]: 2025-07-14 22:11:45.052 [INFO][5838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:11:45.070975 containerd[1574]: 2025-07-14 22:11:45.059 [WARNING][5838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" HandleID="k8s-pod-network.6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Workload="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0"
Jul 14 22:11:45.070975 containerd[1574]: 2025-07-14 22:11:45.059 [INFO][5838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" HandleID="k8s-pod-network.6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Workload="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0"
Jul 14 22:11:45.070975 containerd[1574]: 2025-07-14 22:11:45.060 [INFO][5838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:11:45.070975 containerd[1574]: 2025-07-14 22:11:45.064 [INFO][5829] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f"
Jul 14 22:11:45.106981 containerd[1574]: time="2025-07-14T22:11:45.071567565Z" level=info msg="TearDown network for sandbox \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\" successfully"
Jul 14 22:11:45.106981 containerd[1574]: time="2025-07-14T22:11:45.071598605Z" level=info msg="StopPodSandbox for \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\" returns successfully"
Jul 14 22:11:45.106981 containerd[1574]: time="2025-07-14T22:11:45.073714026Z" level=info msg="RemovePodSandbox for \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\""
Jul 14 22:11:45.106981 containerd[1574]: time="2025-07-14T22:11:45.073754924Z" level=info msg="Forcibly stopping sandbox \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\""
Jul 14 22:11:45.107141 kubelet[2717]: I0714 22:11:45.075018 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 14 22:11:45.244266 containerd[1574]: 2025-07-14 22:11:45.194 [WARNING][5856] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--4trbc-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f6ace18e-bbe6-4202-8c10-c8f8fba9e6ed", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 11, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830", Pod:"goldmane-58fd7646b9-4trbc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f7640eaf01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 22:11:45.244266 containerd[1574]: 2025-07-14 22:11:45.194 [INFO][5856] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f"
Jul 14 22:11:45.244266 containerd[1574]: 2025-07-14 22:11:45.194 [INFO][5856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" iface="eth0" netns=""
Jul 14 22:11:45.244266 containerd[1574]: 2025-07-14 22:11:45.194 [INFO][5856] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f"
Jul 14 22:11:45.244266 containerd[1574]: 2025-07-14 22:11:45.195 [INFO][5856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f"
Jul 14 22:11:45.244266 containerd[1574]: 2025-07-14 22:11:45.228 [INFO][5864] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" HandleID="k8s-pod-network.6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Workload="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0"
Jul 14 22:11:45.244266 containerd[1574]: 2025-07-14 22:11:45.228 [INFO][5864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:11:45.244266 containerd[1574]: 2025-07-14 22:11:45.228 [INFO][5864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:11:45.244266 containerd[1574]: 2025-07-14 22:11:45.236 [WARNING][5864] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" HandleID="k8s-pod-network.6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Workload="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0"
Jul 14 22:11:45.244266 containerd[1574]: 2025-07-14 22:11:45.236 [INFO][5864] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" HandleID="k8s-pod-network.6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f" Workload="localhost-k8s-goldmane--58fd7646b9--4trbc-eth0"
Jul 14 22:11:45.244266 containerd[1574]: 2025-07-14 22:11:45.237 [INFO][5864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:11:45.244266 containerd[1574]: 2025-07-14 22:11:45.241 [INFO][5856] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f"
Jul 14 22:11:45.244718 containerd[1574]: time="2025-07-14T22:11:45.244301134Z" level=info msg="TearDown network for sandbox \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\" successfully"
Jul 14 22:11:45.303935 containerd[1574]: time="2025-07-14T22:11:45.303861514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 14 22:11:45.304082 containerd[1574]: time="2025-07-14T22:11:45.303962005Z" level=info msg="RemovePodSandbox \"6d314826529a4b1fc86f17b4589531e6a9a698af46c9b7d98cda2b4e4a9e141f\" returns successfully"
Jul 14 22:11:45.818399 systemd[1]: run-containerd-runc-k8s.io-2676b880f12192875ebb1757aa21a079bd89e95bc9eed8bb1f0d75239f9cc2d0-runc.EL7V4F.mount: Deactivated successfully.
Jul 14 22:11:46.418552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017415951.mount: Deactivated successfully.
Jul 14 22:11:47.233677 containerd[1574]: time="2025-07-14T22:11:47.233590929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:47.246034 containerd[1574]: time="2025-07-14T22:11:47.245943672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308"
Jul 14 22:11:47.247494 containerd[1574]: time="2025-07-14T22:11:47.247405806Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:47.257140 containerd[1574]: time="2025-07-14T22:11:47.257093441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:47.258820 containerd[1574]: time="2025-07-14T22:11:47.258771816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.210059922s"
Jul 14 22:11:47.258820 containerd[1574]: time="2025-07-14T22:11:47.258807744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\""
Jul 14 22:11:47.264662 containerd[1574]: time="2025-07-14T22:11:47.264615300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Jul 14 22:11:47.265838 containerd[1574]: time="2025-07-14T22:11:47.265812901Z" level=info msg="CreateContainer within sandbox \"5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 14 22:11:47.282082 containerd[1574]: time="2025-07-14T22:11:47.282022791Z" level=info msg="CreateContainer within sandbox \"5d4ba493c31eb49f0a445ac8751402e5ef1fe9b324f6bb6e7a96024bd70d9830\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"318c46ee819bc5790570b9391f96e04384ce27156201fdbe76fc7bb5f526dcaa\""
Jul 14 22:11:47.288675 containerd[1574]: time="2025-07-14T22:11:47.285721727Z" level=info msg="StartContainer for \"318c46ee819bc5790570b9391f96e04384ce27156201fdbe76fc7bb5f526dcaa\""
Jul 14 22:11:47.414213 containerd[1574]: time="2025-07-14T22:11:47.414089909Z" level=info msg="StartContainer for \"318c46ee819bc5790570b9391f96e04384ce27156201fdbe76fc7bb5f526dcaa\" returns successfully"
Jul 14 22:11:48.106539 kubelet[2717]: I0714 22:11:48.104559 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-4trbc" podStartSLOduration=30.896003477 podStartE2EDuration="46.104496541s" podCreationTimestamp="2025-07-14 22:11:02 +0000 UTC" firstStartedPulling="2025-07-14 22:11:32.055242417 +0000 UTC m=+51.373276663" lastFinishedPulling="2025-07-14 22:11:47.263735471 +0000 UTC m=+66.581769727" observedRunningTime="2025-07-14 22:11:48.102227185 +0000 UTC m=+67.420261431" watchObservedRunningTime="2025-07-14 22:11:48.104496541 +0000 UTC m=+67.422530787"
Jul 14 22:11:49.814537 containerd[1574]: time="2025-07-14T22:11:49.814435203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:49.834892 containerd[1574]: time="2025-07-14T22:11:49.834814150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190"
Jul 14 22:11:49.846328 containerd[1574]: time="2025-07-14T22:11:49.846257221Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:49.854845 containerd[1574]: time="2025-07-14T22:11:49.854796985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:49.855591 containerd[1574]: time="2025-07-14T22:11:49.855288407Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.590632971s"
Jul 14 22:11:49.855591 containerd[1574]: time="2025-07-14T22:11:49.855328523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\""
Jul 14 22:11:49.857415 containerd[1574]: time="2025-07-14T22:11:49.857379575Z" level=info msg="CreateContainer within sandbox \"3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 14 22:11:49.895129 containerd[1574]: time="2025-07-14T22:11:49.895074838Z" level=info msg="CreateContainer within sandbox \"3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"902681eb7ecfb23ab9c9056689eacc0f43bae3e145360d9ca9d3a88fe1bc3c4a\""
Jul 14 22:11:49.895840 containerd[1574]: time="2025-07-14T22:11:49.895786478Z" level=info msg="StartContainer for \"902681eb7ecfb23ab9c9056689eacc0f43bae3e145360d9ca9d3a88fe1bc3c4a\""
Jul 14 22:11:50.030436 containerd[1574]: time="2025-07-14T22:11:50.030310490Z" level=info msg="StartContainer for \"902681eb7ecfb23ab9c9056689eacc0f43bae3e145360d9ca9d3a88fe1bc3c4a\" returns successfully"
Jul 14 22:11:50.031882 containerd[1574]: time="2025-07-14T22:11:50.031687954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 14 22:11:52.852759 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:55278.service - OpenSSH per-connection server daemon (10.0.0.1:55278).
Jul 14 22:11:52.934541 containerd[1574]: time="2025-07-14T22:11:52.934137000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:52.939962 containerd[1574]: time="2025-07-14T22:11:52.938370576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 14 22:11:52.944329 containerd[1574]: time="2025-07-14T22:11:52.944238753Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:52.947648 containerd[1574]: time="2025-07-14T22:11:52.947588872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:11:52.950228 containerd[1574]: time="2025-07-14T22:11:52.949709317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.917991457s"
Jul 14 22:11:52.950228 containerd[1574]: time="2025-07-14T22:11:52.949772106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 14 22:11:52.954265 containerd[1574]: time="2025-07-14T22:11:52.954092116Z" level=info msg="CreateContainer within sandbox \"3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 14 22:11:52.979088 containerd[1574]: time="2025-07-14T22:11:52.978928417Z" level=info msg="CreateContainer within sandbox \"3bf14cc9fe55944cf8b380cbeca62559997ade3c05f1d6b4d6bdd733b8cad016\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"978cd2df782a99a243a4a0cea63f9b4875688f10b0cb1340e41d63238780dd8b\""
Jul 14 22:11:52.981561 containerd[1574]: time="2025-07-14T22:11:52.979934166Z" level=info msg="StartContainer for \"978cd2df782a99a243a4a0cea63f9b4875688f10b0cb1340e41d63238780dd8b\""
Jul 14 22:11:52.990822 sshd[6061]: Accepted publickey for core from 10.0.0.1 port 55278 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:11:52.995285 sshd[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:11:53.012036 systemd-logind[1545]: New session 8 of user core.
Jul 14 22:11:53.027873 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 14 22:11:53.112472 containerd[1574]: time="2025-07-14T22:11:53.112345175Z" level=info msg="StartContainer for \"978cd2df782a99a243a4a0cea63f9b4875688f10b0cb1340e41d63238780dd8b\" returns successfully"
Jul 14 22:11:53.475943 sshd[6061]: pam_unix(sshd:session): session closed for user core
Jul 14 22:11:53.480957 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:55278.service: Deactivated successfully.
Jul 14 22:11:53.483816 systemd[1]: session-8.scope: Deactivated successfully.
Jul 14 22:11:53.486549 systemd-logind[1545]: Session 8 logged out. Waiting for processes to exit.
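containerd stamps its own entries with an RFC3339Nano time="..." field, so a reported pull latency can be cross-checked against the log itself: for the node-driver-registrar image, the PullImage entry above is stamped 22:11:50.031687954Z and the Pulled entry 22:11:52.949709317Z, bracketing the reported 2.917991457s. A sketch of that check (both timestamps taken verbatim from the entries above):

```go
// Cross-check containerd's reported pull time from the entry timestamps.
// The difference slightly exceeds 2.917991457s because the Pulled entry
// is written a moment after the pull itself completes.
package main

import (
	"fmt"
	"time"
)

func main() {
	started, err := time.Parse(time.RFC3339Nano, "2025-07-14T22:11:50.031687954Z")
	if err != nil {
		panic(err)
	}
	pulled, err := time.Parse(time.RFC3339Nano, "2025-07-14T22:11:52.949709317Z")
	if err != nil {
		panic(err)
	}
	fmt.Println(pulled.Sub(started)) // 2.918021363s
}
```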
Jul 14 22:11:53.487685 systemd-logind[1545]: Removed session 8.
Jul 14 22:11:54.009379 kubelet[2717]: I0714 22:11:54.009328 2717 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 14 22:11:54.009379 kubelet[2717]: I0714 22:11:54.009389 2717 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 14 22:11:54.181039 kubelet[2717]: I0714 22:11:54.180958 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bhk87" podStartSLOduration=32.569992497 podStartE2EDuration="51.180922735s" podCreationTimestamp="2025-07-14 22:11:03 +0000 UTC" firstStartedPulling="2025-07-14 22:11:34.339854671 +0000 UTC m=+53.657888917" lastFinishedPulling="2025-07-14 22:11:52.950784909 +0000 UTC m=+72.268819155" observedRunningTime="2025-07-14 22:11:54.173259716 +0000 UTC m=+73.491293982" watchObservedRunningTime="2025-07-14 22:11:54.180922735 +0000 UTC m=+73.498956981"
Jul 14 22:11:58.494839 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:55292.service - OpenSSH per-connection server daemon (10.0.0.1:55292).
Jul 14 22:11:58.537312 sshd[6139]: Accepted publickey for core from 10.0.0.1 port 55292 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:11:58.539240 sshd[6139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:11:58.543864 systemd-logind[1545]: New session 9 of user core.
Jul 14 22:11:58.555820 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 14 22:11:58.701993 sshd[6139]: pam_unix(sshd:session): session closed for user core
Jul 14 22:11:58.706433 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:55292.service: Deactivated successfully.
Jul 14 22:11:58.709032 systemd-logind[1545]: Session 9 logged out. Waiting for processes to exit.
Jul 14 22:11:58.709104 systemd[1]: session-9.scope: Deactivated successfully.
Jul 14 22:11:58.710243 systemd-logind[1545]: Removed session 9.
Jul 14 22:11:59.766607 kubelet[2717]: E0714 22:11:59.766566 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:12:03.714810 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:49350.service - OpenSSH per-connection server daemon (10.0.0.1:49350).
Jul 14 22:12:03.754256 sshd[6178]: Accepted publickey for core from 10.0.0.1 port 49350 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:12:03.755973 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:12:03.760594 systemd-logind[1545]: New session 10 of user core.
Jul 14 22:12:03.768842 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 14 22:12:04.550975 sshd[6178]: pam_unix(sshd:session): session closed for user core
Jul 14 22:12:04.557348 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:49350.service: Deactivated successfully.
Jul 14 22:12:04.560429 systemd[1]: session-10.scope: Deactivated successfully.
Jul 14 22:12:04.561324 systemd-logind[1545]: Session 10 logged out. Waiting for processes to exit.
Jul 14 22:12:04.563020 systemd-logind[1545]: Removed session 10.
Jul 14 22:12:04.995617 systemd-resolved[1461]: Under memory pressure, flushing caches.
Jul 14 22:12:04.995675 systemd-resolved[1461]: Flushed all caches.
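The pod_startup_latency_tracker entries are internally consistent, and their semantics can be read off the numbers (an inference from this log, not a statement about kubelet internals): podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is the same interval minus the image-pull window. A check against the csi-node-driver-bhk87 entry above:

```go
// Verify the csi-node-driver-bhk87 figures: 51.180922735s end-to-end,
// 32.569992497s once the 18.61s pull window is excluded. All four
// timestamps are copied verbatim from the log entry.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-14 22:11:03 +0000 UTC")
	observed := mustParse("2025-07-14 22:11:54.180922735 +0000 UTC")
	pullStart := mustParse("2025-07-14 22:11:34.339854671 +0000 UTC")
	pullEnd := mustParse("2025-07-14 22:11:52.950784909 +0000 UTC")

	e2e := observed.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e, slo) // 51.180922735s 32.569992497s
}
```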
Jul 14 22:12:04.997526 systemd-journald[1157]: Under memory pressure, flushing caches.
Jul 14 22:12:06.832013 kubelet[2717]: I0714 22:12:06.831960 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 14 22:12:09.559735 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:42322.service - OpenSSH per-connection server daemon (10.0.0.1:42322).
Jul 14 22:12:09.604984 sshd[6202]: Accepted publickey for core from 10.0.0.1 port 42322 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:12:09.608210 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:12:09.613224 systemd-logind[1545]: New session 11 of user core.
Jul 14 22:12:09.623873 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 14 22:12:09.764717 kubelet[2717]: E0714 22:12:09.764674 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:12:09.800036 sshd[6202]: pam_unix(sshd:session): session closed for user core
Jul 14 22:12:09.806049 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:42322.service: Deactivated successfully.
Jul 14 22:12:09.809210 systemd-logind[1545]: Session 11 logged out. Waiting for processes to exit.
Jul 14 22:12:09.809283 systemd[1]: session-11.scope: Deactivated successfully.
Jul 14 22:12:09.810842 systemd-logind[1545]: Removed session 11.
Jul 14 22:12:13.765033 kubelet[2717]: E0714 22:12:13.764978 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:12:13.769524 kubelet[2717]: E0714 22:12:13.767880 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:12:14.805714 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:42324.service - OpenSSH per-connection server daemon (10.0.0.1:42324).
Jul 14 22:12:14.843641 sshd[6220]: Accepted publickey for core from 10.0.0.1 port 42324 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:12:14.845463 sshd[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:12:14.849498 systemd-logind[1545]: New session 12 of user core.
Jul 14 22:12:14.854798 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 14 22:12:15.024914 sshd[6220]: pam_unix(sshd:session): session closed for user core
Jul 14 22:12:15.034248 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:42330.service - OpenSSH per-connection server daemon (10.0.0.1:42330).
Jul 14 22:12:15.037335 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:42324.service: Deactivated successfully.
Jul 14 22:12:15.042976 systemd[1]: session-12.scope: Deactivated successfully.
Jul 14 22:12:15.048012 systemd-logind[1545]: Session 12 logged out. Waiting for processes to exit.
Jul 14 22:12:15.051557 systemd-logind[1545]: Removed session 12.
Jul 14 22:12:15.079379 sshd[6233]: Accepted publickey for core from 10.0.0.1 port 42330 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:12:15.081051 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:12:15.091574 systemd-logind[1545]: New session 13 of user core.
Jul 14 22:12:15.100888 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 14 22:12:15.266385 sshd[6233]: pam_unix(sshd:session): session closed for user core
Jul 14 22:12:15.284915 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:42340.service - OpenSSH per-connection server daemon (10.0.0.1:42340).
Jul 14 22:12:15.285684 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:42330.service: Deactivated successfully.
Jul 14 22:12:15.290017 systemd[1]: session-13.scope: Deactivated successfully.
Jul 14 22:12:15.292449 systemd-logind[1545]: Session 13 logged out. Waiting for processes to exit.
Jul 14 22:12:15.293609 systemd-logind[1545]: Removed session 13.
Jul 14 22:12:15.321079 sshd[6247]: Accepted publickey for core from 10.0.0.1 port 42340 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:12:15.322742 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:12:15.326834 systemd-logind[1545]: New session 14 of user core.
Jul 14 22:12:15.340775 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 14 22:12:15.451122 sshd[6247]: pam_unix(sshd:session): session closed for user core
Jul 14 22:12:15.455156 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:42340.service: Deactivated successfully.
Jul 14 22:12:15.458070 systemd[1]: session-14.scope: Deactivated successfully.
Jul 14 22:12:15.459083 systemd-logind[1545]: Session 14 logged out. Waiting for processes to exit.
Jul 14 22:12:15.460279 systemd-logind[1545]: Removed session 14.
Jul 14 22:12:20.458826 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:53166.service - OpenSSH per-connection server daemon (10.0.0.1:53166).
Jul 14 22:12:20.500977 sshd[6307]: Accepted publickey for core from 10.0.0.1 port 53166 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:12:20.503135 sshd[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:12:20.507490 systemd-logind[1545]: New session 15 of user core.
Jul 14 22:12:20.516740 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 14 22:12:20.641556 sshd[6307]: pam_unix(sshd:session): session closed for user core
Jul 14 22:12:20.645484 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:53166.service: Deactivated successfully.
Jul 14 22:12:20.648341 systemd-logind[1545]: Session 15 logged out. Waiting for processes to exit.
Jul 14 22:12:20.648479 systemd[1]: session-15.scope: Deactivated successfully.
Jul 14 22:12:20.649900 systemd-logind[1545]: Removed session 15.
Jul 14 22:12:25.654736 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:53170.service - OpenSSH per-connection server daemon (10.0.0.1:53170).
Jul 14 22:12:25.690111 sshd[6327]: Accepted publickey for core from 10.0.0.1 port 53170 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:12:25.692084 sshd[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:12:25.696741 systemd-logind[1545]: New session 16 of user core.
Jul 14 22:12:25.704907 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 14 22:12:25.817573 sshd[6327]: pam_unix(sshd:session): session closed for user core
Jul 14 22:12:25.821549 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:53170.service: Deactivated successfully.
Jul 14 22:12:25.824078 systemd-logind[1545]: Session 16 logged out. Waiting for processes to exit.
Jul 14 22:12:25.824175 systemd[1]: session-16.scope: Deactivated successfully.
Jul 14 22:12:25.825480 systemd-logind[1545]: Removed session 16.
Jul 14 22:12:27.766723 kubelet[2717]: E0714 22:12:27.766491 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:12:30.827843 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:41972.service - OpenSSH per-connection server daemon (10.0.0.1:41972).
Jul 14 22:12:30.886752 sshd[6364]: Accepted publickey for core from 10.0.0.1 port 41972 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:12:30.889491 sshd[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:12:30.893957 systemd-logind[1545]: New session 17 of user core.
Jul 14 22:12:30.902752 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 14 22:12:31.086536 sshd[6364]: pam_unix(sshd:session): session closed for user core
Jul 14 22:12:31.092964 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:41972.service: Deactivated successfully.
Jul 14 22:12:31.095160 systemd[1]: session-17.scope: Deactivated successfully.
Jul 14 22:12:31.097604 systemd-logind[1545]: Session 17 logged out. Waiting for processes to exit.
Jul 14 22:12:31.099625 systemd-logind[1545]: Removed session 17.
Jul 14 22:12:36.099843 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:41984.service - OpenSSH per-connection server daemon (10.0.0.1:41984).
Jul 14 22:12:36.135085 sshd[6380]: Accepted publickey for core from 10.0.0.1 port 41984 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:12:36.136969 sshd[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:12:36.141875 systemd-logind[1545]: New session 18 of user core.
Jul 14 22:12:36.152818 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 14 22:12:36.278200 sshd[6380]: pam_unix(sshd:session): session closed for user core
Jul 14 22:12:36.285795 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:41986.service - OpenSSH per-connection server daemon (10.0.0.1:41986).
Jul 14 22:12:36.286350 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:41984.service: Deactivated successfully.
Jul 14 22:12:36.293000 systemd[1]: session-18.scope: Deactivated successfully.
Jul 14 22:12:36.294863 systemd-logind[1545]: Session 18 logged out. Waiting for processes to exit.
Jul 14 22:12:36.297238 systemd-logind[1545]: Removed session 18.
Jul 14 22:12:36.322370 sshd[6392]: Accepted publickey for core from 10.0.0.1 port 41986 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:12:36.324326 sshd[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:12:36.329804 systemd-logind[1545]: New session 19 of user core.
Jul 14 22:12:36.338976 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 14 22:12:39.764532 kubelet[2717]: E0714 22:12:39.764426 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:12:45.764810 kubelet[2717]: E0714 22:12:45.764762 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:12:46.804384 sshd[6392]: pam_unix(sshd:session): session closed for user core
Jul 14 22:12:46.810803 systemd[1]: Started sshd@19-10.0.0.59:22-10.0.0.1:46838.service - OpenSSH per-connection server daemon (10.0.0.1:46838).
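The recurring "Nameserver limits exceeded" errors reflect the glibc resolver's three-nameserver limit (MAXNS): when the resolv.conf kubelet hands to pods lists more than three servers, only the first three are applied, which is exactly the "applied nameserver line" shown in these entries. A minimal sketch of that trimming (illustrative only, not kubelet's own code; assumes a readable host /etc/resolv.conf):

```go
// Keep the first three nameserver entries from resolv.conf, mirroring
// the trimming behavior reported in the kubelet dns.go errors above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded, applying: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```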
Jul 14 22:12:46.812319 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:41986.service: Deactivated successfully.
Jul 14 22:12:46.816316 systemd[1]: session-19.scope: Deactivated successfully.
Jul 14 22:12:46.817038 systemd-logind[1545]: Session 19 logged out. Waiting for processes to exit.
Jul 14 22:12:46.819816 systemd-logind[1545]: Removed session 19.
Jul 14 22:12:46.851231 sshd[6452]: Accepted publickey for core from 10.0.0.1 port 46838 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:12:46.853397 sshd[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:12:46.857837 systemd-logind[1545]: New session 20 of user core.
Jul 14 22:12:46.866796 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 14 22:13:09.292800 sshd[6452]: pam_unix(sshd:session): session closed for user core
Jul 14 22:13:09.300973 systemd[1]: Started sshd@20-10.0.0.59:22-10.0.0.1:47462.service - OpenSSH per-connection server daemon (10.0.0.1:47462).
Jul 14 22:13:09.301814 systemd[1]: sshd@19-10.0.0.59:22-10.0.0.1:46838.service: Deactivated successfully.
Jul 14 22:13:09.307038 systemd-logind[1545]: Session 20 logged out. Waiting for processes to exit.
Jul 14 22:13:09.308236 systemd[1]: session-20.scope: Deactivated successfully.
Jul 14 22:13:09.309264 systemd-logind[1545]: Removed session 20.
Jul 14 22:13:09.355562 sshd[6563]: Accepted publickey for core from 10.0.0.1 port 47462 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:13:09.357562 sshd[6563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:13:09.362026 systemd-logind[1545]: New session 21 of user core.
Jul 14 22:13:09.371832 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 14 22:13:09.855682 sshd[6563]: pam_unix(sshd:session): session closed for user core
Jul 14 22:13:09.868178 systemd[1]: Started sshd@21-10.0.0.59:22-10.0.0.1:47478.service - OpenSSH per-connection server daemon (10.0.0.1:47478).
Jul 14 22:13:09.868889 systemd[1]: sshd@20-10.0.0.59:22-10.0.0.1:47462.service: Deactivated successfully.
Jul 14 22:13:09.871235 systemd[1]: session-21.scope: Deactivated successfully.
Jul 14 22:13:09.874152 systemd-logind[1545]: Session 21 logged out. Waiting for processes to exit.
Jul 14 22:13:09.875562 systemd-logind[1545]: Removed session 21.
Jul 14 22:13:09.904225 sshd[6579]: Accepted publickey for core from 10.0.0.1 port 47478 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:13:09.906147 sshd[6579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:13:09.911535 systemd-logind[1545]: New session 22 of user core.
Jul 14 22:13:09.921895 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 14 22:13:10.044582 sshd[6579]: pam_unix(sshd:session): session closed for user core
Jul 14 22:13:10.048924 systemd[1]: sshd@21-10.0.0.59:22-10.0.0.1:47478.service: Deactivated successfully.
Jul 14 22:13:10.051608 systemd-logind[1545]: Session 22 logged out. Waiting for processes to exit.
Jul 14 22:13:10.051699 systemd[1]: session-22.scope: Deactivated successfully.
Jul 14 22:13:10.052662 systemd-logind[1545]: Removed session 22.
Jul 14 22:13:13.765037 kubelet[2717]: E0714 22:13:13.764990 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:13:15.055763 systemd[1]: Started sshd@22-10.0.0.59:22-10.0.0.1:47480.service - OpenSSH per-connection server daemon (10.0.0.1:47480).
Jul 14 22:13:15.090110 sshd[6597]: Accepted publickey for core from 10.0.0.1 port 47480 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:13:15.091816 sshd[6597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:13:15.096139 systemd-logind[1545]: New session 23 of user core.
Jul 14 22:13:15.106824 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 14 22:13:15.212340 sshd[6597]: pam_unix(sshd:session): session closed for user core
Jul 14 22:13:15.215966 systemd[1]: sshd@22-10.0.0.59:22-10.0.0.1:47480.service: Deactivated successfully.
Jul 14 22:13:15.218189 systemd-logind[1545]: Session 23 logged out. Waiting for processes to exit.
Jul 14 22:13:15.218268 systemd[1]: session-23.scope: Deactivated successfully.
Jul 14 22:13:15.219296 systemd-logind[1545]: Removed session 23.
Jul 14 22:13:20.231931 systemd[1]: Started sshd@23-10.0.0.59:22-10.0.0.1:41292.service - OpenSSH per-connection server daemon (10.0.0.1:41292).
Jul 14 22:13:20.268471 sshd[6657]: Accepted publickey for core from 10.0.0.1 port 41292 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:13:20.270160 sshd[6657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:13:20.274781 systemd-logind[1545]: New session 24 of user core.
Jul 14 22:13:20.288874 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 14 22:13:20.406305 sshd[6657]: pam_unix(sshd:session): session closed for user core
Jul 14 22:13:20.411691 systemd[1]: sshd@23-10.0.0.59:22-10.0.0.1:41292.service: Deactivated successfully.
Jul 14 22:13:20.414576 systemd-logind[1545]: Session 24 logged out. Waiting for processes to exit.
Jul 14 22:13:20.414638 systemd[1]: session-24.scope: Deactivated successfully.
Jul 14 22:13:20.416013 systemd-logind[1545]: Removed session 24.
Jul 14 22:13:24.764312 kubelet[2717]: E0714 22:13:24.764264 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:13:24.764854 kubelet[2717]: E0714 22:13:24.764264 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:13:25.417723 systemd[1]: Started sshd@24-10.0.0.59:22-10.0.0.1:41302.service - OpenSSH per-connection server daemon (10.0.0.1:41302).
Jul 14 22:13:25.451287 sshd[6673]: Accepted publickey for core from 10.0.0.1 port 41302 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:13:25.452842 sshd[6673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:13:25.456668 systemd-logind[1545]: New session 25 of user core.
Jul 14 22:13:25.464757 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 14 22:13:25.573110 sshd[6673]: pam_unix(sshd:session): session closed for user core
Jul 14 22:13:25.577495 systemd[1]: sshd@24-10.0.0.59:22-10.0.0.1:41302.service: Deactivated successfully.
Jul 14 22:13:25.580150 systemd-logind[1545]: Session 25 logged out. Waiting for processes to exit.
Jul 14 22:13:25.580272 systemd[1]: session-25.scope: Deactivated successfully.
Jul 14 22:13:25.581320 systemd-logind[1545]: Removed session 25.
Jul 14 22:13:29.764923 kubelet[2717]: E0714 22:13:29.764871 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:13:30.592807 systemd[1]: Started sshd@25-10.0.0.59:22-10.0.0.1:47990.service - OpenSSH per-connection server daemon (10.0.0.1:47990).
Jul 14 22:13:30.630185 sshd[6704]: Accepted publickey for core from 10.0.0.1 port 47990 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:13:30.630343 sshd[6704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:13:30.634585 systemd-logind[1545]: New session 26 of user core.
Jul 14 22:13:30.641783 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 14 22:13:30.838679 sshd[6704]: pam_unix(sshd:session): session closed for user core
Jul 14 22:13:30.843811 systemd[1]: sshd@25-10.0.0.59:22-10.0.0.1:47990.service: Deactivated successfully.
Jul 14 22:13:30.847488 systemd[1]: session-26.scope: Deactivated successfully.
Jul 14 22:13:30.848817 systemd-logind[1545]: Session 26 logged out. Waiting for processes to exit.
Jul 14 22:13:30.850078 systemd-logind[1545]: Removed session 26.