Jul 14 22:19:56.874491 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 20:23:49 -00 2025
Jul 14 22:19:56.874534 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:19:56.874547 kernel: BIOS-provided physical RAM map:
Jul 14 22:19:56.874566 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 14 22:19:56.874573 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 14 22:19:56.874580 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 14 22:19:56.874588 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 14 22:19:56.874596 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 14 22:19:56.874603 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 14 22:19:56.874612 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 14 22:19:56.874618 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 14 22:19:56.874629 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 14 22:19:56.874635 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 14 22:19:56.874641 kernel: NX (Execute Disable) protection: active
Jul 14 22:19:56.874649 kernel: APIC: Static calls initialized
Jul 14 22:19:56.874659 kernel: SMBIOS 2.8 present.
Jul 14 22:19:56.874666 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 14 22:19:56.874672 kernel: Hypervisor detected: KVM
Jul 14 22:19:56.874679 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 14 22:19:56.874686 kernel: kvm-clock: using sched offset of 2226653004 cycles
Jul 14 22:19:56.874693 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 14 22:19:56.874700 kernel: tsc: Detected 2794.748 MHz processor
Jul 14 22:19:56.874707 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 14 22:19:56.874714 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 14 22:19:56.874724 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 14 22:19:56.874731 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 14 22:19:56.874738 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 14 22:19:56.874745 kernel: Using GB pages for direct mapping
Jul 14 22:19:56.874751 kernel: ACPI: Early table checksum verification disabled
Jul 14 22:19:56.874758 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 14 22:19:56.874765 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:19:56.874772 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:19:56.874779 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:19:56.874788 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 14 22:19:56.874795 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:19:56.874802 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:19:56.874809 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:19:56.874816 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:19:56.874822 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 14 22:19:56.874829 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 14 22:19:56.874840 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 14 22:19:56.874850 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 14 22:19:56.874857 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 14 22:19:56.874864 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 14 22:19:56.874871 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 14 22:19:56.874878 kernel: No NUMA configuration found
Jul 14 22:19:56.874885 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 14 22:19:56.874895 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 14 22:19:56.874902 kernel: Zone ranges:
Jul 14 22:19:56.874909 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 14 22:19:56.874917 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 14 22:19:56.874924 kernel: Normal empty
Jul 14 22:19:56.874931 kernel: Movable zone start for each node
Jul 14 22:19:56.874938 kernel: Early memory node ranges
Jul 14 22:19:56.874945 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 14 22:19:56.874952 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 14 22:19:56.874959 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 14 22:19:56.874969 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 22:19:56.874976 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 14 22:19:56.874983 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 14 22:19:56.874990 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 14 22:19:56.874998 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 14 22:19:56.875005 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 14 22:19:56.875012 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 14 22:19:56.875019 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 14 22:19:56.875026 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 14 22:19:56.875036 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 14 22:19:56.875043 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 14 22:19:56.875050 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 14 22:19:56.875065 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 14 22:19:56.875072 kernel: TSC deadline timer available
Jul 14 22:19:56.875079 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 14 22:19:56.875086 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 14 22:19:56.875094 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 14 22:19:56.875101 kernel: kvm-guest: setup PV sched yield
Jul 14 22:19:56.875111 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 14 22:19:56.875118 kernel: Booting paravirtualized kernel on KVM
Jul 14 22:19:56.875125 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 14 22:19:56.875133 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 14 22:19:56.875140 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 14 22:19:56.875147 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 14 22:19:56.875154 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 14 22:19:56.875161 kernel: kvm-guest: PV spinlocks enabled
Jul 14 22:19:56.875168 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 14 22:19:56.875180 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:19:56.875187 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 22:19:56.875195 kernel: random: crng init done
Jul 14 22:19:56.875202 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 22:19:56.875209 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 22:19:56.875216 kernel: Fallback order for Node 0: 0
Jul 14 22:19:56.875223 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 14 22:19:56.875230 kernel: Policy zone: DMA32
Jul 14 22:19:56.875237 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 22:19:56.875248 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 136900K reserved, 0K cma-reserved)
Jul 14 22:19:56.875255 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 22:19:56.875262 kernel: ftrace: allocating 37970 entries in 149 pages
Jul 14 22:19:56.875269 kernel: ftrace: allocated 149 pages with 4 groups
Jul 14 22:19:56.875276 kernel: Dynamic Preempt: voluntary
Jul 14 22:19:56.875284 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 22:19:56.875291 kernel: rcu: RCU event tracing is enabled.
Jul 14 22:19:56.875299 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 22:19:56.875306 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 22:19:56.875316 kernel: Rude variant of Tasks RCU enabled.
Jul 14 22:19:56.875323 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 22:19:56.875330 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 22:19:56.875337 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 22:19:56.875345 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 14 22:19:56.875352 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 14 22:19:56.875359 kernel: Console: colour VGA+ 80x25
Jul 14 22:19:56.875366 kernel: printk: console [ttyS0] enabled
Jul 14 22:19:56.875373 kernel: ACPI: Core revision 20230628
Jul 14 22:19:56.875401 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 14 22:19:56.875416 kernel: APIC: Switch to symmetric I/O mode setup
Jul 14 22:19:56.875424 kernel: x2apic enabled
Jul 14 22:19:56.875431 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 14 22:19:56.875438 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 14 22:19:56.875445 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 14 22:19:56.875453 kernel: kvm-guest: setup PV IPIs
Jul 14 22:19:56.875475 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 14 22:19:56.875483 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 14 22:19:56.875491 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 14 22:19:56.875498 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 14 22:19:56.875506 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 14 22:19:56.875516 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 14 22:19:56.875523 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 14 22:19:56.875531 kernel: Spectre V2 : Mitigation: Retpolines
Jul 14 22:19:56.875539 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 14 22:19:56.875546 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 14 22:19:56.875567 kernel: RETBleed: Mitigation: untrained return thunk
Jul 14 22:19:56.875575 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 14 22:19:56.875583 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 14 22:19:56.875590 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 14 22:19:56.875599 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 14 22:19:56.875606 kernel: x86/bugs: return thunk changed
Jul 14 22:19:56.875614 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 14 22:19:56.875621 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 14 22:19:56.875631 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 14 22:19:56.875639 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 14 22:19:56.875646 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 14 22:19:56.875654 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 14 22:19:56.875662 kernel: Freeing SMP alternatives memory: 32K
Jul 14 22:19:56.875669 kernel: pid_max: default: 32768 minimum: 301
Jul 14 22:19:56.875677 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 22:19:56.875684 kernel: landlock: Up and running.
Jul 14 22:19:56.875692 kernel: SELinux: Initializing.
Jul 14 22:19:56.875703 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 22:19:56.875710 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 22:19:56.875718 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 14 22:19:56.875725 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:19:56.875733 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:19:56.875741 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:19:56.875748 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 14 22:19:56.875756 kernel: ... version: 0
Jul 14 22:19:56.875763 kernel: ... bit width: 48
Jul 14 22:19:56.875773 kernel: ... generic registers: 6
Jul 14 22:19:56.875781 kernel: ... value mask: 0000ffffffffffff
Jul 14 22:19:56.875788 kernel: ... max period: 00007fffffffffff
Jul 14 22:19:56.875796 kernel: ... fixed-purpose events: 0
Jul 14 22:19:56.875803 kernel: ... event mask: 000000000000003f
Jul 14 22:19:56.875811 kernel: signal: max sigframe size: 1776
Jul 14 22:19:56.875818 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 22:19:56.875826 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 22:19:56.875833 kernel: smp: Bringing up secondary CPUs ...
Jul 14 22:19:56.875844 kernel: smpboot: x86: Booting SMP configuration:
Jul 14 22:19:56.875851 kernel: .... node #0, CPUs: #1 #2 #3
Jul 14 22:19:56.875859 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 22:19:56.875866 kernel: smpboot: Max logical packages: 1
Jul 14 22:19:56.875874 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 14 22:19:56.875881 kernel: devtmpfs: initialized
Jul 14 22:19:56.875888 kernel: x86/mm: Memory block size: 128MB
Jul 14 22:19:56.875896 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 22:19:56.875904 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 22:19:56.875914 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 22:19:56.875922 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 22:19:56.875929 kernel: audit: initializing netlink subsys (disabled)
Jul 14 22:19:56.875937 kernel: audit: type=2000 audit(1752531597.122:1): state=initialized audit_enabled=0 res=1
Jul 14 22:19:56.875944 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 22:19:56.875951 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 14 22:19:56.875959 kernel: cpuidle: using governor menu
Jul 14 22:19:56.875966 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 22:19:56.875974 kernel: dca service started, version 1.12.1
Jul 14 22:19:56.875984 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 14 22:19:56.875992 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 14 22:19:56.875999 kernel: PCI: Using configuration type 1 for base access
Jul 14 22:19:56.876007 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 14 22:19:56.876014 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 22:19:56.876022 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 22:19:56.876030 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 22:19:56.876037 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 22:19:56.876045 kernel: ACPI: Added _OSI(Module Device)
Jul 14 22:19:56.876062 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 22:19:56.876069 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 22:19:56.876077 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 22:19:56.876084 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 14 22:19:56.876092 kernel: ACPI: Interpreter enabled
Jul 14 22:19:56.876100 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 14 22:19:56.876107 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 14 22:19:56.876115 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 14 22:19:56.876123 kernel: PCI: Using E820 reservations for host bridge windows
Jul 14 22:19:56.876133 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 14 22:19:56.876141 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 22:19:56.876319 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 22:19:56.876449 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 14 22:19:56.876603 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 14 22:19:56.876614 kernel: PCI host bridge to bus 0000:00
Jul 14 22:19:56.876737 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 14 22:19:56.876851 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 14 22:19:56.876962 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 14 22:19:56.877083 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 14 22:19:56.877195 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 14 22:19:56.877305 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 14 22:19:56.877415 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 22:19:56.877570 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 14 22:19:56.877716 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 14 22:19:56.877841 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 14 22:19:56.877964 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 14 22:19:56.878094 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 14 22:19:56.878217 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 14 22:19:56.878353 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 22:19:56.878481 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 14 22:19:56.878642 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 14 22:19:56.878763 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 14 22:19:56.878893 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 14 22:19:56.879014 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 14 22:19:56.879145 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 14 22:19:56.879265 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 14 22:19:56.879397 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 14 22:19:56.879519 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 14 22:19:56.879669 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 14 22:19:56.879790 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 14 22:19:56.879911 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 14 22:19:56.880041 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 14 22:19:56.880173 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 14 22:19:56.880306 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 14 22:19:56.880426 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 14 22:19:56.880637 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 14 22:19:56.880780 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 14 22:19:56.880899 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 14 22:19:56.880909 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 14 22:19:56.880917 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 14 22:19:56.880929 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 14 22:19:56.880937 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 14 22:19:56.880944 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 14 22:19:56.880952 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 14 22:19:56.880959 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 14 22:19:56.880967 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 14 22:19:56.880974 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 14 22:19:56.880982 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 14 22:19:56.880989 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 14 22:19:56.881000 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 14 22:19:56.881007 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 14 22:19:56.881015 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 14 22:19:56.881022 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 14 22:19:56.881030 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 14 22:19:56.881037 kernel: iommu: Default domain type: Translated
Jul 14 22:19:56.881045 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 14 22:19:56.881061 kernel: PCI: Using ACPI for IRQ routing
Jul 14 22:19:56.881069 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 14 22:19:56.881079 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 14 22:19:56.881087 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 14 22:19:56.881207 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 14 22:19:56.881325 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 14 22:19:56.881444 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 14 22:19:56.881454 kernel: vgaarb: loaded
Jul 14 22:19:56.881461 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 14 22:19:56.881469 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 14 22:19:56.881480 kernel: clocksource: Switched to clocksource kvm-clock
Jul 14 22:19:56.881488 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 22:19:56.881495 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 22:19:56.881503 kernel: pnp: PnP ACPI init
Jul 14 22:19:56.881654 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 14 22:19:56.881666 kernel: pnp: PnP ACPI: found 6 devices
Jul 14 22:19:56.881674 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 14 22:19:56.881681 kernel: NET: Registered PF_INET protocol family
Jul 14 22:19:56.881694 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 22:19:56.881701 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 22:19:56.881709 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 22:19:56.881717 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 22:19:56.881724 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 14 22:19:56.881732 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 22:19:56.881740 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 22:19:56.881747 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 22:19:56.881755 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 22:19:56.881765 kernel: NET: Registered PF_XDP protocol family
Jul 14 22:19:56.881877 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 14 22:19:56.881987 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 14 22:19:56.882105 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 14 22:19:56.882216 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 14 22:19:56.882324 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 14 22:19:56.882433 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 14 22:19:56.882443 kernel: PCI: CLS 0 bytes, default 64
Jul 14 22:19:56.882454 kernel: Initialise system trusted keyrings
Jul 14 22:19:56.882462 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 22:19:56.882470 kernel: Key type asymmetric registered
Jul 14 22:19:56.882477 kernel: Asymmetric key parser 'x509' registered
Jul 14 22:19:56.882485 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 14 22:19:56.882492 kernel: io scheduler mq-deadline registered
Jul 14 22:19:56.882500 kernel: io scheduler kyber registered
Jul 14 22:19:56.882508 kernel: io scheduler bfq registered
Jul 14 22:19:56.882516 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 14 22:19:56.882526 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 14 22:19:56.882536 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 14 22:19:56.882544 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 14 22:19:56.882602 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 22:19:56.882610 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 14 22:19:56.882617 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 14 22:19:56.882625 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 14 22:19:56.882632 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 14 22:19:56.882761 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 14 22:19:56.882776 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 14 22:19:56.882886 kernel: rtc_cmos 00:04: registered as rtc0
Jul 14 22:19:56.882998 kernel: rtc_cmos 00:04: setting system clock to 2025-07-14T22:19:56 UTC (1752531596)
Jul 14 22:19:56.883117 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 14 22:19:56.883128 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 14 22:19:56.883135 kernel: NET: Registered PF_INET6 protocol family
Jul 14 22:19:56.883143 kernel: Segment Routing with IPv6
Jul 14 22:19:56.883150 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 22:19:56.883161 kernel: NET: Registered PF_PACKET protocol family
Jul 14 22:19:56.883168 kernel: Key type dns_resolver registered
Jul 14 22:19:56.883176 kernel: IPI shorthand broadcast: enabled
Jul 14 22:19:56.883183 kernel: sched_clock: Marking stable (566003979, 123533784)->(735376587, -45838824)
Jul 14 22:19:56.883191 kernel: registered taskstats version 1
Jul 14 22:19:56.883198 kernel: Loading compiled-in X.509 certificates
Jul 14 22:19:56.883206 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: ff10e110ca3923b510cf0133f4e9f48dd636b870'
Jul 14 22:19:56.883213 kernel: Key type .fscrypt registered
Jul 14 22:19:56.883221 kernel: Key type fscrypt-provisioning registered
Jul 14 22:19:56.883231 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 22:19:56.883239 kernel: ima: Allocated hash algorithm: sha1
Jul 14 22:19:56.883246 kernel: ima: No architecture policies found
Jul 14 22:19:56.883253 kernel: clk: Disabling unused clocks
Jul 14 22:19:56.883261 kernel: Freeing unused kernel image (initmem) memory: 42876K
Jul 14 22:19:56.883268 kernel: Write protecting the kernel read-only data: 36864k
Jul 14 22:19:56.883276 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 14 22:19:56.883283 kernel: Run /init as init process
Jul 14 22:19:56.883291 kernel: with arguments:
Jul 14 22:19:56.883301 kernel: /init
Jul 14 22:19:56.883308 kernel: with environment:
Jul 14 22:19:56.883315 kernel: HOME=/
Jul 14 22:19:56.883323 kernel: TERM=linux
Jul 14 22:19:56.883330 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 22:19:56.883339 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 22:19:56.883349 systemd[1]: Detected virtualization kvm.
Jul 14 22:19:56.883357 systemd[1]: Detected architecture x86-64.
Jul 14 22:19:56.883368 systemd[1]: Running in initrd.
Jul 14 22:19:56.883376 systemd[1]: No hostname configured, using default hostname.
Jul 14 22:19:56.883384 systemd[1]: Hostname set to .
Jul 14 22:19:56.883392 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 22:19:56.883400 systemd[1]: Queued start job for default target initrd.target.
Jul 14 22:19:56.883408 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:19:56.883416 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:19:56.883425 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 14 22:19:56.883437 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 22:19:56.883458 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 14 22:19:56.883470 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 14 22:19:56.883480 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 14 22:19:56.883490 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 14 22:19:56.883499 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:19:56.883507 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:19:56.883515 systemd[1]: Reached target paths.target - Path Units.
Jul 14 22:19:56.883524 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 22:19:56.883532 systemd[1]: Reached target swap.target - Swaps.
Jul 14 22:19:56.883540 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 22:19:56.883562 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 22:19:56.883572 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 22:19:56.883584 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 22:19:56.883594 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 14 22:19:56.883603 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:19:56.883614 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:19:56.883623 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:19:56.883631 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 22:19:56.883639 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 14 22:19:56.883648 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 22:19:56.883656 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 14 22:19:56.883667 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 22:19:56.883676 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 22:19:56.883684 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 22:19:56.883692 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:19:56.883701 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 14 22:19:56.883709 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:19:56.883717 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 22:19:56.883746 systemd-journald[193]: Collecting audit messages is disabled.
Jul 14 22:19:56.883771 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 22:19:56.883779 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 22:19:56.883788 systemd-journald[193]: Journal started
Jul 14 22:19:56.883811 systemd-journald[193]: Runtime Journal (/run/log/journal/4f149542dffc4146809623a022e60e32) is 6.0M, max 48.4M, 42.3M free.
Jul 14 22:19:56.873406 systemd-modules-load[194]: Inserted module 'overlay'
Jul 14 22:19:56.911198 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 22:19:56.911212 kernel: Bridge firewalling registered
Jul 14 22:19:56.900955 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jul 14 22:19:56.912887 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 22:19:56.912758 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:19:56.922796 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 22:19:56.925546 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 22:19:56.928490 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 22:19:56.931219 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:19:56.933741 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:19:56.938499 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:19:56.941045 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:19:56.943586 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:19:56.947880 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 22:19:56.968945 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:19:56.979698 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 14 22:19:56.981209 systemd-resolved[220]: Positive Trust Anchors:
Jul 14 22:19:56.981219 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 22:19:56.981250 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 22:19:56.983728 systemd-resolved[220]: Defaulting to hostname 'linux'.
Jul 14 22:19:56.984821 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 22:19:56.990245 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:19:57.005729 dracut-cmdline[230]: dracut-dracut-053
Jul 14 22:19:57.008915 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:19:57.097590 kernel: SCSI subsystem initialized
Jul 14 22:19:57.106577 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 22:19:57.116583 kernel: iscsi: registered transport (tcp)
Jul 14 22:19:57.139589 kernel: iscsi: registered transport (qla4xxx)
Jul 14 22:19:57.139645 kernel: QLogic iSCSI HBA Driver
Jul 14 22:19:57.189863 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 14 22:19:57.207810 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 14 22:19:57.231835 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 22:19:57.231894 kernel: device-mapper: uevent: version 1.0.3
Jul 14 22:19:57.233012 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 14 22:19:57.274581 kernel: raid6: avx2x4 gen() 29926 MB/s
Jul 14 22:19:57.291582 kernel: raid6: avx2x2 gen() 30367 MB/s
Jul 14 22:19:57.308617 kernel: raid6: avx2x1 gen() 25688 MB/s
Jul 14 22:19:57.308652 kernel: raid6: using algorithm avx2x2 gen() 30367 MB/s
Jul 14 22:19:57.326633 kernel: raid6: .... xor() 19431 MB/s, rmw enabled
Jul 14 22:19:57.326655 kernel: raid6: using avx2x2 recovery algorithm
Jul 14 22:19:57.347590 kernel: xor: automatically using best checksumming function avx
Jul 14 22:19:57.501602 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 14 22:19:57.514335 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 22:19:57.529694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:19:57.542081 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jul 14 22:19:57.546703 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:19:57.557687 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 14 22:19:57.573584 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Jul 14 22:19:57.606624 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 22:19:57.612694 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 22:19:57.674279 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:19:57.684714 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 22:19:57.698505 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 22:19:57.701833 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 22:19:57.702088 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:19:57.702421 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 22:19:57.712787 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 14 22:19:57.716594 kernel: cryptd: max_cpu_qlen set to 1000
Jul 14 22:19:57.727287 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 14 22:19:57.728696 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 22:19:57.725176 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 22:19:57.731568 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 14 22:19:57.731599 kernel: AES CTR mode by8 optimization enabled
Jul 14 22:19:57.738290 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 22:19:57.738313 kernel: GPT:9289727 != 19775487
Jul 14 22:19:57.738323 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 22:19:57.738333 kernel: GPT:9289727 != 19775487
Jul 14 22:19:57.738342 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 22:19:57.738352 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:19:57.750664 kernel: libata version 3.00 loaded.
Jul 14 22:19:57.755239 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 22:19:57.755773 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:19:57.761082 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (462)
Jul 14 22:19:57.760040 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:19:57.764741 kernel: BTRFS: device fsid d23b6972-ad36-4741-bf36-4d440b923127 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (475)
Jul 14 22:19:57.764749 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 22:19:57.768464 kernel: ahci 0000:00:1f.2: version 3.0
Jul 14 22:19:57.768668 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 14 22:19:57.764813 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:19:57.772732 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 14 22:19:57.772894 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 14 22:19:57.768657 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:19:57.777590 kernel: scsi host0: ahci
Jul 14 22:19:57.777684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:19:57.782577 kernel: scsi host1: ahci
Jul 14 22:19:57.785987 kernel: scsi host2: ahci
Jul 14 22:19:57.786222 kernel: scsi host3: ahci
Jul 14 22:19:57.786920 kernel: scsi host4: ahci
Jul 14 22:19:57.787175 kernel: scsi host5: ahci
Jul 14 22:19:57.789140 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jul 14 22:19:57.789163 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jul 14 22:19:57.789174 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jul 14 22:19:57.790442 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jul 14 22:19:57.790750 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jul 14 22:19:57.790765 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jul 14 22:19:57.791761 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 14 22:19:57.826745 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:19:57.835900 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 14 22:19:57.840859 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 22:19:57.849415 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 14 22:19:57.849876 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 14 22:19:57.864784 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 14 22:19:57.866768 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:19:57.874050 disk-uuid[567]: Primary Header is updated.
Jul 14 22:19:57.874050 disk-uuid[567]: Secondary Entries is updated.
Jul 14 22:19:57.874050 disk-uuid[567]: Secondary Header is updated.
Jul 14 22:19:57.878579 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:19:57.882581 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:19:57.889739 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:19:58.096744 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 14 22:19:58.096819 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 14 22:19:58.096847 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 14 22:19:58.097580 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 14 22:19:58.098586 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 14 22:19:58.099577 kernel: ata3.00: applying bridge limits
Jul 14 22:19:58.099594 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 14 22:19:58.100583 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 14 22:19:58.101575 kernel: ata3.00: configured for UDMA/100
Jul 14 22:19:58.101598 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 14 22:19:58.148589 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 14 22:19:58.148807 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 14 22:19:58.161582 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 14 22:19:58.884585 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:19:58.884637 disk-uuid[569]: The operation has completed successfully.
Jul 14 22:19:58.917452 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 14 22:19:58.917587 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 14 22:19:58.935688 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 14 22:19:58.941055 sh[592]: Success
Jul 14 22:19:58.953577 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 14 22:19:58.985321 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 14 22:19:59.001079 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 14 22:19:59.004534 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 14 22:19:59.015762 kernel: BTRFS info (device dm-0): first mount of filesystem d23b6972-ad36-4741-bf36-4d440b923127
Jul 14 22:19:59.015813 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:19:59.015824 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 14 22:19:59.016795 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 14 22:19:59.018106 kernel: BTRFS info (device dm-0): using free space tree
Jul 14 22:19:59.022315 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 14 22:19:59.023771 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 14 22:19:59.034805 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 14 22:19:59.037410 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 14 22:19:59.046137 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:19:59.046165 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:19:59.046176 kernel: BTRFS info (device vda6): using free space tree
Jul 14 22:19:59.048576 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 22:19:59.057840 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 14 22:19:59.059572 kernel: BTRFS info (device vda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:19:59.068510 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 14 22:19:59.074925 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 14 22:19:59.128581 ignition[684]: Ignition 2.19.0
Jul 14 22:19:59.129357 ignition[684]: Stage: fetch-offline
Jul 14 22:19:59.129421 ignition[684]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:19:59.129440 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:19:59.129573 ignition[684]: parsed url from cmdline: ""
Jul 14 22:19:59.129577 ignition[684]: no config URL provided
Jul 14 22:19:59.129583 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Jul 14 22:19:59.129593 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Jul 14 22:19:59.129622 ignition[684]: op(1): [started] loading QEMU firmware config module
Jul 14 22:19:59.129628 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 14 22:19:59.138271 ignition[684]: op(1): [finished] loading QEMU firmware config module
Jul 14 22:19:59.155206 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 22:19:59.167697 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 22:19:59.179745 ignition[684]: parsing config with SHA512: 507438aa22651b15bbd921a2857c1778df5f73a8611819596b2b956ca7d217e35cf0a514796f53b334db2b9d57fead966f884288c49ee1c2e5caff678c5e7619
Jul 14 22:19:59.184067 unknown[684]: fetched base config from "system"
Jul 14 22:19:59.184081 unknown[684]: fetched user config from "qemu"
Jul 14 22:19:59.185313 ignition[684]: fetch-offline: fetch-offline passed
Jul 14 22:19:59.185421 ignition[684]: Ignition finished successfully
Jul 14 22:19:59.187601 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 22:19:59.190136 systemd-networkd[780]: lo: Link UP
Jul 14 22:19:59.190146 systemd-networkd[780]: lo: Gained carrier
Jul 14 22:19:59.191712 systemd-networkd[780]: Enumeration completed
Jul 14 22:19:59.191800 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 22:19:59.192125 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:19:59.192129 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 22:19:59.192926 systemd-networkd[780]: eth0: Link UP
Jul 14 22:19:59.192930 systemd-networkd[780]: eth0: Gained carrier
Jul 14 22:19:59.192937 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:19:59.193965 systemd[1]: Reached target network.target - Network.
Jul 14 22:19:59.195923 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 14 22:19:59.203692 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 14 22:19:59.213607 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 22:19:59.219716 ignition[783]: Ignition 2.19.0
Jul 14 22:19:59.219725 ignition[783]: Stage: kargs
Jul 14 22:19:59.219874 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:19:59.219885 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:19:59.220715 ignition[783]: kargs: kargs passed
Jul 14 22:19:59.220750 ignition[783]: Ignition finished successfully
Jul 14 22:19:59.227151 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 14 22:19:59.237697 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 14 22:19:59.251240 ignition[791]: Ignition 2.19.0
Jul 14 22:19:59.251256 ignition[791]: Stage: disks
Jul 14 22:19:59.251419 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:19:59.251430 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:19:59.255057 ignition[791]: disks: disks passed
Jul 14 22:19:59.255111 ignition[791]: Ignition finished successfully
Jul 14 22:19:59.258379 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 14 22:19:59.258991 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 14 22:19:59.260487 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 22:19:59.262854 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 22:19:59.263180 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 22:19:59.263492 systemd[1]: Reached target basic.target - Basic System.
Jul 14 22:19:59.278692 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 14 22:19:59.313538 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 14 22:19:59.450150 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 14 22:19:59.458653 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 14 22:19:59.549042 systemd-resolved[220]: Detected conflict on linux IN A 10.0.0.137
Jul 14 22:19:59.549061 systemd-resolved[220]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Jul 14 22:19:59.582580 kernel: EXT4-fs (vda9): mounted filesystem dda007d3-640b-4d11-976f-3b761ca7aabd r/w with ordered data mode. Quota mode: none.
Jul 14 22:19:59.582724 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 14 22:19:59.584880 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 14 22:19:59.595641 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 22:19:59.597834 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 14 22:19:59.598322 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 14 22:19:59.598356 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 22:19:59.607614 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (809)
Jul 14 22:19:59.598377 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 22:19:59.611313 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:19:59.611337 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:19:59.611348 kernel: BTRFS info (device vda6): using free space tree
Jul 14 22:19:59.613583 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 22:19:59.615478 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 22:19:59.622030 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 14 22:19:59.623413 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 14 22:19:59.661186 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Jul 14 22:19:59.666599 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Jul 14 22:19:59.671439 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Jul 14 22:19:59.676307 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 14 22:19:59.766728 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 14 22:19:59.776762 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 14 22:19:59.778583 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 14 22:19:59.785617 kernel: BTRFS info (device vda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:19:59.803063 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 14 22:19:59.934595 ignition[928]: INFO : Ignition 2.19.0 Jul 14 22:19:59.934595 ignition[928]: INFO : Stage: mount Jul 14 22:19:59.936277 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:19:59.936277 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:19:59.936277 ignition[928]: INFO : mount: mount passed Jul 14 22:19:59.936277 ignition[928]: INFO : Ignition finished successfully Jul 14 22:19:59.942035 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 14 22:19:59.952655 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 14 22:20:00.015168 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 14 22:20:00.024790 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 22:20:00.031574 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937) Jul 14 22:20:00.031601 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:20:00.034053 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:20:00.034077 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:20:00.036581 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 22:20:00.038192 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 14 22:20:00.062328 ignition[954]: INFO : Ignition 2.19.0 Jul 14 22:20:00.062328 ignition[954]: INFO : Stage: files Jul 14 22:20:00.064302 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:20:00.064302 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:20:00.064302 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Jul 14 22:20:00.064302 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 22:20:00.064302 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 22:20:00.071008 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 22:20:00.071008 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 22:20:00.071008 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 22:20:00.071008 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 14 22:20:00.071008 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 14 22:20:00.067390 unknown[954]: wrote ssh authorized keys file for user: core Jul 14 22:20:00.817772 systemd-networkd[780]: eth0: Gained IPv6LL Jul 14 22:20:10.126270 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 14 22:20:10.616258 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 14 22:20:10.616258 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 14 22:20:10.620116 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 22:20:10.620116 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file 
"/sysroot/home/core/nginx.yaml" Jul 14 22:20:10.620116 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 14 22:20:10.620116 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 22:20:10.620116 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 22:20:10.620116 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 22:20:10.620116 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 22:20:10.620116 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:20:10.620116 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:20:10.620116 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 14 22:20:10.620116 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 14 22:20:10.620116 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 14 22:20:10.620116 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 14 22:20:23.410577 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 14 22:20:24.375963 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 14 22:20:24.375963 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 14 22:20:24.379618 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 22:20:24.381755 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 22:20:24.381755 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 14 22:20:24.381755 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 14 22:20:24.386276 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:20:24.386276 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:20:24.386276 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 14 22:20:24.386276 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 22:20:24.409427 ignition[954]: INFO : files: op(f): op(10): [started] removing 
enablement symlink(s) for "coreos-metadata.service" Jul 14 22:20:24.414736 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:20:24.416478 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 22:20:24.416478 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 14 22:20:24.416478 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 14 22:20:24.416478 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:20:24.416478 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:20:24.416478 ignition[954]: INFO : files: files passed Jul 14 22:20:24.416478 ignition[954]: INFO : Ignition finished successfully Jul 14 22:20:24.417982 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 14 22:20:24.432703 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 14 22:20:24.435316 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 14 22:20:24.437061 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 22:20:24.437166 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 14 22:20:24.445470 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory Jul 14 22:20:24.448622 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:20:24.448622 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:20:24.451648 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:20:24.451348 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 22:20:24.453045 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 14 22:20:24.459692 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 14 22:20:24.484400 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 22:20:24.485437 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 14 22:20:24.488022 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 14 22:20:24.490220 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 14 22:20:24.492189 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 14 22:20:24.494328 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 14 22:20:24.513021 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 22:20:24.516645 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 14 22:20:24.529649 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:20:24.531935 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:20:24.534237 systemd[1]: Stopped target timers.target - Timer Units. 
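
The files stage above is driven by an Ignition config: it fetches the helm tarball, writes the manifests and /etc/flatcar/update.conf, links the kubernetes sysext image into /etc/extensions, enables prepare-helm.service and disables coreos-metadata.service. The log never prints the config itself, so the Butane sketch below is an illustrative reconstruction (placeholder SSH key, unit bodies and the smaller files omitted for brevity); Butane transpiles it to the Ignition JSON the machine would consume:

    cat > config.bu <<'EOF'
    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAAexamplekey core@example   # placeholder key
    storage:
      files:
        - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
          contents:
            source: https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true          # unit body omitted here
        - name: coreos-metadata.service
          enabled: false
    EOF
    butane --pretty --strict config.bu > config.ign
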
Jul 14 22:20:24.536002 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 22:20:24.536987 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 22:20:24.539512 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 14 22:20:24.541573 systemd[1]: Stopped target basic.target - Basic System. Jul 14 22:20:24.543334 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 14 22:20:24.545530 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 22:20:24.547832 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 14 22:20:24.549977 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 14 22:20:24.552021 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 22:20:24.554454 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 14 22:20:24.556483 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 14 22:20:24.558501 systemd[1]: Stopped target swap.target - Swaps. Jul 14 22:20:24.560111 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 22:20:24.561099 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 14 22:20:24.563271 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:20:24.565417 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:20:24.567723 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 14 22:20:24.568678 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:20:24.571242 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 22:20:24.572245 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 14 22:20:24.574473 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 22:20:24.575578 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 22:20:24.577847 systemd[1]: Stopped target paths.target - Path Units. Jul 14 22:20:24.579538 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 22:20:24.584619 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:20:24.587289 systemd[1]: Stopped target slices.target - Slice Units. Jul 14 22:20:24.589110 systemd[1]: Stopped target sockets.target - Socket Units. Jul 14 22:20:24.590941 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 22:20:24.591787 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 22:20:24.593716 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 22:20:24.594595 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 22:20:24.596592 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 22:20:24.597742 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 22:20:24.600193 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 22:20:24.601172 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 14 22:20:24.614703 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 14 22:20:24.616522 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 22:20:24.616651 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 14 22:20:24.620507 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 14 22:20:24.622220 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 22:20:24.623275 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:20:24.625594 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 22:20:24.626632 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 22:20:24.628735 ignition[1007]: INFO : Ignition 2.19.0 Jul 14 22:20:24.628735 ignition[1007]: INFO : Stage: umount Jul 14 22:20:24.628735 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:20:24.628735 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:20:24.628735 ignition[1007]: INFO : umount: umount passed Jul 14 22:20:24.628735 ignition[1007]: INFO : Ignition finished successfully Jul 14 22:20:24.633545 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 22:20:24.633698 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 14 22:20:24.638870 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 22:20:24.639001 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 14 22:20:24.639508 systemd[1]: Stopped target network.target - Network. Jul 14 22:20:24.641990 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 22:20:24.642044 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 14 22:20:24.642346 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 22:20:24.642388 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 14 22:20:24.642826 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 22:20:24.642873 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 14 22:20:24.643143 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 14 22:20:24.643186 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 14 22:20:24.643637 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 14 22:20:24.644048 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 14 22:20:24.649060 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 22:20:24.653698 systemd-networkd[780]: eth0: DHCPv6 lease lost Jul 14 22:20:24.654085 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 22:20:24.654215 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 14 22:20:24.656337 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 22:20:24.656486 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 14 22:20:24.658451 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 22:20:24.658512 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:20:24.665716 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 14 22:20:24.665917 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 22:20:24.665968 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 22:20:24.666305 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 22:20:24.666353 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:20:24.666774 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jul 14 22:20:24.666817 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 14 22:20:24.667088 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 14 22:20:24.667128 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:20:24.667498 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:20:24.668178 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 22:20:24.668301 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 14 22:20:24.678740 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 22:20:24.678861 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 14 22:20:24.683930 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 22:20:24.684105 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:20:24.686123 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 22:20:24.686205 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 14 22:20:24.688009 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 22:20:24.688055 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 22:20:24.688468 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 22:20:24.688513 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 14 22:20:24.689317 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 22:20:24.689365 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 14 22:20:24.690102 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 22:20:24.690159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 22:20:24.699272 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 14 22:20:24.699859 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 22:20:24.699913 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:20:24.700236 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 22:20:24.700279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:20:24.700907 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 22:20:24.701019 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 14 22:20:24.713270 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 22:20:24.713391 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 14 22:20:24.714030 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 14 22:20:24.722711 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 14 22:20:24.732753 systemd[1]: Switching root. Jul 14 22:20:24.766059 systemd-journald[193]: Journal stopped Jul 14 22:20:26.089140 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
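
"Switching root." is the initrd handing control to the real root filesystem: systemd pivots into /sysroot and re-executes itself, which is why journald receives SIGTERM from PID 1 here and starts again with a new PID below. The handoff corresponds roughly to this command, meaningful only from inside an initrd:

    systemctl switch-root /sysroot    # pivot into the prepared root, re-exec systemd
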
Jul 14 22:20:26.089223 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 22:20:26.089243 kernel: SELinux: policy capability open_perms=1 Jul 14 22:20:26.089254 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 22:20:26.089270 kernel: SELinux: policy capability always_check_network=0 Jul 14 22:20:26.089282 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 22:20:26.089297 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 22:20:26.089312 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 22:20:26.089324 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 22:20:26.089335 kernel: audit: type=1403 audit(1752531625.373:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 22:20:26.089347 systemd[1]: Successfully loaded SELinux policy in 39.360ms. Jul 14 22:20:26.089369 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.107ms. Jul 14 22:20:26.089384 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 22:20:26.089396 systemd[1]: Detected virtualization kvm. Jul 14 22:20:26.089408 systemd[1]: Detected architecture x86-64. Jul 14 22:20:26.089423 systemd[1]: Detected first boot. Jul 14 22:20:26.089435 systemd[1]: Initializing machine ID from VM UUID. Jul 14 22:20:26.089446 zram_generator::config[1051]: No configuration found. Jul 14 22:20:26.089459 systemd[1]: Populated /etc with preset unit settings. Jul 14 22:20:26.089472 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 14 22:20:26.089483 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 14 22:20:26.089496 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 22:20:26.089508 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 14 22:20:26.089523 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 14 22:20:26.089535 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 14 22:20:26.089569 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 14 22:20:26.089582 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 14 22:20:26.089595 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 14 22:20:26.089607 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 14 22:20:26.089619 systemd[1]: Created slice user.slice - User and Session Slice. Jul 14 22:20:26.089631 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:20:26.089643 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:20:26.089659 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 14 22:20:26.089672 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 14 22:20:26.089684 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
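
On this first boot the machine ID is derived from the VM UUID and only committed to disk later (systemd-machine-id-commit.service appears further down). Both can be inspected with standard systemd tooling:

    cat /etc/machine-id                # the ID journald keys its journals on
    systemd-machine-id-setup --print   # the ID systemd would initialize
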
Jul 14 22:20:26.089696 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 22:20:26.089708 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 14 22:20:26.089721 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:20:26.089733 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 14 22:20:26.089745 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 14 22:20:26.089763 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 14 22:20:26.089777 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 14 22:20:26.089790 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:20:26.089802 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 22:20:26.089814 systemd[1]: Reached target slices.target - Slice Units. Jul 14 22:20:26.089826 systemd[1]: Reached target swap.target - Swaps. Jul 14 22:20:26.089838 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 14 22:20:26.089850 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 14 22:20:26.089862 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:20:26.089876 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 22:20:26.089888 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 22:20:26.089900 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 14 22:20:26.089912 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 14 22:20:26.089924 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 14 22:20:26.089937 systemd[1]: Mounting media.mount - External Media Directory... Jul 14 22:20:26.089950 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:20:26.089962 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 14 22:20:26.089974 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 14 22:20:26.089989 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 14 22:20:26.090001 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 22:20:26.090014 systemd[1]: Reached target machines.target - Containers. Jul 14 22:20:26.090026 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 14 22:20:26.090038 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:20:26.090050 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 22:20:26.090062 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 14 22:20:26.090074 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:20:26.090088 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 22:20:26.090101 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:20:26.090113 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jul 14 22:20:26.090124 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:20:26.090137 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 22:20:26.090157 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 22:20:26.090170 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 14 22:20:26.090182 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 14 22:20:26.090197 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 22:20:26.090209 kernel: fuse: init (API version 7.39) Jul 14 22:20:26.090221 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 22:20:26.090233 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 22:20:26.090246 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 14 22:20:26.090257 kernel: loop: module loaded Jul 14 22:20:26.090269 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 14 22:20:26.090282 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 22:20:26.090294 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 22:20:26.090306 systemd[1]: Stopped verity-setup.service. Jul 14 22:20:26.090321 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:20:26.090335 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 14 22:20:26.090347 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 14 22:20:26.090359 systemd[1]: Mounted media.mount - External Media Directory. Jul 14 22:20:26.090371 kernel: ACPI: bus type drm_connector registered Jul 14 22:20:26.090385 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 14 22:20:26.090413 systemd-journald[1125]: Collecting audit messages is disabled. Jul 14 22:20:26.090436 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 14 22:20:26.090448 systemd-journald[1125]: Journal started Jul 14 22:20:26.090470 systemd-journald[1125]: Runtime Journal (/run/log/journal/4f149542dffc4146809623a022e60e32) is 6.0M, max 48.4M, 42.3M free. Jul 14 22:20:25.871932 systemd[1]: Queued start job for default target multi-user.target. Jul 14 22:20:25.891251 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 14 22:20:25.891740 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 14 22:20:26.091638 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 22:20:26.093076 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 14 22:20:26.094287 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 14 22:20:26.095722 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:20:26.097259 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 22:20:26.097442 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 14 22:20:26.098936 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:20:26.099107 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:20:26.100473 systemd[1]: modprobe@drm.service: Deactivated successfully. 
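
The modprobe@*.service pairs above and below come from a single template unit: the instance name after "@" is the kernel module to load, so one oneshot unit covers configfs, dm_mod, drm, efi_pstore, fuse and loop. For example:

    systemctl start modprobe@fuse.service    # roughly equivalent to: modprobe fuse
    systemctl status modprobe@loop.service
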
Jul 14 22:20:26.100798 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 22:20:26.102093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:20:26.102270 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:20:26.103732 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 22:20:26.103900 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 14 22:20:26.105219 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:20:26.105388 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:20:26.106710 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 22:20:26.108031 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 14 22:20:26.109479 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 14 22:20:26.124330 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 14 22:20:26.130633 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 14 22:20:26.132837 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 14 22:20:26.133918 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 22:20:26.133945 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 22:20:26.135911 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 14 22:20:26.138214 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 14 22:20:26.141250 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 14 22:20:26.142458 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:20:26.145366 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 14 22:20:26.147671 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 14 22:20:26.148940 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:20:26.151872 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 14 22:20:26.155653 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 22:20:26.156799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 22:20:26.161812 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 14 22:20:26.164302 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 14 22:20:26.170197 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 14 22:20:26.172060 systemd-journald[1125]: Time spent on flushing to /var/log/journal/4f149542dffc4146809623a022e60e32 is 21.227ms for 952 entries. Jul 14 22:20:26.172060 systemd-journald[1125]: System Journal (/var/log/journal/4f149542dffc4146809623a022e60e32) is 8.0M, max 195.6M, 187.6M free. Jul 14 22:20:26.203364 systemd-journald[1125]: Received client request to flush runtime journal. 
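
Journald prints its runtime (/run/log/journal) and persistent (/var/log/journal) budgets as it flushes. Usage can be inspected and trimmed by hand; the size below is an arbitrary example, and permanent caps belong in /etc/systemd/journald.conf (e.g. SystemMaxUse=):

    journalctl --disk-usage          # space used by active and archived journals
    journalctl --vacuum-size=200M    # trim archived journals down to ~200 MB
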
Jul 14 22:20:26.203397 kernel: loop0: detected capacity change from 0 to 224512 Jul 14 22:20:26.172046 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 14 22:20:26.176605 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 14 22:20:26.188089 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 14 22:20:26.190707 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 14 22:20:26.204707 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 14 22:20:26.208922 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 14 22:20:26.211417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:20:26.214185 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:20:26.223758 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 14 22:20:26.225505 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 14 22:20:26.229580 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 22:20:26.230722 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 22:20:26.239403 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 22:20:26.240132 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 14 22:20:26.244937 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 14 22:20:26.257541 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Jul 14 22:20:26.257577 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Jul 14 22:20:26.263544 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:20:26.265573 kernel: loop1: detected capacity change from 0 to 140768 Jul 14 22:20:26.297573 kernel: loop2: detected capacity change from 0 to 142488 Jul 14 22:20:26.335570 kernel: loop3: detected capacity change from 0 to 224512 Jul 14 22:20:26.345606 kernel: loop4: detected capacity change from 0 to 140768 Jul 14 22:20:26.355579 kernel: loop5: detected capacity change from 0 to 142488 Jul 14 22:20:26.364339 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 14 22:20:26.364929 (sd-merge)[1189]: Merged extensions into '/usr'. Jul 14 22:20:26.368676 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Jul 14 22:20:26.368771 systemd[1]: Reloading... Jul 14 22:20:26.421859 zram_generator::config[1213]: No configuration found. Jul 14 22:20:26.474875 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 22:20:26.535981 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:20:26.584443 systemd[1]: Reloading finished in 215 ms. Jul 14 22:20:26.620525 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 14 22:20:26.621985 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 14 22:20:26.635768 systemd[1]: Starting ensure-sysext.service... 
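
Here sd-merge overlays the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr; the kubernetes image is the .raw that Ignition linked into /etc/extensions earlier. The merge can be listed and redone at runtime:

    systemd-sysext list      # extension images and their merge state
    systemd-sysext refresh   # unmerge, rescan /etc/extensions, and re-merge
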
Jul 14 22:20:26.637677 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 22:20:26.645414 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)... Jul 14 22:20:26.645426 systemd[1]: Reloading... Jul 14 22:20:26.660763 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 22:20:26.661165 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 14 22:20:26.662185 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 22:20:26.662487 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Jul 14 22:20:26.662601 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Jul 14 22:20:26.665811 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 22:20:26.665824 systemd-tmpfiles[1253]: Skipping /boot Jul 14 22:20:26.679059 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 22:20:26.679075 systemd-tmpfiles[1253]: Skipping /boot Jul 14 22:20:26.690578 zram_generator::config[1280]: No configuration found. Jul 14 22:20:26.794751 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:20:26.843385 systemd[1]: Reloading finished in 197 ms. Jul 14 22:20:26.862774 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 14 22:20:26.875000 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:20:26.881310 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 22:20:26.883854 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 14 22:20:26.886069 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 14 22:20:26.889728 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 22:20:26.892966 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:20:26.896013 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 14 22:20:26.907809 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 14 22:20:26.912162 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:20:26.912335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:20:26.914236 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:20:26.921781 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:20:26.928629 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:20:26.929752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:20:26.929846 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
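
The "Duplicate line for path" warnings below mean two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first declaration and ignores the rest. Fragments are one declaration per line (the entry here is illustrative, not one of the actual Flatcar fragments):

    #  Type  Path              Mode  UID   GID              Age  Argument
    echo 'd  /var/log/journal  2755  root  systemd-journal  -    -' \
      > /etc/tmpfiles.d/99-example.conf
    systemd-tmpfiles --create /etc/tmpfiles.d/99-example.conf
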
Jul 14 22:20:26.930845 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 14 22:20:26.932667 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:20:26.932829 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:20:26.934424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:20:26.934632 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:20:26.940955 augenrules[1344]: No rules Jul 14 22:20:26.942463 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:20:26.943113 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Jul 14 22:20:26.945879 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 14 22:20:26.947789 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 22:20:26.949493 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:20:26.949793 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:20:26.951657 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 14 22:20:26.961693 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:20:26.962000 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:20:26.965330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:20:26.967845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:20:26.972319 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:20:26.974690 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:20:26.974796 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:20:26.975818 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 14 22:20:26.977960 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 14 22:20:26.979484 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:20:26.985922 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 14 22:20:26.987742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:20:26.988026 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:20:26.989851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:20:26.990376 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:20:26.992172 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:20:26.992874 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:20:27.012747 systemd[1]: Finished ensure-sysext.service. Jul 14 22:20:27.016930 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 14 22:20:27.017055 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 14 22:20:27.017202 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:20:27.024746 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:20:27.027287 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 22:20:27.030965 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:20:27.034025 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:20:27.036718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:20:27.039726 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 22:20:27.044970 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 14 22:20:27.046412 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:20:27.046439 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:20:27.047026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:20:27.047212 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:20:27.048892 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:20:27.049149 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 22:20:27.052190 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:20:27.052356 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:20:27.053721 systemd-resolved[1323]: Positive Trust Anchors: Jul 14 22:20:27.054055 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:20:27.054090 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 22:20:27.061578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1379) Jul 14 22:20:27.067450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:20:27.067788 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:20:27.073274 systemd-resolved[1323]: Defaulting to hostname 'linux'. Jul 14 22:20:27.081669 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 22:20:27.086312 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:20:27.088015 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
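
systemd-resolved seeds DNSSEC with the root zone's DS record as its positive trust anchor and treats the private and reverse zones listed above as negative anchors that are never validated. Runtime state goes through resolvectl; validation policy lives in /etc/systemd/resolved.conf (e.g. DNSSEC=allow-downgrade):

    resolvectl status              # per-link DNS servers, DNSSEC mode, domains
    resolvectl query get.helm.sh   # example lookup through the 127.0.0.53 stub
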
Jul 14 22:20:27.088095 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 22:20:27.102133 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 14 22:20:27.105595 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 14 22:20:27.114762 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 14 22:20:27.121604 kernel: ACPI: button: Power Button [PWRF] Jul 14 22:20:27.126570 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 14 22:20:27.134068 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 14 22:20:27.142226 systemd-networkd[1394]: lo: Link UP Jul 14 22:20:27.142239 systemd-networkd[1394]: lo: Gained carrier Jul 14 22:20:27.148012 systemd-networkd[1394]: Enumeration completed Jul 14 22:20:27.148103 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 22:20:27.149358 systemd[1]: Reached target network.target - Network. Jul 14 22:20:27.151477 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 22:20:27.151489 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 22:20:27.154981 systemd-networkd[1394]: eth0: Link UP Jul 14 22:20:27.154994 systemd-networkd[1394]: eth0: Gained carrier Jul 14 22:20:27.155006 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 22:20:27.158580 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 14 22:20:27.160706 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 14 22:20:27.162628 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 14 22:20:27.164718 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 14 22:20:27.165925 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 14 22:20:27.168830 systemd[1]: Reached target time-set.target - System Time Set. Jul 14 22:20:27.195904 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 22:20:27.198170 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. Jul 14 22:20:28.442983 systemd-resolved[1323]: Clock change detected. Flushing caches. Jul 14 22:20:28.443063 systemd-timesyncd[1397]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 22:20:28.443112 systemd-timesyncd[1397]: Initial clock synchronization to Mon 2025-07-14 22:20:28.442944 UTC. Jul 14 22:20:28.469520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
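
Below, eth0 matches the catch-all zz-default.network shipped with the OS and gets 10.0.0.137/16 over DHCP; timesyncd then reaches 10.0.0.1:123 and steps the clock, which is what forces resolved to flush its caches. A drop-in unit of the same shape would be (a sketch pinned to eth0, unlike the broader shipped match):

    cat > /etc/systemd/network/10-dhcp.network <<'EOF'
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    EOF
    networkctl reload    # ask systemd-networkd to pick up the new unit
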
Jul 14 22:20:28.477250 kernel: mousedev: PS/2 mouse device common for all mice Jul 14 22:20:28.486766 kernel: kvm_amd: TSC scaling supported Jul 14 22:20:28.486800 kernel: kvm_amd: Nested Virtualization enabled Jul 14 22:20:28.486813 kernel: kvm_amd: Nested Paging enabled Jul 14 22:20:28.486826 kernel: kvm_amd: LBR virtualization supported Jul 14 22:20:28.487423 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 14 22:20:28.487508 kernel: kvm_amd: Virtual GIF supported Jul 14 22:20:28.509248 kernel: EDAC MC: Ver: 3.0.0 Jul 14 22:20:28.539564 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 14 22:20:28.576368 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 14 22:20:28.577959 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:20:28.586390 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:20:28.615271 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 14 22:20:28.616653 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:20:28.617766 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 22:20:28.618929 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 14 22:20:28.620190 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 14 22:20:28.621595 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 14 22:20:28.622818 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 14 22:20:28.624042 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 14 22:20:28.625267 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 22:20:28.625295 systemd[1]: Reached target paths.target - Path Units. Jul 14 22:20:28.626178 systemd[1]: Reached target timers.target - Timer Units. Jul 14 22:20:28.627801 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 14 22:20:28.630462 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 14 22:20:28.638750 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 14 22:20:28.641067 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 14 22:20:28.642654 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 14 22:20:28.643820 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 22:20:28.644747 systemd[1]: Reached target basic.target - Basic System. Jul 14 22:20:28.645688 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 14 22:20:28.645715 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 14 22:20:28.646688 systemd[1]: Starting containerd.service - containerd container runtime... Jul 14 22:20:28.648747 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 14 22:20:28.651312 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:20:28.653322 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
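
docker.socket and sshd.socket above are socket activation: systemd owns the listening socket and only starts the matching service on the first connection. The active socket-to-service mappings can be listed directly:

    systemctl list-sockets    # LISTEN address, triggering UNIT, ACTIVATES service
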
Jul 14 22:20:28.655361 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 14 22:20:28.656456 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 14 22:20:28.660360 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 14 22:20:28.662115 jq[1431]: false Jul 14 22:20:28.664315 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 14 22:20:28.667644 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 14 22:20:28.672400 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 14 22:20:28.676380 extend-filesystems[1432]: Found loop3 Jul 14 22:20:28.677531 extend-filesystems[1432]: Found loop4 Jul 14 22:20:28.677531 extend-filesystems[1432]: Found loop5 Jul 14 22:20:28.677531 extend-filesystems[1432]: Found sr0 Jul 14 22:20:28.677531 extend-filesystems[1432]: Found vda Jul 14 22:20:28.677531 extend-filesystems[1432]: Found vda1 Jul 14 22:20:28.677531 extend-filesystems[1432]: Found vda2 Jul 14 22:20:28.677531 extend-filesystems[1432]: Found vda3 Jul 14 22:20:28.677531 extend-filesystems[1432]: Found usr Jul 14 22:20:28.677531 extend-filesystems[1432]: Found vda4 Jul 14 22:20:28.677531 extend-filesystems[1432]: Found vda6 Jul 14 22:20:28.677531 extend-filesystems[1432]: Found vda7 Jul 14 22:20:28.677531 extend-filesystems[1432]: Found vda9 Jul 14 22:20:28.677531 extend-filesystems[1432]: Checking size of /dev/vda9 Jul 14 22:20:28.677204 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 14 22:20:28.701255 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 22:20:28.701298 extend-filesystems[1432]: Resized partition /dev/vda9 Jul 14 22:20:28.688958 dbus-daemon[1430]: [system] SELinux support is enabled Jul 14 22:20:28.682878 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 22:20:28.705843 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) Jul 14 22:20:28.708091 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1374) Jul 14 22:20:28.683309 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 22:20:28.690469 systemd[1]: Starting update-engine.service - Update Engine... Jul 14 22:20:28.695326 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 14 22:20:28.698041 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 14 22:20:28.706056 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 14 22:20:28.710685 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 22:20:28.710895 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 14 22:20:28.711206 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 22:20:28.711422 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 14 22:20:28.716334 jq[1450]: true Jul 14 22:20:28.715593 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 22:20:28.715817 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
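
extend-filesystems walks the block devices above, finds the root partition /dev/vda9 smaller than the disk allows, and the kernel records the online ext4 grow from 553472 to 1864699 blocks (completion is logged just below). The manual equivalent, as a sketch (Flatcar's unit does this automatically, and growpart comes from cloud-utils, which is not necessarily installed):

    growpart /dev/vda 9    # extend partition 9 to the end of the disk
    resize2fs /dev/vda9    # grow the mounted ext4 filesystem online
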
Jul 14 22:20:28.725126 update_engine[1445]: I20250714 22:20:28.725054 1445 main.cc:92] Flatcar Update Engine starting Jul 14 22:20:28.726261 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 22:20:28.728456 update_engine[1445]: I20250714 22:20:28.728428 1445 update_check_scheduler.cc:74] Next update check in 7m39s Jul 14 22:20:28.740832 jq[1456]: true Jul 14 22:20:28.741333 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 14 22:20:28.750455 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button) Jul 14 22:20:28.750485 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 14 22:20:28.752936 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 22:20:28.752936 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 22:20:28.752936 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 22:20:28.758157 extend-filesystems[1432]: Resized filesystem in /dev/vda9 Jul 14 22:20:28.753572 systemd-logind[1440]: New seat seat0. Jul 14 22:20:28.756888 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 22:20:28.759113 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 22:20:28.760684 systemd[1]: Started systemd-logind.service - User Login Management. Jul 14 22:20:28.765441 systemd[1]: Started update-engine.service - Update Engine. Jul 14 22:20:28.768298 tar[1455]: linux-amd64/LICENSE Jul 14 22:20:28.768298 tar[1455]: linux-amd64/helm Jul 14 22:20:28.767694 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 22:20:28.767849 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 14 22:20:28.769208 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 22:20:28.769500 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 14 22:20:28.782313 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 14 22:20:28.801078 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Jul 14 22:20:28.806350 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 22:20:28.808322 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 22:20:28.813997 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 22:20:28.934390 containerd[1464]: time="2025-07-14T22:20:28.934266563Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 14 22:20:28.952551 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 22:20:28.956804 containerd[1464]: time="2025-07-14T22:20:28.956754754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:20:28.959790 containerd[1464]: time="2025-07-14T22:20:28.959740705Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:20:28.959790 containerd[1464]: time="2025-07-14T22:20:28.959770862Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 22:20:28.959854 containerd[1464]: time="2025-07-14T22:20:28.959796149Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 22:20:28.959989 containerd[1464]: time="2025-07-14T22:20:28.959963503Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 14 22:20:28.959989 containerd[1464]: time="2025-07-14T22:20:28.959984973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 14 22:20:28.960069 containerd[1464]: time="2025-07-14T22:20:28.960050206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:20:28.960069 containerd[1464]: time="2025-07-14T22:20:28.960066256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:20:28.960303 containerd[1464]: time="2025-07-14T22:20:28.960281650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:20:28.960303 containerd[1464]: time="2025-07-14T22:20:28.960299463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 22:20:28.960352 containerd[1464]: time="2025-07-14T22:20:28.960312648Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:20:28.960352 containerd[1464]: time="2025-07-14T22:20:28.960323238Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 22:20:28.960431 containerd[1464]: time="2025-07-14T22:20:28.960412615Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:20:28.960737 containerd[1464]: time="2025-07-14T22:20:28.960705915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:20:28.960883 containerd[1464]: time="2025-07-14T22:20:28.960861668Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:20:28.960883 containerd[1464]: time="2025-07-14T22:20:28.960879601Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 22:20:28.960996 containerd[1464]: time="2025-07-14T22:20:28.960977054Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 14 22:20:28.961128 containerd[1464]: time="2025-07-14T22:20:28.961107589Z" level=info msg="metadata content store policy set" policy=shared Jul 14 22:20:28.969167 containerd[1464]: time="2025-07-14T22:20:28.969145736Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 22:20:28.969263 containerd[1464]: time="2025-07-14T22:20:28.969249451Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 22:20:28.969367 containerd[1464]: time="2025-07-14T22:20:28.969352845Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 22:20:28.969420 containerd[1464]: time="2025-07-14T22:20:28.969408289Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 22:20:28.969467 containerd[1464]: time="2025-07-14T22:20:28.969456229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 22:20:28.969628 containerd[1464]: time="2025-07-14T22:20:28.969612161Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 22:20:28.969891 containerd[1464]: time="2025-07-14T22:20:28.969873742Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 22:20:28.970041 containerd[1464]: time="2025-07-14T22:20:28.970024925Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 22:20:28.970252 containerd[1464]: time="2025-07-14T22:20:28.970082413Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 14 22:20:28.970252 containerd[1464]: time="2025-07-14T22:20:28.970096980Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 14 22:20:28.970252 containerd[1464]: time="2025-07-14T22:20:28.970109634Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 22:20:28.970252 containerd[1464]: time="2025-07-14T22:20:28.970122598Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 22:20:28.970252 containerd[1464]: time="2025-07-14T22:20:28.970134371Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 22:20:28.970252 containerd[1464]: time="2025-07-14T22:20:28.970146433Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 22:20:28.970252 containerd[1464]: time="2025-07-14T22:20:28.970160480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 22:20:28.970252 containerd[1464]: time="2025-07-14T22:20:28.970175658Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 22:20:28.970252 containerd[1464]: time="2025-07-14T22:20:28.970191007Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 22:20:28.970252 containerd[1464]: time="2025-07-14T22:20:28.970202699Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970236342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970463588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970477204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970489968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970502521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970516968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970529973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970543077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970555420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970573755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970585377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970601056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970612457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970628768Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 14 22:20:28.970719 containerd[1464]: time="2025-07-14T22:20:28.970648475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.971002 containerd[1464]: time="2025-07-14T22:20:28.970659816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.971002 containerd[1464]: time="2025-07-14T22:20:28.970670306Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 22:20:28.971075 containerd[1464]: time="2025-07-14T22:20:28.971060658Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 22:20:28.971137 containerd[1464]: time="2025-07-14T22:20:28.971122624Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 22:20:28.971182 containerd[1464]: time="2025-07-14T22:20:28.971171336Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 22:20:28.971247 containerd[1464]: time="2025-07-14T22:20:28.971216110Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 22:20:28.972032 containerd[1464]: time="2025-07-14T22:20:28.971280090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.972032 containerd[1464]: time="2025-07-14T22:20:28.971295228Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 14 22:20:28.972032 containerd[1464]: time="2025-07-14T22:20:28.971313021Z" level=info msg="NRI interface is disabled by configuration." Jul 14 22:20:28.972032 containerd[1464]: time="2025-07-14T22:20:28.971330174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 14 22:20:28.972125 containerd[1464]: time="2025-07-14T22:20:28.971562870Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 22:20:28.972125 containerd[1464]: time="2025-07-14T22:20:28.971609467Z" level=info msg="Connect containerd service" Jul 14 22:20:28.972125 containerd[1464]: time="2025-07-14T22:20:28.971641267Z" level=info msg="using legacy CRI server" Jul 14 22:20:28.972125 containerd[1464]: time="2025-07-14T22:20:28.971647459Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 22:20:28.972125 containerd[1464]: time="2025-07-14T22:20:28.971751474Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 22:20:28.972723 containerd[1464]: time="2025-07-14T22:20:28.972702337Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:20:28.972901 containerd[1464]: time="2025-07-14T22:20:28.972871304Z" level=info msg="Start subscribing containerd event" Jul 14 22:20:28.972968 containerd[1464]: time="2025-07-14T22:20:28.972956153Z" level=info msg="Start recovering state" Jul 14 22:20:28.973058 containerd[1464]: time="2025-07-14T22:20:28.973045230Z" level=info msg="Start event monitor" Jul 14 22:20:28.973110 containerd[1464]: time="2025-07-14T22:20:28.973099983Z" level=info msg="Start snapshots syncer" Jul 14 22:20:28.973155 containerd[1464]: time="2025-07-14T22:20:28.973144677Z" level=info msg="Start cni network conf syncer for default" Jul 14 22:20:28.973206 containerd[1464]: time="2025-07-14T22:20:28.973194510Z" level=info msg="Start streaming server" Jul 14 22:20:28.974343 containerd[1464]: time="2025-07-14T22:20:28.974305364Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 22:20:28.974399 containerd[1464]: time="2025-07-14T22:20:28.974375315Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 22:20:28.974849 containerd[1464]: time="2025-07-14T22:20:28.974444124Z" level=info msg="containerd successfully booted in 0.041979s" Jul 14 22:20:28.974549 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 22:20:28.978740 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 22:20:28.988431 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 22:20:28.995488 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 22:20:28.995794 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 22:20:28.999304 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 22:20:29.013417 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 22:20:29.016185 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 22:20:29.018482 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 14 22:20:29.019884 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 22:20:29.164042 tar[1455]: linux-amd64/README.md Jul 14 22:20:29.178732 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 22:20:29.830391 systemd-networkd[1394]: eth0: Gained IPv6LL Jul 14 22:20:29.833569 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
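containerd comes up serving on /run/containerd/containerd.sock but logs "failed to load cni during init ... no network config found in /etc/cni/net.d", which is expected before any CNI configuration is installed. As a sketch only: the smallest conflist that would satisfy that loader, assuming the standard bridge and portmap plugins are present; the file name, network name, and 10.88.0.0/16 subnet are illustrative, not anything this host used.

    #!/usr/bin/env python3
    # Write a minimal CNI conflist of the kind whose absence produces the
    # "no network config found in /etc/cni/net.d" error above. All values
    # here are assumptions for illustration.
    import json, pathlib

    conf = {
        "cniVersion": "1.0.0",
        "name": "containerd-net",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.88.0.0/16"}]],
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2))
    print(f"wrote {path}")

The "cni network conf syncer" started just above watches that directory, so a file like this should be picked up without restarting containerd.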
Jul 14 22:20:29.835439 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 22:20:29.847433 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 22:20:29.849894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:20:29.852008 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 22:20:29.872538 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 22:20:29.872909 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 14 22:20:29.874653 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 22:20:29.876194 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 22:20:30.550752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:20:30.552332 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 14 22:20:30.554306 systemd[1]: Startup finished in 697ms (kernel) + 28.679s (initrd) + 3.981s (userspace) = 33.358s. Jul 14 22:20:30.581690 (kubelet)[1543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:20:30.964809 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 22:20:30.966119 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:58272.service - OpenSSH per-connection server daemon (10.0.0.1:58272). Jul 14 22:20:30.982819 kubelet[1543]: E0714 22:20:30.982784 1543 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:20:30.986764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:20:30.986999 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:20:31.017634 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 58272 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:20:31.019573 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:20:31.029908 systemd-logind[1440]: New session 1 of user core. Jul 14 22:20:31.031617 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 22:20:31.052589 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 22:20:31.063949 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 22:20:31.067333 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 22:20:31.074821 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:20:31.174515 systemd[1560]: Queued start job for default target default.target. Jul 14 22:20:31.188432 systemd[1560]: Created slice app.slice - User Application Slice. Jul 14 22:20:31.188457 systemd[1560]: Reached target paths.target - Paths. Jul 14 22:20:31.188470 systemd[1560]: Reached target timers.target - Timers. Jul 14 22:20:31.189984 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 22:20:31.202440 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 22:20:31.202562 systemd[1560]: Reached target sockets.target - Sockets. 
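The kubelet failure above repeats on every restart through the rest of this log: the unit starts, run.go exits because /var/lib/kubelet/config.yaml does not exist yet, and systemd schedules another attempt. That file is normally written by kubeadm during init/join; the sketch below is a minimal hand-written KubeletConfiguration of the kind that would get past config loading, with every field value an illustrative assumption rather than what kubeadm later generated on this host.

    #!/usr/bin/env python3
    # Drop in a minimal /var/lib/kubelet/config.yaml so the unit can at least
    # parse its config. Field values are illustrative assumptions.
    import pathlib

    CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    authentication:
      anonymous:
        enabled: false
    """

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(CONFIG)
    print(f"wrote {path}")

Nothing here should be read as this node's eventual configuration; it only shows which file the recurring error is about.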
Jul 14 22:20:31.202580 systemd[1560]: Reached target basic.target - Basic System. Jul 14 22:20:31.202615 systemd[1560]: Reached target default.target - Main User Target. Jul 14 22:20:31.202646 systemd[1560]: Startup finished in 121ms. Jul 14 22:20:31.203107 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 22:20:31.204531 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 22:20:31.267828 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:58280.service - OpenSSH per-connection server daemon (10.0.0.1:58280). Jul 14 22:20:31.298637 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 58280 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:20:31.300114 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:20:31.304020 systemd-logind[1440]: New session 2 of user core. Jul 14 22:20:31.313362 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 22:20:31.367278 sshd[1571]: pam_unix(sshd:session): session closed for user core Jul 14 22:20:31.380916 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:58280.service: Deactivated successfully. Jul 14 22:20:31.382461 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 22:20:31.383717 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit. Jul 14 22:20:31.384862 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:58296.service - OpenSSH per-connection server daemon (10.0.0.1:58296). Jul 14 22:20:31.385531 systemd-logind[1440]: Removed session 2. Jul 14 22:20:31.414927 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 58296 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:20:31.416276 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:20:31.420068 systemd-logind[1440]: New session 3 of user core. Jul 14 22:20:31.428350 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 22:20:31.478953 sshd[1578]: pam_unix(sshd:session): session closed for user core Jul 14 22:20:31.494932 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:58296.service: Deactivated successfully. Jul 14 22:20:31.496648 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 22:20:31.498362 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit. Jul 14 22:20:31.511486 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:58302.service - OpenSSH per-connection server daemon (10.0.0.1:58302). Jul 14 22:20:31.512378 systemd-logind[1440]: Removed session 3. Jul 14 22:20:31.537210 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 58302 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:20:31.538691 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:20:31.542639 systemd-logind[1440]: New session 4 of user core. Jul 14 22:20:31.552344 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 22:20:31.605747 sshd[1585]: pam_unix(sshd:session): session closed for user core Jul 14 22:20:31.620301 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:58302.service: Deactivated successfully. Jul 14 22:20:31.622109 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 22:20:31.623785 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit. Jul 14 22:20:31.636544 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:58306.service - OpenSSH per-connection server daemon (10.0.0.1:58306). Jul 14 22:20:31.637535 systemd-logind[1440]: Removed session 4. 
Jul 14 22:20:31.664838 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 58306 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:20:31.666535 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:20:31.670834 systemd-logind[1440]: New session 5 of user core. Jul 14 22:20:31.682342 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 14 22:20:31.739727 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 22:20:31.740066 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:20:31.759389 sudo[1595]: pam_unix(sudo:session): session closed for user root Jul 14 22:20:31.761265 sshd[1592]: pam_unix(sshd:session): session closed for user core Jul 14 22:20:31.768758 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:58306.service: Deactivated successfully. Jul 14 22:20:31.770191 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 22:20:31.771578 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. Jul 14 22:20:31.783463 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:58318.service - OpenSSH per-connection server daemon (10.0.0.1:58318). Jul 14 22:20:31.784402 systemd-logind[1440]: Removed session 5. Jul 14 22:20:31.813580 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 58318 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:20:31.815587 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:20:31.819842 systemd-logind[1440]: New session 6 of user core. Jul 14 22:20:31.839580 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 22:20:31.893241 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 22:20:31.893556 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:20:31.897092 sudo[1604]: pam_unix(sudo:session): session closed for user root Jul 14 22:20:31.902961 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 14 22:20:31.903283 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:20:31.918423 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 14 22:20:31.919955 auditctl[1607]: No rules Jul 14 22:20:31.921101 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 22:20:31.921361 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 14 22:20:31.923068 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 22:20:31.952280 augenrules[1625]: No rules Jul 14 22:20:31.954155 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 22:20:31.955370 sudo[1603]: pam_unix(sudo:session): session closed for user root Jul 14 22:20:31.957014 sshd[1600]: pam_unix(sshd:session): session closed for user core Jul 14 22:20:31.975699 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:58318.service: Deactivated successfully. Jul 14 22:20:31.977181 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 22:20:31.978422 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. Jul 14 22:20:31.988435 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:58326.service - OpenSSH per-connection server daemon (10.0.0.1:58326). Jul 14 22:20:31.989218 systemd-logind[1440]: Removed session 6. 
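Sessions 1 through 6 above all follow the same journal shape: a per-connection sshd@N-...service, an "Accepted publickey" line, pam_unix session open/close, and a logind "Removed session" line. A small parser for exactly those phrasings, handy when counting session churn in logs like this one; the regexes match only the formats shown above.

    #!/usr/bin/env python3
    # Summarize sshd session lifecycle lines of the exact shapes seen above.
    import re
    import sys

    ACCEPT = re.compile(r"Accepted publickey for (\w+) from ([\d.]+) port (\d+)")
    OPEN = re.compile(r"pam_unix\(sshd:session\): session opened for user (\w+)")
    CLOSE = re.compile(r"pam_unix\(sshd:session\): session closed for user (\w+)")

    def summarize(lines):
        opened = closed = 0
        for line in lines:
            if m := ACCEPT.search(line):
                print(f"accepted {m[1]} from {m[2]}:{m[3]}")
            elif OPEN.search(line):
                opened += 1
            elif CLOSE.search(line):
                closed += 1
        print(f"{opened} sessions opened, {closed} closed")

    if __name__ == "__main__":
        summarize(sys.stdin)

Feed it journalctl output; anything outside these three patterns is ignored.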
Jul 14 22:20:32.013957 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 58326 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:20:32.015390 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:20:32.019342 systemd-logind[1440]: New session 7 of user core. Jul 14 22:20:32.029354 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 22:20:32.082999 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 22:20:32.083346 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:20:32.351475 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 14 22:20:32.351625 (dockerd)[1654]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 22:20:32.632195 dockerd[1654]: time="2025-07-14T22:20:32.632054887Z" level=info msg="Starting up" Jul 14 22:20:33.879580 dockerd[1654]: time="2025-07-14T22:20:33.879527699Z" level=info msg="Loading containers: start." Jul 14 22:20:34.187257 kernel: Initializing XFRM netlink socket Jul 14 22:20:34.267088 systemd-networkd[1394]: docker0: Link UP Jul 14 22:20:34.297287 dockerd[1654]: time="2025-07-14T22:20:34.297244466Z" level=info msg="Loading containers: done." Jul 14 22:20:34.312463 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1515629106-merged.mount: Deactivated successfully. Jul 14 22:20:34.315360 dockerd[1654]: time="2025-07-14T22:20:34.315310413Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 22:20:34.315427 dockerd[1654]: time="2025-07-14T22:20:34.315414428Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 14 22:20:34.315556 dockerd[1654]: time="2025-07-14T22:20:34.315539743Z" level=info msg="Daemon has completed initialization" Jul 14 22:20:34.354974 dockerd[1654]: time="2025-07-14T22:20:34.354040700Z" level=info msg="API listen on /run/docker.sock" Jul 14 22:20:34.354688 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 14 22:20:41.231733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 22:20:41.241386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:20:41.419371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:20:41.424708 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:20:41.466786 kubelet[1808]: E0714 22:20:41.466716 1808 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:20:41.473555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:20:41.473798 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
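Once dockerd logs "API listen on /run/docker.sock" above, the daemon answers plain HTTP over that unix socket. A stdlib-only probe of GET /version against that socket, with no Docker SDK assumed; run it as a user with access to the socket.

    #!/usr/bin/env python3
    # Query the Docker Engine API over its unix socket using raw HTTP/1.0,
    # so the daemon closes the connection after one response.
    import json, socket

    def docker_version(sock_path: str = "/run/docker.sock") -> dict:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock_path)
        s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        data = b""
        while chunk := s.recv(4096):
            data += chunk
        s.close()
        _header, _, body = data.partition(b"\r\n\r\n")
        return json.loads(body)

    if __name__ == "__main__":
        print(docker_version()["Version"])

On this boot that would report the 26.1.0 version string the daemon logged at startup.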
Jul 14 22:20:44.672240 containerd[1464]: time="2025-07-14T22:20:44.672188997Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" Jul 14 22:20:51.481899 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 14 22:20:51.500477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:20:51.661189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:20:51.665605 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:20:51.704029 kubelet[1826]: E0714 22:20:51.703965 1826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:20:51.707929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:20:51.708135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:20:56.384456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2783855439.mount: Deactivated successfully. Jul 14 22:20:57.248245 containerd[1464]: time="2025-07-14T22:20:57.248176645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:20:57.248931 containerd[1464]: time="2025-07-14T22:20:57.248859416Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" Jul 14 22:20:57.251578 containerd[1464]: time="2025-07-14T22:20:57.251350949Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:20:57.254334 containerd[1464]: time="2025-07-14T22:20:57.254295262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:20:57.255317 containerd[1464]: time="2025-07-14T22:20:57.255249061Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 12.583018255s" Jul 14 22:20:57.255317 containerd[1464]: time="2025-07-14T22:20:57.255311217Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" Jul 14 22:20:57.255923 containerd[1464]: time="2025-07-14T22:20:57.255877560Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" Jul 14 22:20:58.508031 containerd[1464]: time="2025-07-14T22:20:58.507963295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:20:58.508939 containerd[1464]: time="2025-07-14T22:20:58.508886667Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes 
read=24779589" Jul 14 22:20:58.510385 containerd[1464]: time="2025-07-14T22:20:58.510349180Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:20:58.513911 containerd[1464]: time="2025-07-14T22:20:58.513862560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:20:58.515151 containerd[1464]: time="2025-07-14T22:20:58.515097156Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.259189971s" Jul 14 22:20:58.515151 containerd[1464]: time="2025-07-14T22:20:58.515147881Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" Jul 14 22:20:58.515731 containerd[1464]: time="2025-07-14T22:20:58.515694606Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" Jul 14 22:21:00.351894 containerd[1464]: time="2025-07-14T22:21:00.351821696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:00.352802 containerd[1464]: time="2025-07-14T22:21:00.352734759Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" Jul 14 22:21:00.354039 containerd[1464]: time="2025-07-14T22:21:00.354008187Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:00.358669 containerd[1464]: time="2025-07-14T22:21:00.358629005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:00.360200 containerd[1464]: time="2025-07-14T22:21:00.360148405Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.844413843s" Jul 14 22:21:00.360279 containerd[1464]: time="2025-07-14T22:21:00.360208658Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" Jul 14 22:21:00.360785 containerd[1464]: time="2025-07-14T22:21:00.360753990Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" Jul 14 22:21:01.250140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1387221955.mount: Deactivated successfully. Jul 14 22:21:01.731712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
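The pull entries above report both an image size and an elapsed time, which makes registry throughput a one-liner: the 12.58 s kube-apiserver pull ran at roughly 2.3 MB/s, while the later pulls ran an order of magnitude faster. Sizes and durations below are copied from the "Pulled image" messages.

    #!/usr/bin/env python3
    # Back-of-envelope pull throughput from the figures the log reports.
    pulls = {
        "kube-apiserver:v1.32.4": (28_679_679, 12.583018255),
        "kube-controller-manager:v1.32.4": (26_267_962, 1.259189971),
        "kube-scheduler:v1.32.4": (20_658_329, 1.844413843),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / 1e6:.1f} MB/s")

The slow first pull plausibly includes connection setup and cold caches; the log itself does not say why.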
Jul 14 22:21:01.741367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:21:02.370290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:21:02.375103 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:21:02.508461 kubelet[1911]: E0714 22:21:02.508408 1911 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:21:02.512603 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:21:02.512826 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:21:02.540102 containerd[1464]: time="2025-07-14T22:21:02.540043340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:02.543133 containerd[1464]: time="2025-07-14T22:21:02.542885024Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" Jul 14 22:21:02.544874 containerd[1464]: time="2025-07-14T22:21:02.544818386Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:02.547040 containerd[1464]: time="2025-07-14T22:21:02.547013041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:02.547582 containerd[1464]: time="2025-07-14T22:21:02.547535188Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.186746061s" Jul 14 22:21:02.547620 containerd[1464]: time="2025-07-14T22:21:02.547580425Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" Jul 14 22:21:02.548116 containerd[1464]: time="2025-07-14T22:21:02.548076022Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 22:21:03.170599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1629766726.mount: Deactivated successfully. 
Jul 14 22:21:04.499783 containerd[1464]: time="2025-07-14T22:21:04.499726848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:04.500567 containerd[1464]: time="2025-07-14T22:21:04.500516957Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 14 22:21:04.501669 containerd[1464]: time="2025-07-14T22:21:04.501640358Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:04.504590 containerd[1464]: time="2025-07-14T22:21:04.504557188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:04.505561 containerd[1464]: time="2025-07-14T22:21:04.505520560Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.957411335s" Jul 14 22:21:04.505607 containerd[1464]: time="2025-07-14T22:21:04.505566529Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 14 22:21:04.506365 containerd[1464]: time="2025-07-14T22:21:04.506332722Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 22:21:04.942205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927120397.mount: Deactivated successfully. 
Jul 14 22:21:04.948930 containerd[1464]: time="2025-07-14T22:21:04.948876176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:04.949773 containerd[1464]: time="2025-07-14T22:21:04.949740269Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 14 22:21:04.951186 containerd[1464]: time="2025-07-14T22:21:04.951158816Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:04.954058 containerd[1464]: time="2025-07-14T22:21:04.954004959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:04.955047 containerd[1464]: time="2025-07-14T22:21:04.954998641Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 448.634257ms" Jul 14 22:21:04.955047 containerd[1464]: time="2025-07-14T22:21:04.955033427Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 14 22:21:04.955569 containerd[1464]: time="2025-07-14T22:21:04.955535864Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 14 22:21:05.508319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3928525253.mount: Deactivated successfully. Jul 14 22:21:07.275648 containerd[1464]: time="2025-07-14T22:21:07.275573813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:07.276452 containerd[1464]: time="2025-07-14T22:21:07.276393071Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 14 22:21:07.277776 containerd[1464]: time="2025-07-14T22:21:07.277733088Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:07.281254 containerd[1464]: time="2025-07-14T22:21:07.281200808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:07.282569 containerd[1464]: time="2025-07-14T22:21:07.282530364Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.326961076s" Jul 14 22:21:07.282622 containerd[1464]: time="2025-07-14T22:21:07.282568226Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 14 22:21:12.731678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
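Four "Scheduled restart job" entries have now accumulated for kubelet.service. Their timestamps, copied from the log, sit a steady ~10 s apart, consistent with a fixed restart delay in the unit file (the unit settings themselves are not shown in this log).

    #!/usr/bin/env python3
    # Spacing of the kubelet restart attempts recorded in this log.
    from datetime import datetime

    stamps = [
        "22:20:41.231733",  # restart counter is at 1
        "22:20:51.481899",  # restart counter is at 2
        "22:21:01.731712",  # restart counter is at 3
        "22:21:12.731678",  # restart counter is at 4
    ]
    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    for a, b in zip(times, times[1:]):
        print(f"{(b - a).total_seconds():.2f}s between attempts")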
Jul 14 22:21:12.741389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:21:12.912258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:21:12.916844 (kubelet)[2047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:21:12.952560 kubelet[2047]: E0714 22:21:12.952511 2047 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:21:12.956420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:21:12.956614 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:21:14.236856 update_engine[1445]: I20250714 22:21:14.236732 1445 update_attempter.cc:509] Updating boot flags... Jul 14 22:21:14.925264 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2062) Jul 14 22:21:14.959765 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2065) Jul 14 22:21:19.961337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:21:19.977440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:21:20.003794 systemd[1]: Reloading requested from client PID 2090 ('systemctl') (unit session-7.scope)... Jul 14 22:21:20.003809 systemd[1]: Reloading... Jul 14 22:21:20.078546 zram_generator::config[2135]: No configuration found. Jul 14 22:21:20.741784 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:21:20.818503 systemd[1]: Reloading finished in 814 ms. Jul 14 22:21:20.873058 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 14 22:21:20.873167 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 14 22:21:20.873556 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:21:20.875275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:21:21.052087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:21:21.069541 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:21:21.105261 kubelet[2178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:21:21.105261 kubelet[2178]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 22:21:21.105261 kubelet[2178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 22:21:21.105596 kubelet[2178]: I0714 22:21:21.105336 2178 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:21:21.386816 kubelet[2178]: I0714 22:21:21.386685 2178 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 14 22:21:21.386816 kubelet[2178]: I0714 22:21:21.386723 2178 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:21:21.387013 kubelet[2178]: I0714 22:21:21.386990 2178 server.go:954] "Client rotation is on, will bootstrap in background" Jul 14 22:21:21.407682 kubelet[2178]: E0714 22:21:21.407636 2178 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:21:21.408298 kubelet[2178]: I0714 22:21:21.408271 2178 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:21:21.415427 kubelet[2178]: E0714 22:21:21.415396 2178 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:21:21.415427 kubelet[2178]: I0714 22:21:21.415426 2178 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:21:21.420492 kubelet[2178]: I0714 22:21:21.420443 2178 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:21:21.421602 kubelet[2178]: I0714 22:21:21.421553 2178 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:21:21.421780 kubelet[2178]: I0714 22:21:21.421588 2178 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 22:21:21.421780 kubelet[2178]: I0714 22:21:21.421772 2178 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:21:21.421780 kubelet[2178]: I0714 22:21:21.421781 2178 container_manager_linux.go:304] "Creating device plugin manager" Jul 14 22:21:21.421951 kubelet[2178]: I0714 22:21:21.421921 2178 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:21:21.424375 kubelet[2178]: I0714 22:21:21.424346 2178 kubelet.go:446] "Attempting to sync node with API server" Jul 14 22:21:21.424375 kubelet[2178]: I0714 22:21:21.424375 2178 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:21:21.424418 kubelet[2178]: I0714 22:21:21.424394 2178 kubelet.go:352] "Adding apiserver pod source" Jul 14 22:21:21.424418 kubelet[2178]: I0714 22:21:21.424404 2178 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:21:21.428077 kubelet[2178]: I0714 22:21:21.428055 2178 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:21:21.428453 kubelet[2178]: I0714 22:21:21.428392 2178 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:21:21.429937 kubelet[2178]: W0714 22:21:21.429575 2178 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
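Buried in the NodeConfig dump above are the hard eviction thresholds this kubelet will enforce. Restated readably below, with the signal names and values copied verbatim from that dump.

    #!/usr/bin/env python3
    # The HardEvictionThresholds from the NodeConfig dump, restated.
    thresholds = [
        ("memory.available", "100Mi"),
        ("nodefs.available", "10%"),
        ("nodefs.inodesFree", "5%"),
        ("imagefs.available", "15%"),
        ("imagefs.inodesFree", "5%"),
    ]
    for signal, limit in thresholds:
        print(f"hard-evict pods when {signal} < {limit}")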
Jul 14 22:21:21.430633 kubelet[2178]: W0714 22:21:21.430582 2178 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 14 22:21:21.430672 kubelet[2178]: E0714 22:21:21.430632 2178 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:21:21.431101 kubelet[2178]: W0714 22:21:21.431053 2178 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 14 22:21:21.431101 kubelet[2178]: E0714 22:21:21.431105 2178 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:21:21.431792 kubelet[2178]: I0714 22:21:21.431766 2178 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 22:21:21.431834 kubelet[2178]: I0714 22:21:21.431800 2178 server.go:1287] "Started kubelet" Jul 14 22:21:21.432713 kubelet[2178]: I0714 22:21:21.432620 2178 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:21:21.434134 kubelet[2178]: I0714 22:21:21.433117 2178 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:21:21.434134 kubelet[2178]: I0714 22:21:21.433179 2178 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:21:21.434134 kubelet[2178]: I0714 22:21:21.433207 2178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:21:21.434134 kubelet[2178]: I0714 22:21:21.434030 2178 server.go:479] "Adding debug handlers to kubelet server" Jul 14 22:21:21.435631 kubelet[2178]: I0714 22:21:21.434910 2178 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:21:21.435631 kubelet[2178]: E0714 22:21:21.434960 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:21.435631 kubelet[2178]: I0714 22:21:21.434985 2178 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 22:21:21.435631 kubelet[2178]: I0714 22:21:21.435131 2178 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 22:21:21.435631 kubelet[2178]: I0714 22:21:21.435182 2178 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:21:21.436517 kubelet[2178]: W0714 22:21:21.436182 2178 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 14 22:21:21.436517 kubelet[2178]: E0714 22:21:21.436261 2178 reflector.go:166] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:21:21.438163 kubelet[2178]: I0714 22:21:21.438135 2178 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:21:21.438358 kubelet[2178]: I0714 22:21:21.438334 2178 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:21:21.439180 kubelet[2178]: E0714 22:21:21.439141 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" Jul 14 22:21:21.439670 kubelet[2178]: E0714 22:21:21.436419 2178 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523e47695bfdf2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:21:21.43178085 +0000 UTC m=+0.358612709,LastTimestamp:2025-07-14 22:21:21.43178085 +0000 UTC m=+0.358612709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 22:21:21.440163 kubelet[2178]: E0714 22:21:21.440136 2178 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:21:21.440353 kubelet[2178]: I0714 22:21:21.440332 2178 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:21:21.454369 kubelet[2178]: I0714 22:21:21.454337 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:21:21.454862 kubelet[2178]: I0714 22:21:21.454825 2178 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 22:21:21.454862 kubelet[2178]: I0714 22:21:21.454845 2178 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 22:21:21.454914 kubelet[2178]: I0714 22:21:21.454870 2178 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:21:21.455953 kubelet[2178]: I0714 22:21:21.455908 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 22:21:21.455953 kubelet[2178]: I0714 22:21:21.455932 2178 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 14 22:21:21.455953 kubelet[2178]: I0714 22:21:21.455948 2178 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 14 22:21:21.455953 kubelet[2178]: I0714 22:21:21.455956 2178 kubelet.go:2382] "Starting kubelet main sync loop" Jul 14 22:21:21.456675 kubelet[2178]: E0714 22:21:21.455996 2178 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:21:21.456675 kubelet[2178]: W0714 22:21:21.456419 2178 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 14 22:21:21.456675 kubelet[2178]: E0714 22:21:21.456460 2178 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:21:21.535177 kubelet[2178]: E0714 22:21:21.535110 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:21.556623 kubelet[2178]: E0714 22:21:21.556571 2178 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:21:21.635806 kubelet[2178]: E0714 22:21:21.635748 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:21.640709 kubelet[2178]: E0714 22:21:21.640575 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms" Jul 14 22:21:21.735953 kubelet[2178]: E0714 22:21:21.735889 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:21.757337 kubelet[2178]: E0714 22:21:21.757294 2178 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:21:21.836777 kubelet[2178]: E0714 22:21:21.836689 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:21.886437 kubelet[2178]: I0714 22:21:21.886370 2178 policy_none.go:49] "None policy: Start" Jul 14 22:21:21.886437 kubelet[2178]: I0714 22:21:21.886413 2178 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 22:21:21.886437 kubelet[2178]: I0714 22:21:21.886427 2178 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:21:21.893654 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 14 22:21:21.904155 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 14 22:21:21.907449 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
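Every reflector failure above is the same symptom: the kubelet's informers try to LIST Services, Nodes, CSIDrivers and RuntimeClasses from https://10.0.0.137:6443, but on a bootstrapping control-plane node that apiserver is one of the static pods this very kubelet has yet to start, so each dial ends in "connection refused" and the reflectors back off and retry. A client-go sketch of the failing Service LIST, with the host and field selector taken from the log lines (TLS configuration omitted for brevity):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cs, err := kubernetes.NewForConfig(&rest.Config{Host: "https://10.0.0.137:6443"})
        if err != nil {
            panic(err)
        }
        // Same request the Service reflector issues; until kube-apiserver
        // is reachable this returns a "connection refused" dial error.
        _, err = cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(),
            metav1.ListOptions{FieldSelector: "spec.clusterIP!=None", Limit: 500})
        fmt.Println(err)
    }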
Jul 14 22:21:21.926468 kubelet[2178]: I0714 22:21:21.926417 2178 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:21:21.926685 kubelet[2178]: I0714 22:21:21.926660 2178 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:21:21.926830 kubelet[2178]: I0714 22:21:21.926677 2178 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:21:21.927312 kubelet[2178]: I0714 22:21:21.926940 2178 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:21:21.928096 kubelet[2178]: E0714 22:21:21.928063 2178 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 14 22:21:21.928143 kubelet[2178]: E0714 22:21:21.928111 2178 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 22:21:22.028789 kubelet[2178]: I0714 22:21:22.028736 2178 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:21:22.029264 kubelet[2178]: E0714 22:21:22.029187 2178 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jul 14 22:21:22.041162 kubelet[2178]: E0714 22:21:22.041107 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" Jul 14 22:21:22.165865 systemd[1]: Created slice kubepods-burstable-pod28aac17f0ef57936396276120eabb442.slice - libcontainer container kubepods-burstable-pod28aac17f0ef57936396276120eabb442.slice. Jul 14 22:21:22.174030 kubelet[2178]: E0714 22:21:22.173995 2178 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:22.175724 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. Jul 14 22:21:22.186349 kubelet[2178]: E0714 22:21:22.186323 2178 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:22.188919 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
Jul 14 22:21:22.190448 kubelet[2178]: E0714 22:21:22.190414 2178 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:22.230365 kubelet[2178]: I0714 22:21:22.230293 2178 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:21:22.230622 kubelet[2178]: E0714 22:21:22.230597 2178 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jul 14 22:21:22.240106 kubelet[2178]: I0714 22:21:22.240073 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28aac17f0ef57936396276120eabb442-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"28aac17f0ef57936396276120eabb442\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:22.240106 kubelet[2178]: I0714 22:21:22.240102 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28aac17f0ef57936396276120eabb442-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"28aac17f0ef57936396276120eabb442\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:22.240199 kubelet[2178]: I0714 22:21:22.240119 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:22.240199 kubelet[2178]: I0714 22:21:22.240136 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:22.240199 kubelet[2178]: I0714 22:21:22.240152 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:21:22.240199 kubelet[2178]: I0714 22:21:22.240166 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28aac17f0ef57936396276120eabb442-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"28aac17f0ef57936396276120eabb442\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:22.240199 kubelet[2178]: I0714 22:21:22.240183 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:22.240327 kubelet[2178]: I0714 22:21:22.240198 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:22.240327 kubelet[2178]: I0714 22:21:22.240214 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:22.474966 kubelet[2178]: E0714 22:21:22.474805 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:22.475573 containerd[1464]: time="2025-07-14T22:21:22.475518276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:28aac17f0ef57936396276120eabb442,Namespace:kube-system,Attempt:0,}" Jul 14 22:21:22.486917 kubelet[2178]: E0714 22:21:22.486876 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:22.487335 containerd[1464]: time="2025-07-14T22:21:22.487303887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" Jul 14 22:21:22.491560 kubelet[2178]: E0714 22:21:22.491541 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:22.491920 containerd[1464]: time="2025-07-14T22:21:22.491878998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" Jul 14 22:21:22.632633 kubelet[2178]: I0714 22:21:22.632591 2178 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:21:22.633006 kubelet[2178]: E0714 22:21:22.632957 2178 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jul 14 22:21:22.820703 kubelet[2178]: W0714 22:21:22.820613 2178 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 14 22:21:22.820878 kubelet[2178]: E0714 22:21:22.820690 2178 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:21:22.826532 kubelet[2178]: W0714 22:21:22.826480 2178 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 14 22:21:22.826592 kubelet[2178]: E0714 
22:21:22.826541 2178 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:21:22.841397 kubelet[2178]: E0714 22:21:22.841361 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="1.6s" Jul 14 22:21:22.841536 kubelet[2178]: W0714 22:21:22.841420 2178 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 14 22:21:22.841536 kubelet[2178]: E0714 22:21:22.841469 2178 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:21:22.931763 kubelet[2178]: W0714 22:21:22.931689 2178 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 14 22:21:22.931763 kubelet[2178]: E0714 22:21:22.931760 2178 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:21:22.982089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount43325418.mount: Deactivated successfully. 
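Note the retry interval on the "Failed to ensure lease exists" errors: 200ms at 22:21:21.439, 400ms at 22:21:21.640, 800ms at 22:21:22.041, and 1.6s here, doubling on each attempt. A sketch of that schedule with apimachinery's backoff helper (parameters inferred from the logged intervals, not read from kubelet source):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        b := wait.Backoff{Duration: 200 * time.Millisecond, Factor: 2, Steps: 4}
        for b.Steps > 0 {
            fmt.Println(b.Step()) // 200ms, 400ms, 800ms, 1.6s
        }
    }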
Jul 14 22:21:22.988469 containerd[1464]: time="2025-07-14T22:21:22.988435403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:21:22.989484 containerd[1464]: time="2025-07-14T22:21:22.989440824Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:21:22.990423 containerd[1464]: time="2025-07-14T22:21:22.990377064Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 22:21:22.991405 containerd[1464]: time="2025-07-14T22:21:22.991374439Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:21:22.992313 containerd[1464]: time="2025-07-14T22:21:22.992281845Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 14 22:21:22.993246 containerd[1464]: time="2025-07-14T22:21:22.993200311Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 22:21:22.994308 containerd[1464]: time="2025-07-14T22:21:22.994280082Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:21:22.998056 containerd[1464]: time="2025-07-14T22:21:22.997993614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:21:22.998979 containerd[1464]: time="2025-07-14T22:21:22.998926537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 511.555122ms" Jul 14 22:21:23.000679 containerd[1464]: time="2025-07-14T22:21:23.000640468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 525.025038ms" Jul 14 22:21:23.004847 containerd[1464]: time="2025-07-14T22:21:23.004792404Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.832402ms" Jul 14 22:21:23.132614 containerd[1464]: time="2025-07-14T22:21:23.132401870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:21:23.132614 containerd[1464]: time="2025-07-14T22:21:23.132502440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:21:23.132614 containerd[1464]: time="2025-07-14T22:21:23.132434943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:21:23.132614 containerd[1464]: time="2025-07-14T22:21:23.132495517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:21:23.132614 containerd[1464]: time="2025-07-14T22:21:23.132515725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:21:23.132614 containerd[1464]: time="2025-07-14T22:21:23.132529551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:21:23.133402 containerd[1464]: time="2025-07-14T22:21:23.132648086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:21:23.134821 containerd[1464]: time="2025-07-14T22:21:23.133634420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:21:23.136921 containerd[1464]: time="2025-07-14T22:21:23.136754305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:21:23.136997 containerd[1464]: time="2025-07-14T22:21:23.136893137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:21:23.136997 containerd[1464]: time="2025-07-14T22:21:23.136919597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:21:23.137183 containerd[1464]: time="2025-07-14T22:21:23.137126017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:21:23.159377 systemd[1]: Started cri-containerd-3103eb32118734375b34becb42ad77dd5e4ad3e8750d8b940fce170f0889e8e0.scope - libcontainer container 3103eb32118734375b34becb42ad77dd5e4ad3e8750d8b940fce170f0889e8e0. Jul 14 22:21:23.160785 systemd[1]: Started cri-containerd-9392c539a47dc9dcf8d11f73a4ab63157087aee621b8a84a0cf417ae0c78d205.scope - libcontainer container 9392c539a47dc9dcf8d11f73a4ab63157087aee621b8a84a0cf417ae0c78d205. Jul 14 22:21:23.162183 systemd[1]: Started cri-containerd-b8c72d76970151e76521f66660e8f1fdba021fea7c0369926e0c3aa79abc5644.scope - libcontainer container b8c72d76970151e76521f66660e8f1fdba021fea7c0369926e0c3aa79abc5644. 
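The ImageCreate/ImageUpdate events and the three roughly half-second "Pulled image" lines above all concern registry.k8s.io/pause:3.8, the sandbox image whose container holds each pod's shared namespaces; one sandbox is resolved per control-plane static pod. A hedged sketch of the equivalent pull through the containerd Go client, assuming the default socket path and the "k8s.io" namespace that CRI uses:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        // CRI-managed images live in the "k8s.io" containerd namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        fmt.Println(img.Name(), img.Target().Digest)
    }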
Jul 14 22:21:23.202795 containerd[1464]: time="2025-07-14T22:21:23.201443595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"3103eb32118734375b34becb42ad77dd5e4ad3e8750d8b940fce170f0889e8e0\"" Jul 14 22:21:23.203569 containerd[1464]: time="2025-07-14T22:21:23.203534405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:28aac17f0ef57936396276120eabb442,Namespace:kube-system,Attempt:0,} returns sandbox id \"9392c539a47dc9dcf8d11f73a4ab63157087aee621b8a84a0cf417ae0c78d205\"" Jul 14 22:21:23.204522 kubelet[2178]: E0714 22:21:23.204488 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:23.204813 kubelet[2178]: E0714 22:21:23.204786 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:23.206569 containerd[1464]: time="2025-07-14T22:21:23.206537941Z" level=info msg="CreateContainer within sandbox \"3103eb32118734375b34becb42ad77dd5e4ad3e8750d8b940fce170f0889e8e0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 22:21:23.206679 containerd[1464]: time="2025-07-14T22:21:23.206650965Z" level=info msg="CreateContainer within sandbox \"9392c539a47dc9dcf8d11f73a4ab63157087aee621b8a84a0cf417ae0c78d205\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 22:21:23.207938 containerd[1464]: time="2025-07-14T22:21:23.207911276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8c72d76970151e76521f66660e8f1fdba021fea7c0369926e0c3aa79abc5644\"" Jul 14 22:21:23.209470 kubelet[2178]: E0714 22:21:23.209449 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:23.210975 containerd[1464]: time="2025-07-14T22:21:23.210949397Z" level=info msg="CreateContainer within sandbox \"b8c72d76970151e76521f66660e8f1fdba021fea7c0369926e0c3aa79abc5644\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 22:21:23.231500 containerd[1464]: time="2025-07-14T22:21:23.231436080Z" level=info msg="CreateContainer within sandbox \"3103eb32118734375b34becb42ad77dd5e4ad3e8750d8b940fce170f0889e8e0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f56897413b0da15bbc84727943a4f3c34a85701e4f13401b9059bece8e1f9fba\"" Jul 14 22:21:23.231933 containerd[1464]: time="2025-07-14T22:21:23.231911458Z" level=info msg="StartContainer for \"f56897413b0da15bbc84727943a4f3c34a85701e4f13401b9059bece8e1f9fba\"" Jul 14 22:21:23.234266 containerd[1464]: time="2025-07-14T22:21:23.234133316Z" level=info msg="CreateContainer within sandbox \"9392c539a47dc9dcf8d11f73a4ab63157087aee621b8a84a0cf417ae0c78d205\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a4dcb2d07dc8a9e7e2d8e2e6a38b58d934b1809811c8c3ae9431da56a0fb54f0\"" Jul 14 22:21:23.234577 containerd[1464]: time="2025-07-14T22:21:23.234559291Z" level=info msg="StartContainer for \"a4dcb2d07dc8a9e7e2d8e2e6a38b58d934b1809811c8c3ae9431da56a0fb54f0\"" Jul 14 
22:21:23.235981 containerd[1464]: time="2025-07-14T22:21:23.235944709Z" level=info msg="CreateContainer within sandbox \"b8c72d76970151e76521f66660e8f1fdba021fea7c0369926e0c3aa79abc5644\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a9cb90aab6cee9143df72dea56ad8e0bbbe2af472d7b97169fc0d776e209fc57\"" Jul 14 22:21:23.236288 containerd[1464]: time="2025-07-14T22:21:23.236267580Z" level=info msg="StartContainer for \"a9cb90aab6cee9143df72dea56ad8e0bbbe2af472d7b97169fc0d776e209fc57\"" Jul 14 22:21:23.260386 systemd[1]: Started cri-containerd-f56897413b0da15bbc84727943a4f3c34a85701e4f13401b9059bece8e1f9fba.scope - libcontainer container f56897413b0da15bbc84727943a4f3c34a85701e4f13401b9059bece8e1f9fba. Jul 14 22:21:23.264017 systemd[1]: Started cri-containerd-a4dcb2d07dc8a9e7e2d8e2e6a38b58d934b1809811c8c3ae9431da56a0fb54f0.scope - libcontainer container a4dcb2d07dc8a9e7e2d8e2e6a38b58d934b1809811c8c3ae9431da56a0fb54f0. Jul 14 22:21:23.265176 systemd[1]: Started cri-containerd-a9cb90aab6cee9143df72dea56ad8e0bbbe2af472d7b97169fc0d776e209fc57.scope - libcontainer container a9cb90aab6cee9143df72dea56ad8e0bbbe2af472d7b97169fc0d776e209fc57. Jul 14 22:21:23.306945 containerd[1464]: time="2025-07-14T22:21:23.306386820Z" level=info msg="StartContainer for \"f56897413b0da15bbc84727943a4f3c34a85701e4f13401b9059bece8e1f9fba\" returns successfully" Jul 14 22:21:23.306945 containerd[1464]: time="2025-07-14T22:21:23.306548305Z" level=info msg="StartContainer for \"a4dcb2d07dc8a9e7e2d8e2e6a38b58d934b1809811c8c3ae9431da56a0fb54f0\" returns successfully" Jul 14 22:21:23.309922 containerd[1464]: time="2025-07-14T22:21:23.309772558Z" level=info msg="StartContainer for \"a9cb90aab6cee9143df72dea56ad8e0bbbe2af472d7b97169fc0d776e209fc57\" returns successfully" Jul 14 22:21:23.435720 kubelet[2178]: I0714 22:21:23.435612 2178 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:21:23.462950 kubelet[2178]: E0714 22:21:23.462769 2178 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:23.462950 kubelet[2178]: E0714 22:21:23.462880 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:23.466189 kubelet[2178]: E0714 22:21:23.466036 2178 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:23.466189 kubelet[2178]: E0714 22:21:23.466118 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:23.468662 kubelet[2178]: E0714 22:21:23.468509 2178 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:23.468662 kubelet[2178]: E0714 22:21:23.468617 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:24.169734 kubelet[2178]: I0714 22:21:24.169601 2178 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 22:21:24.169734 kubelet[2178]: E0714 22:21:24.169636 2178 kubelet_node_status.go:548] "Error updating node status, will 
retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 22:21:24.177162 kubelet[2178]: E0714 22:21:24.177132 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:24.277880 kubelet[2178]: E0714 22:21:24.277816 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:24.378657 kubelet[2178]: E0714 22:21:24.378618 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:24.470594 kubelet[2178]: E0714 22:21:24.470411 2178 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:24.470594 kubelet[2178]: E0714 22:21:24.470516 2178 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:24.470693 kubelet[2178]: E0714 22:21:24.470666 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:24.470953 kubelet[2178]: E0714 22:21:24.470917 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:24.471020 kubelet[2178]: E0714 22:21:24.470996 2178 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:24.471094 kubelet[2178]: E0714 22:21:24.471078 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:24.479442 kubelet[2178]: E0714 22:21:24.479410 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:24.580440 kubelet[2178]: E0714 22:21:24.580393 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:24.681281 kubelet[2178]: E0714 22:21:24.681220 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:24.782066 kubelet[2178]: E0714 22:21:24.782023 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:24.882582 kubelet[2178]: E0714 22:21:24.882546 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:24.983131 kubelet[2178]: E0714 22:21:24.983078 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:25.083927 kubelet[2178]: E0714 22:21:25.083757 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:25.184099 kubelet[2178]: E0714 22:21:25.184054 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:25.284809 kubelet[2178]: E0714 22:21:25.284761 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:25.385250 kubelet[2178]: E0714 
22:21:25.385130 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:25.471314 kubelet[2178]: E0714 22:21:25.471280 2178 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:21:25.471425 kubelet[2178]: E0714 22:21:25.471393 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:25.485474 kubelet[2178]: E0714 22:21:25.485448 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:25.586314 kubelet[2178]: E0714 22:21:25.586277 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:25.687089 kubelet[2178]: E0714 22:21:25.686982 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:25.787749 kubelet[2178]: E0714 22:21:25.787697 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:25.887822 kubelet[2178]: E0714 22:21:25.887790 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:21:25.939636 kubelet[2178]: I0714 22:21:25.939522 2178 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:25.945711 kubelet[2178]: I0714 22:21:25.945682 2178 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:25.948632 kubelet[2178]: I0714 22:21:25.948606 2178 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 22:21:25.991085 systemd[1]: Reloading requested from client PID 2459 ('systemctl') (unit session-7.scope)... Jul 14 22:21:25.991100 systemd[1]: Reloading... Jul 14 22:21:26.055262 zram_generator::config[2498]: No configuration found. Jul 14 22:21:26.164717 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:21:26.258841 systemd[1]: Reloading finished in 267 ms. Jul 14 22:21:26.298421 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:21:26.324535 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:21:26.324818 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:21:26.334429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:21:26.499294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:21:26.503971 (kubelet)[2543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:21:26.539396 kubelet[2543]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:21:26.539396 kubelet[2543]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 14 22:21:26.539396 kubelet[2543]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:21:26.539396 kubelet[2543]: I0714 22:21:26.539364 2543 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:21:26.546194 kubelet[2543]: I0714 22:21:26.546161 2543 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 14 22:21:26.546194 kubelet[2543]: I0714 22:21:26.546182 2543 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:21:26.546426 kubelet[2543]: I0714 22:21:26.546405 2543 server.go:954] "Client rotation is on, will bootstrap in background" Jul 14 22:21:26.547481 kubelet[2543]: I0714 22:21:26.547459 2543 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 14 22:21:26.549547 kubelet[2543]: I0714 22:21:26.549525 2543 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:21:26.552164 kubelet[2543]: E0714 22:21:26.552127 2543 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:21:26.552164 kubelet[2543]: I0714 22:21:26.552162 2543 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:21:26.557138 kubelet[2543]: I0714 22:21:26.557106 2543 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:21:26.557431 kubelet[2543]: I0714 22:21:26.557398 2543 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:21:26.557862 kubelet[2543]: I0714 22:21:26.557428 2543 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 22:21:26.557862 kubelet[2543]: I0714 22:21:26.557691 2543 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:21:26.557862 kubelet[2543]: I0714 22:21:26.557702 2543 container_manager_linux.go:304] "Creating device plugin manager" Jul 14 22:21:26.557862 kubelet[2543]: I0714 22:21:26.557757 2543 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:21:26.558177 kubelet[2543]: I0714 22:21:26.558147 2543 kubelet.go:446] "Attempting to sync node with API server" Jul 14 22:21:26.558210 kubelet[2543]: I0714 22:21:26.558190 2543 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:21:26.558210 kubelet[2543]: I0714 22:21:26.558210 2543 kubelet.go:352] "Adding apiserver pod source" Jul 14 22:21:26.558280 kubelet[2543]: I0714 22:21:26.558239 2543 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:21:26.559411 kubelet[2543]: I0714 22:21:26.559389 2543 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:21:26.559724 kubelet[2543]: I0714 22:21:26.559700 2543 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:21:26.560127 kubelet[2543]: I0714 22:21:26.560087 2543 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 22:21:26.560127 kubelet[2543]: I0714 22:21:26.560117 2543 server.go:1287] "Started kubelet" Jul 14 22:21:26.562066 kubelet[2543]: I0714 22:21:26.562043 2543 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:21:26.569536 kubelet[2543]: I0714 22:21:26.569473 2543 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Jul 14 22:21:26.569710 kubelet[2543]: E0714 22:21:26.569616 2543 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:21:26.570479 kubelet[2543]: I0714 22:21:26.570462 2543 server.go:479] "Adding debug handlers to kubelet server" Jul 14 22:21:26.571490 kubelet[2543]: I0714 22:21:26.571175 2543 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:21:26.571490 kubelet[2543]: I0714 22:21:26.571377 2543 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:21:26.571595 kubelet[2543]: I0714 22:21:26.571578 2543 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:21:26.572581 kubelet[2543]: I0714 22:21:26.571866 2543 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 22:21:26.572581 kubelet[2543]: I0714 22:21:26.571936 2543 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 22:21:26.572700 kubelet[2543]: I0714 22:21:26.572669 2543 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:21:26.572870 kubelet[2543]: I0714 22:21:26.572838 2543 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:21:26.573723 kubelet[2543]: I0714 22:21:26.573690 2543 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:21:26.573723 kubelet[2543]: I0714 22:21:26.573710 2543 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:21:26.580405 kubelet[2543]: I0714 22:21:26.580357 2543 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:21:26.582762 kubelet[2543]: I0714 22:21:26.582732 2543 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 22:21:26.582762 kubelet[2543]: I0714 22:21:26.582761 2543 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 14 22:21:26.582960 kubelet[2543]: I0714 22:21:26.582797 2543 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 14 22:21:26.582960 kubelet[2543]: I0714 22:21:26.582805 2543 kubelet.go:2382] "Starting kubelet main sync loop" Jul 14 22:21:26.582960 kubelet[2543]: E0714 22:21:26.582855 2543 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:21:26.603593 kubelet[2543]: I0714 22:21:26.603559 2543 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 22:21:26.603593 kubelet[2543]: I0714 22:21:26.603578 2543 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 22:21:26.603593 kubelet[2543]: I0714 22:21:26.603596 2543 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:21:26.603741 kubelet[2543]: I0714 22:21:26.603723 2543 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 22:21:26.603776 kubelet[2543]: I0714 22:21:26.603738 2543 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 22:21:26.603776 kubelet[2543]: I0714 22:21:26.603756 2543 policy_none.go:49] "None policy: Start" Jul 14 22:21:26.603776 kubelet[2543]: I0714 22:21:26.603774 2543 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 22:21:26.603846 kubelet[2543]: I0714 22:21:26.603784 2543 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:21:26.603885 kubelet[2543]: I0714 22:21:26.603872 2543 state_mem.go:75] "Updated machine memory state" Jul 14 22:21:26.607697 kubelet[2543]: I0714 22:21:26.607607 2543 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:21:26.607832 kubelet[2543]: I0714 22:21:26.607775 2543 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:21:26.607832 kubelet[2543]: I0714 22:21:26.607786 2543 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:21:26.608495 kubelet[2543]: I0714 22:21:26.608330 2543 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:21:26.608821 kubelet[2543]: E0714 22:21:26.608780 2543 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 14 22:21:26.684154 kubelet[2543]: I0714 22:21:26.684095 2543 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 22:21:26.684154 kubelet[2543]: I0714 22:21:26.684147 2543 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:26.684297 kubelet[2543]: I0714 22:21:26.684168 2543 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:26.713640 kubelet[2543]: I0714 22:21:26.713614 2543 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:21:26.734828 kubelet[2543]: E0714 22:21:26.734667 2543 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:26.734828 kubelet[2543]: E0714 22:21:26.734678 2543 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 22:21:26.734828 kubelet[2543]: E0714 22:21:26.734791 2543 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:26.736995 kubelet[2543]: I0714 22:21:26.736974 2543 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 14 22:21:26.737066 kubelet[2543]: I0714 22:21:26.737038 2543 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 22:21:26.773625 kubelet[2543]: I0714 22:21:26.773589 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28aac17f0ef57936396276120eabb442-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"28aac17f0ef57936396276120eabb442\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:26.773625 kubelet[2543]: I0714 22:21:26.773618 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:26.773809 kubelet[2543]: I0714 22:21:26.773637 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28aac17f0ef57936396276120eabb442-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"28aac17f0ef57936396276120eabb442\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:26.773809 kubelet[2543]: I0714 22:21:26.773652 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28aac17f0ef57936396276120eabb442-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"28aac17f0ef57936396276120eabb442\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:26.773809 kubelet[2543]: I0714 22:21:26.773669 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:26.773809 kubelet[2543]: I0714 22:21:26.773684 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:26.773809 kubelet[2543]: I0714 22:21:26.773699 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:26.773924 kubelet[2543]: I0714 22:21:26.773717 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:21:26.773924 kubelet[2543]: I0714 22:21:26.773732 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:21:27.035417 kubelet[2543]: E0714 22:21:27.035271 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:27.035417 kubelet[2543]: E0714 22:21:27.035307 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:27.035417 kubelet[2543]: E0714 22:21:27.035361 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:27.558780 kubelet[2543]: I0714 22:21:27.558717 2543 apiserver.go:52] "Watching apiserver" Jul 14 22:21:27.572372 kubelet[2543]: I0714 22:21:27.572327 2543 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 22:21:27.594021 kubelet[2543]: E0714 22:21:27.593984 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:27.594185 kubelet[2543]: I0714 22:21:27.594163 2543 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:27.594482 kubelet[2543]: I0714 22:21:27.594451 2543 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 22:21:27.600812 kubelet[2543]: E0714 22:21:27.600773 2543 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 22:21:27.601185 kubelet[2543]: E0714 22:21:27.600948 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:27.601372 kubelet[2543]: E0714 22:21:27.601308 2543 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 22:21:27.601555 kubelet[2543]: E0714 22:21:27.601529 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:27.609889 kubelet[2543]: I0714 22:21:27.609812 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.6097959299999998 podStartE2EDuration="2.60979593s" podCreationTimestamp="2025-07-14 22:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:21:27.609653462 +0000 UTC m=+1.101674345" watchObservedRunningTime="2025-07-14 22:21:27.60979593 +0000 UTC m=+1.101816803" Jul 14 22:21:27.616249 kubelet[2543]: I0714 22:21:27.616072 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.616057874 podStartE2EDuration="2.616057874s" podCreationTimestamp="2025-07-14 22:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:21:27.615911146 +0000 UTC m=+1.107932009" watchObservedRunningTime="2025-07-14 22:21:27.616057874 +0000 UTC m=+1.108078747" Jul 14 22:21:27.629403 kubelet[2543]: I0714 22:21:27.629335 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.629312267 podStartE2EDuration="2.629312267s" podCreationTimestamp="2025-07-14 22:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:21:27.623409742 +0000 UTC m=+1.115430616" watchObservedRunningTime="2025-07-14 22:21:27.629312267 +0000 UTC m=+1.121333140" Jul 14 22:21:28.595431 kubelet[2543]: E0714 22:21:28.595394 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:28.595864 kubelet[2543]: E0714 22:21:28.595446 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:29.102668 kubelet[2543]: E0714 22:21:29.102628 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:29.596696 kubelet[2543]: E0714 22:21:29.596648 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:31.758184 kubelet[2543]: I0714 22:21:31.758150 2543 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 22:21:31.758716 containerd[1464]: time="2025-07-14T22:21:31.758572297Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
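The recurring dns.go:153 warnings mean the host resolv.conf lists more nameservers than the kubelet will hand to pods; only the first few survive, and the log shows exactly three (1.1.1.1 1.0.0.1 8.8.8.8). A toy version of that truncation (the limit of 3 is inferred from the warning text; the fourth entry below is hypothetical):

    package main

    import (
        "fmt"
        "strings"
    )

    // applyNameserverLimit keeps the first `limit` resolvers, as the
    // kubelet does when building a pod's resolv.conf.
    func applyNameserverLimit(nameservers []string, limit int) []string {
        if len(nameservers) <= limit {
            return nameservers
        }
        return nameservers[:limit]
    }

    func main() {
        hostNS := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"} // 9.9.9.9 is made up
        fmt.Println(strings.Join(applyNameserverLimit(hostNS, 3), " "))
        // prints: 1.1.1.1 1.0.0.1 8.8.8.8 (the "applied nameserver line" above)
    }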
Jul 14 22:21:31.758957 kubelet[2543]: I0714 22:21:31.758800 2543 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 14 22:21:32.772451 systemd[1]: Created slice kubepods-besteffort-pode2dc328c_4306_4f4e_aab6_69ec3061d034.slice - libcontainer container kubepods-besteffort-pode2dc328c_4306_4f4e_aab6_69ec3061d034.slice.
Jul 14 22:21:32.811715 kubelet[2543]: I0714 22:21:32.811678 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e2dc328c-4306-4f4e-aab6-69ec3061d034-kube-proxy\") pod \"kube-proxy-hc5tw\" (UID: \"e2dc328c-4306-4f4e-aab6-69ec3061d034\") " pod="kube-system/kube-proxy-hc5tw"
Jul 14 22:21:32.811998 kubelet[2543]: I0714 22:21:32.811718 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2dc328c-4306-4f4e-aab6-69ec3061d034-xtables-lock\") pod \"kube-proxy-hc5tw\" (UID: \"e2dc328c-4306-4f4e-aab6-69ec3061d034\") " pod="kube-system/kube-proxy-hc5tw"
Jul 14 22:21:32.811998 kubelet[2543]: I0714 22:21:32.811735 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2dc328c-4306-4f4e-aab6-69ec3061d034-lib-modules\") pod \"kube-proxy-hc5tw\" (UID: \"e2dc328c-4306-4f4e-aab6-69ec3061d034\") " pod="kube-system/kube-proxy-hc5tw"
Jul 14 22:21:32.811998 kubelet[2543]: I0714 22:21:32.811751 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkj48\" (UniqueName: \"kubernetes.io/projected/e2dc328c-4306-4f4e-aab6-69ec3061d034-kube-api-access-mkj48\") pod \"kube-proxy-hc5tw\" (UID: \"e2dc328c-4306-4f4e-aab6-69ec3061d034\") " pod="kube-system/kube-proxy-hc5tw"
Jul 14 22:21:33.385371 kubelet[2543]: E0714 22:21:33.385314 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:33.403365 containerd[1464]: time="2025-07-14T22:21:33.385787436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hc5tw,Uid:e2dc328c-4306-4f4e-aab6-69ec3061d034,Namespace:kube-system,Attempt:0,}"
Jul 14 22:21:34.699136 containerd[1464]: time="2025-07-14T22:21:34.698270018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:21:34.699136 containerd[1464]: time="2025-07-14T22:21:34.699039748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:21:34.699136 containerd[1464]: time="2025-07-14T22:21:34.699057581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:34.699696 containerd[1464]: time="2025-07-14T22:21:34.699153060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:34.724404 systemd[1]: Started cri-containerd-dc17a5dc884f2bfa469a8889e1412cb393b8196d719b2e0afdb56400af1ba7b2.scope - libcontainer container dc17a5dc884f2bfa469a8889e1412cb393b8196d719b2e0afdb56400af1ba7b2.
Jul 14 22:21:34.746183 containerd[1464]: time="2025-07-14T22:21:34.746125319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hc5tw,Uid:e2dc328c-4306-4f4e-aab6-69ec3061d034,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc17a5dc884f2bfa469a8889e1412cb393b8196d719b2e0afdb56400af1ba7b2\""
Jul 14 22:21:34.746889 kubelet[2543]: E0714 22:21:34.746851 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:34.748423 containerd[1464]: time="2025-07-14T22:21:34.748385674Z" level=info msg="CreateContainer within sandbox \"dc17a5dc884f2bfa469a8889e1412cb393b8196d719b2e0afdb56400af1ba7b2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 14 22:21:34.917513 containerd[1464]: time="2025-07-14T22:21:34.917452283Z" level=info msg="CreateContainer within sandbox \"dc17a5dc884f2bfa469a8889e1412cb393b8196d719b2e0afdb56400af1ba7b2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fe4ba61357d0d3a3ae6fc0f4399e28938926de5a470e188cff7a0875712dddc8\""
Jul 14 22:21:34.918217 containerd[1464]: time="2025-07-14T22:21:34.918063163Z" level=info msg="StartContainer for \"fe4ba61357d0d3a3ae6fc0f4399e28938926de5a470e188cff7a0875712dddc8\""
Jul 14 22:21:34.945560 systemd[1]: Started cri-containerd-fe4ba61357d0d3a3ae6fc0f4399e28938926de5a470e188cff7a0875712dddc8.scope - libcontainer container fe4ba61357d0d3a3ae6fc0f4399e28938926de5a470e188cff7a0875712dddc8.
Jul 14 22:21:34.975072 containerd[1464]: time="2025-07-14T22:21:34.974947542Z" level=info msg="StartContainer for \"fe4ba61357d0d3a3ae6fc0f4399e28938926de5a470e188cff7a0875712dddc8\" returns successfully"
Jul 14 22:21:35.605861 kubelet[2543]: E0714 22:21:35.605815 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:35.616169 kubelet[2543]: I0714 22:21:35.616000 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hc5tw" podStartSLOduration=3.615981335 podStartE2EDuration="3.615981335s" podCreationTimestamp="2025-07-14 22:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:21:35.615834809 +0000 UTC m=+9.107855692" watchObservedRunningTime="2025-07-14 22:21:35.615981335 +0000 UTC m=+9.108002218"
Jul 14 22:21:35.912883 kubelet[2543]: E0714 22:21:35.912758 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:36.607351 kubelet[2543]: E0714 22:21:36.607312 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:37.580034 systemd[1]: Created slice kubepods-besteffort-pod39a05f2b_67dc_4f60_8c70_df44a5496554.slice - libcontainer container kubepods-besteffort-pod39a05f2b_67dc_4f60_8c70_df44a5496554.slice.
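The kube-proxy-hc5tw lines above show the CRI round trip behind every pod start in this log: kubelet calls RunPodSandbox, containerd answers with a sandbox id, then CreateContainer inside that sandbox, then StartContainer, while systemd tracks each shim as a cri-containerd-<id>.scope unit. A rough sketch of that call order against the CRI gRPC API; the socket path and the kube-proxy image tag are illustrative assumptions, not values from this log:

```go
// Sketch of the RunPodSandbox -> CreateContainer -> StartContainer sequence
// visible above, issued directly against a CRI runtime. Assumptions: the
// containerd socket path and the image reference.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox metadata mirrors the &PodSandboxMetadata{...} printed above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-hc5tw",
			Namespace: "kube-system",
			Uid:       "e2dc328c-4306-4f4e-aab6-69ec3061d034",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Assumed image reference; the log does not name the tag.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := rt.StartContainer(ctx,
		&runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started container", ctr.ContainerId)
}
```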
Jul 14 22:21:37.609141 kubelet[2543]: E0714 22:21:37.609092 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:37.644792 kubelet[2543]: I0714 22:21:37.644737 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39a05f2b-67dc-4f60-8c70-df44a5496554-var-lib-calico\") pod \"tigera-operator-747864d56d-qp5wm\" (UID: \"39a05f2b-67dc-4f60-8c70-df44a5496554\") " pod="tigera-operator/tigera-operator-747864d56d-qp5wm"
Jul 14 22:21:37.644792 kubelet[2543]: I0714 22:21:37.644785 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rlj6\" (UniqueName: \"kubernetes.io/projected/39a05f2b-67dc-4f60-8c70-df44a5496554-kube-api-access-4rlj6\") pod \"tigera-operator-747864d56d-qp5wm\" (UID: \"39a05f2b-67dc-4f60-8c70-df44a5496554\") " pod="tigera-operator/tigera-operator-747864d56d-qp5wm"
Jul 14 22:21:37.888020 containerd[1464]: time="2025-07-14T22:21:37.887905101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-qp5wm,Uid:39a05f2b-67dc-4f60-8c70-df44a5496554,Namespace:tigera-operator,Attempt:0,}"
Jul 14 22:21:37.912073 containerd[1464]: time="2025-07-14T22:21:37.911445449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:21:37.912073 containerd[1464]: time="2025-07-14T22:21:37.912055768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:21:37.912073 containerd[1464]: time="2025-07-14T22:21:37.912071006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:37.912248 containerd[1464]: time="2025-07-14T22:21:37.912147109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:37.936379 systemd[1]: Started cri-containerd-6e944f9b1afb7e30b0b15c95c61a2e59023b2888d61eb2404963cf16dcd1c4f1.scope - libcontainer container 6e944f9b1afb7e30b0b15c95c61a2e59023b2888d61eb2404963cf16dcd1c4f1.
Jul 14 22:21:37.971011 containerd[1464]: time="2025-07-14T22:21:37.970965297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-qp5wm,Uid:39a05f2b-67dc-4f60-8c70-df44a5496554,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6e944f9b1afb7e30b0b15c95c61a2e59023b2888d61eb2404963cf16dcd1c4f1\""
Jul 14 22:21:37.972548 containerd[1464]: time="2025-07-14T22:21:37.972514502Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 14 22:21:38.054780 kubelet[2543]: E0714 22:21:38.054646 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:38.611762 kubelet[2543]: E0714 22:21:38.611715 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:39.107030 kubelet[2543]: E0714 22:21:39.106986 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:39.718261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount840814760.mount: Deactivated successfully.
Jul 14 22:21:40.037802 containerd[1464]: time="2025-07-14T22:21:40.037747230Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:21:40.038710 containerd[1464]: time="2025-07-14T22:21:40.038648825Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 14 22:21:40.040022 containerd[1464]: time="2025-07-14T22:21:40.039990057Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:21:40.042157 containerd[1464]: time="2025-07-14T22:21:40.042121094Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:21:40.042799 containerd[1464]: time="2025-07-14T22:21:40.042751169Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.070201941s"
Jul 14 22:21:40.042799 containerd[1464]: time="2025-07-14T22:21:40.042789220Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 14 22:21:40.044368 containerd[1464]: time="2025-07-14T22:21:40.044333204Z" level=info msg="CreateContainer within sandbox \"6e944f9b1afb7e30b0b15c95c61a2e59023b2888d61eb2404963cf16dcd1c4f1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 14 22:21:40.057822 containerd[1464]: time="2025-07-14T22:21:40.057776131Z" level=info msg="CreateContainer within sandbox \"6e944f9b1afb7e30b0b15c95c61a2e59023b2888d61eb2404963cf16dcd1c4f1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f5faa951b5934e99de6294fc94455c482cced457264ccd44a213a58ddcfdfe21\""
Jul 14 22:21:40.058925 containerd[1464]: time="2025-07-14T22:21:40.058393462Z" level=info msg="StartContainer for \"f5faa951b5934e99de6294fc94455c482cced457264ccd44a213a58ddcfdfe21\""
Jul 14 22:21:40.088355 systemd[1]: Started cri-containerd-f5faa951b5934e99de6294fc94455c482cced457264ccd44a213a58ddcfdfe21.scope - libcontainer container f5faa951b5934e99de6294fc94455c482cced457264ccd44a213a58ddcfdfe21.
Jul 14 22:21:40.118639 containerd[1464]: time="2025-07-14T22:21:40.118593460Z" level=info msg="StartContainer for \"f5faa951b5934e99de6294fc94455c482cced457264ccd44a213a58ddcfdfe21\" returns successfully"
Jul 14 22:21:45.217674 sudo[1636]: pam_unix(sudo:session): session closed for user root
Jul 14 22:21:45.222651 sshd[1633]: pam_unix(sshd:session): session closed for user core
Jul 14 22:21:45.227002 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:58326.service: Deactivated successfully.
Jul 14 22:21:45.229598 systemd[1]: session-7.scope: Deactivated successfully.
Jul 14 22:21:45.229783 systemd[1]: session-7.scope: Consumed 4.983s CPU time, 155.1M memory peak, 0B memory swap peak.
Jul 14 22:21:45.230326 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit.
Jul 14 22:21:45.232376 systemd-logind[1440]: Removed session 7.
Jul 14 22:21:47.633344 kubelet[2543]: I0714 22:21:47.633259 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-qp5wm" podStartSLOduration=8.561983872 podStartE2EDuration="10.633240017s" podCreationTimestamp="2025-07-14 22:21:37 +0000 UTC" firstStartedPulling="2025-07-14 22:21:37.972117795 +0000 UTC m=+11.464138668" lastFinishedPulling="2025-07-14 22:21:40.04337394 +0000 UTC m=+13.535394813" observedRunningTime="2025-07-14 22:21:40.626788191 +0000 UTC m=+14.118809064" watchObservedRunningTime="2025-07-14 22:21:47.633240017 +0000 UTC m=+21.125260910"
Jul 14 22:21:47.646714 systemd[1]: Created slice kubepods-besteffort-pod933872f8_46b7_49e3_aa11_5d700265b737.slice - libcontainer container kubepods-besteffort-pod933872f8_46b7_49e3_aa11_5d700265b737.slice.
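The tigera-operator startup record above is the first in this log where podStartSLOduration and podStartE2EDuration differ: unlike the static pods and kube-proxy, whose pull timestamps are the zero value, this pod spent real time pulling its image, and the SLO duration excludes that pull window. The logged numbers are self-consistent under that reading, as this small check (timestamps copied from the record above) shows:

```go
// Sanity-check of the pod_startup_latency_tracker entry above, assuming
// (consistent with the logged values) that:
//   e2e  = watchObservedRunningTime - podCreationTimestamp
//   pull = lastFinishedPulling - firstStartedPulling
//   slo  = e2e - pull
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-07-14 22:21:37 +0000 UTC")
	firstPull := parse("2025-07-14 22:21:37.972117795 +0000 UTC")
	lastPull := parse("2025-07-14 22:21:40.04337394 +0000 UTC")
	watched := parse("2025-07-14 22:21:47.633240017 +0000 UTC")

	e2e := watched.Sub(created)     // 10.633240017s == podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 2.071256145s pulling the operator image
	fmt.Println("e2e:", e2e)
	fmt.Println("slo:", e2e-pull)   // 8.561983872s == podStartSLOduration
}
```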
Jul 14 22:21:47.709749 kubelet[2543]: I0714 22:21:47.709696 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8g9t\" (UniqueName: \"kubernetes.io/projected/933872f8-46b7-49e3-aa11-5d700265b737-kube-api-access-w8g9t\") pod \"calico-typha-76c74f44c5-9qjr9\" (UID: \"933872f8-46b7-49e3-aa11-5d700265b737\") " pod="calico-system/calico-typha-76c74f44c5-9qjr9"
Jul 14 22:21:47.709749 kubelet[2543]: I0714 22:21:47.709739 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/933872f8-46b7-49e3-aa11-5d700265b737-typha-certs\") pod \"calico-typha-76c74f44c5-9qjr9\" (UID: \"933872f8-46b7-49e3-aa11-5d700265b737\") " pod="calico-system/calico-typha-76c74f44c5-9qjr9"
Jul 14 22:21:47.709749 kubelet[2543]: I0714 22:21:47.709757 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/933872f8-46b7-49e3-aa11-5d700265b737-tigera-ca-bundle\") pod \"calico-typha-76c74f44c5-9qjr9\" (UID: \"933872f8-46b7-49e3-aa11-5d700265b737\") " pod="calico-system/calico-typha-76c74f44c5-9qjr9"
Jul 14 22:21:47.952067 kubelet[2543]: E0714 22:21:47.951935 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:47.952825 containerd[1464]: time="2025-07-14T22:21:47.952778836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76c74f44c5-9qjr9,Uid:933872f8-46b7-49e3-aa11-5d700265b737,Namespace:calico-system,Attempt:0,}"
Jul 14 22:21:48.221464 kubelet[2543]: I0714 22:21:48.221347 2543 status_manager.go:890] "Failed to get status for pod" podUID="669c63e1-ec00-4b7a-9d32-af77fa381e90" pod="calico-system/calico-node-z6r7n" err="pods \"calico-node-z6r7n\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object"
Jul 14 22:21:48.232139 systemd[1]: Created slice kubepods-besteffort-pod669c63e1_ec00_4b7a_9d32_af77fa381e90.slice - libcontainer container kubepods-besteffort-pod669c63e1_ec00_4b7a_9d32_af77fa381e90.slice.
Jul 14 22:21:48.234042 containerd[1464]: time="2025-07-14T22:21:48.233915648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:21:48.234121 containerd[1464]: time="2025-07-14T22:21:48.234068705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:21:48.234143 containerd[1464]: time="2025-07-14T22:21:48.234096527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:48.234367 containerd[1464]: time="2025-07-14T22:21:48.234283469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:48.257388 systemd[1]: Started cri-containerd-0c3834578b33d911fc4ffd70a62d5940349bd9fc72362973c74d22dfaeb48d91.scope - libcontainer container 0c3834578b33d911fc4ffd70a62d5940349bd9fc72362973c74d22dfaeb48d91.
Jul 14 22:21:48.313140 kubelet[2543]: I0714 22:21:48.312841 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/669c63e1-ec00-4b7a-9d32-af77fa381e90-cni-bin-dir\") pod \"calico-node-z6r7n\" (UID: \"669c63e1-ec00-4b7a-9d32-af77fa381e90\") " pod="calico-system/calico-node-z6r7n"
Jul 14 22:21:48.313140 kubelet[2543]: I0714 22:21:48.312882 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/669c63e1-ec00-4b7a-9d32-af77fa381e90-node-certs\") pod \"calico-node-z6r7n\" (UID: \"669c63e1-ec00-4b7a-9d32-af77fa381e90\") " pod="calico-system/calico-node-z6r7n"
Jul 14 22:21:48.313140 kubelet[2543]: I0714 22:21:48.312901 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/669c63e1-ec00-4b7a-9d32-af77fa381e90-var-lib-calico\") pod \"calico-node-z6r7n\" (UID: \"669c63e1-ec00-4b7a-9d32-af77fa381e90\") " pod="calico-system/calico-node-z6r7n"
Jul 14 22:21:48.313140 kubelet[2543]: I0714 22:21:48.312915 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/669c63e1-ec00-4b7a-9d32-af77fa381e90-lib-modules\") pod \"calico-node-z6r7n\" (UID: \"669c63e1-ec00-4b7a-9d32-af77fa381e90\") " pod="calico-system/calico-node-z6r7n"
Jul 14 22:21:48.313140 kubelet[2543]: I0714 22:21:48.312930 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/669c63e1-ec00-4b7a-9d32-af77fa381e90-xtables-lock\") pod \"calico-node-z6r7n\" (UID: \"669c63e1-ec00-4b7a-9d32-af77fa381e90\") " pod="calico-system/calico-node-z6r7n"
Jul 14 22:21:48.313443 kubelet[2543]: I0714 22:21:48.312944 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/669c63e1-ec00-4b7a-9d32-af77fa381e90-cni-net-dir\") pod \"calico-node-z6r7n\" (UID: \"669c63e1-ec00-4b7a-9d32-af77fa381e90\") " pod="calico-system/calico-node-z6r7n"
Jul 14 22:21:48.313443 kubelet[2543]: I0714 22:21:48.312957 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/669c63e1-ec00-4b7a-9d32-af77fa381e90-tigera-ca-bundle\") pod \"calico-node-z6r7n\" (UID: \"669c63e1-ec00-4b7a-9d32-af77fa381e90\") " pod="calico-system/calico-node-z6r7n"
Jul 14 22:21:48.313443 kubelet[2543]: I0714 22:21:48.312969 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/669c63e1-ec00-4b7a-9d32-af77fa381e90-var-run-calico\") pod \"calico-node-z6r7n\" (UID: \"669c63e1-ec00-4b7a-9d32-af77fa381e90\") " pod="calico-system/calico-node-z6r7n"
Jul 14 22:21:48.313443 kubelet[2543]: I0714 22:21:48.312986 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/669c63e1-ec00-4b7a-9d32-af77fa381e90-flexvol-driver-host\") pod \"calico-node-z6r7n\" (UID: \"669c63e1-ec00-4b7a-9d32-af77fa381e90\") " pod="calico-system/calico-node-z6r7n"
Jul 14 22:21:48.313443 kubelet[2543]: I0714 22:21:48.313003 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/669c63e1-ec00-4b7a-9d32-af77fa381e90-cni-log-dir\") pod \"calico-node-z6r7n\" (UID: \"669c63e1-ec00-4b7a-9d32-af77fa381e90\") " pod="calico-system/calico-node-z6r7n"
Jul 14 22:21:48.313557 kubelet[2543]: I0714 22:21:48.313020 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/669c63e1-ec00-4b7a-9d32-af77fa381e90-policysync\") pod \"calico-node-z6r7n\" (UID: \"669c63e1-ec00-4b7a-9d32-af77fa381e90\") " pod="calico-system/calico-node-z6r7n"
Jul 14 22:21:48.313557 kubelet[2543]: I0714 22:21:48.313034 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4s7m\" (UniqueName: \"kubernetes.io/projected/669c63e1-ec00-4b7a-9d32-af77fa381e90-kube-api-access-n4s7m\") pod \"calico-node-z6r7n\" (UID: \"669c63e1-ec00-4b7a-9d32-af77fa381e90\") " pod="calico-system/calico-node-z6r7n"
Jul 14 22:21:48.332944 kubelet[2543]: E0714 22:21:48.332665 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xmfp" podUID="a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7"
Jul 14 22:21:48.335483 containerd[1464]: time="2025-07-14T22:21:48.335432977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76c74f44c5-9qjr9,Uid:933872f8-46b7-49e3-aa11-5d700265b737,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c3834578b33d911fc4ffd70a62d5940349bd9fc72362973c74d22dfaeb48d91\""
Jul 14 22:21:48.337039 kubelet[2543]: E0714 22:21:48.336855 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:21:48.337726 containerd[1464]: time="2025-07-14T22:21:48.337701999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 14 22:21:48.413656 kubelet[2543]: I0714 22:21:48.413612 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7-registration-dir\") pod \"csi-node-driver-7xmfp\" (UID: \"a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7\") " pod="calico-system/csi-node-driver-7xmfp"
Jul 14 22:21:48.413798 kubelet[2543]: I0714 22:21:48.413723 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpskv\" (UniqueName: \"kubernetes.io/projected/a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7-kube-api-access-gpskv\") pod \"csi-node-driver-7xmfp\" (UID: \"a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7\") " pod="calico-system/csi-node-driver-7xmfp"
Jul 14 22:21:48.413798 kubelet[2543]: I0714 22:21:48.413769 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7-socket-dir\") pod \"csi-node-driver-7xmfp\" (UID: \"a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7\") " pod="calico-system/csi-node-driver-7xmfp"
Jul 14 22:21:48.413798 kubelet[2543]: I0714 22:21:48.413784 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7-varrun\") pod \"csi-node-driver-7xmfp\" (UID: \"a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7\") " pod="calico-system/csi-node-driver-7xmfp"
Jul 14 22:21:48.413885 kubelet[2543]: I0714 22:21:48.413814 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7-kubelet-dir\") pod \"csi-node-driver-7xmfp\" (UID: \"a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7\") " pod="calico-system/csi-node-driver-7xmfp"
Jul 14 22:21:48.414779 kubelet[2543]: E0714 22:21:48.414739 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.414989 kubelet[2543]: W0714 22:21:48.414827 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.414989 kubelet[2543]: E0714 22:21:48.414892 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.415195 kubelet[2543]: E0714 22:21:48.415170 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.415195 kubelet[2543]: W0714 22:21:48.415182 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.415195 kubelet[2543]: E0714 22:21:48.415198 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.415428 kubelet[2543]: E0714 22:21:48.415404 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.415428 kubelet[2543]: W0714 22:21:48.415411 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.415428 kubelet[2543]: E0714 22:21:48.415419 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.415764 kubelet[2543]: E0714 22:21:48.415748 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.415764 kubelet[2543]: W0714 22:21:48.415759 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.415828 kubelet[2543]: E0714 22:21:48.415772 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.416516 kubelet[2543]: E0714 22:21:48.416030 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.416516 kubelet[2543]: W0714 22:21:48.416044 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.416516 kubelet[2543]: E0714 22:21:48.416152 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.416516 kubelet[2543]: E0714 22:21:48.416331 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.416516 kubelet[2543]: W0714 22:21:48.416339 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.416516 kubelet[2543]: E0714 22:21:48.416401 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.416665 kubelet[2543]: E0714 22:21:48.416573 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.416665 kubelet[2543]: W0714 22:21:48.416585 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.416665 kubelet[2543]: E0714 22:21:48.416620 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.416844 kubelet[2543]: E0714 22:21:48.416826 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.416844 kubelet[2543]: W0714 22:21:48.416839 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.416964 kubelet[2543]: E0714 22:21:48.416945 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.417416 kubelet[2543]: E0714 22:21:48.417401 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.417416 kubelet[2543]: W0714 22:21:48.417412 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.417491 kubelet[2543]: E0714 22:21:48.417435 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.417654 kubelet[2543]: E0714 22:21:48.417632 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.417654 kubelet[2543]: W0714 22:21:48.417647 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.417785 kubelet[2543]: E0714 22:21:48.417672 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.417853 kubelet[2543]: E0714 22:21:48.417838 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.417853 kubelet[2543]: W0714 22:21:48.417848 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.417916 kubelet[2543]: E0714 22:21:48.417862 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.418074 kubelet[2543]: E0714 22:21:48.418056 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.418074 kubelet[2543]: W0714 22:21:48.418068 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.418145 kubelet[2543]: E0714 22:21:48.418083 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.418328 kubelet[2543]: E0714 22:21:48.418296 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.418328 kubelet[2543]: W0714 22:21:48.418320 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.418396 kubelet[2543]: E0714 22:21:48.418330 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.418603 kubelet[2543]: E0714 22:21:48.418590 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.418603 kubelet[2543]: W0714 22:21:48.418600 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.418658 kubelet[2543]: E0714 22:21:48.418636 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.418798 kubelet[2543]: E0714 22:21:48.418777 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.418798 kubelet[2543]: W0714 22:21:48.418790 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.418869 kubelet[2543]: E0714 22:21:48.418819 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.418995 kubelet[2543]: E0714 22:21:48.418980 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.418995 kubelet[2543]: W0714 22:21:48.418991 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.419102 kubelet[2543]: E0714 22:21:48.419000 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.419208 kubelet[2543]: E0714 22:21:48.419192 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.419208 kubelet[2543]: W0714 22:21:48.419205 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.419286 kubelet[2543]: E0714 22:21:48.419215 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.421839 kubelet[2543]: E0714 22:21:48.421817 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.421839 kubelet[2543]: W0714 22:21:48.421830 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.421839 kubelet[2543]: E0714 22:21:48.421839 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.515568 kubelet[2543]: E0714 22:21:48.515375 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.515568 kubelet[2543]: W0714 22:21:48.515404 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.515568 kubelet[2543]: E0714 22:21:48.515454 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.516079 kubelet[2543]: E0714 22:21:48.515883 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.516079 kubelet[2543]: W0714 22:21:48.515902 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.516079 kubelet[2543]: E0714 22:21:48.515952 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.516540 kubelet[2543]: E0714 22:21:48.516522 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.516540 kubelet[2543]: W0714 22:21:48.516537 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.516633 kubelet[2543]: E0714 22:21:48.516554 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.516815 kubelet[2543]: E0714 22:21:48.516796 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.516815 kubelet[2543]: W0714 22:21:48.516812 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.516996 kubelet[2543]: E0714 22:21:48.516902 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.517346 kubelet[2543]: E0714 22:21:48.517080 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.517346 kubelet[2543]: W0714 22:21:48.517093 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.517346 kubelet[2543]: E0714 22:21:48.517271 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.518006 kubelet[2543]: E0714 22:21:48.517768 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.518006 kubelet[2543]: W0714 22:21:48.517786 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.518006 kubelet[2543]: E0714 22:21:48.517805 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.518101 kubelet[2543]: E0714 22:21:48.518075 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.518101 kubelet[2543]: W0714 22:21:48.518086 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.518147 kubelet[2543]: E0714 22:21:48.518120 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.518386 kubelet[2543]: E0714 22:21:48.518369 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.518386 kubelet[2543]: W0714 22:21:48.518383 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.518461 kubelet[2543]: E0714 22:21:48.518424 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.518640 kubelet[2543]: E0714 22:21:48.518622 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.518640 kubelet[2543]: W0714 22:21:48.518636 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.518721 kubelet[2543]: E0714 22:21:48.518671 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.518907 kubelet[2543]: E0714 22:21:48.518895 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.518907 kubelet[2543]: W0714 22:21:48.518908 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.519058 kubelet[2543]: E0714 22:21:48.519032 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.519548 kubelet[2543]: E0714 22:21:48.519290 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.519548 kubelet[2543]: W0714 22:21:48.519300 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.519548 kubelet[2543]: E0714 22:21:48.519352 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.519821 kubelet[2543]: E0714 22:21:48.519790 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.519821 kubelet[2543]: W0714 22:21:48.519805 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.520019 kubelet[2543]: E0714 22:21:48.519981 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.520400 kubelet[2543]: E0714 22:21:48.520304 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.520400 kubelet[2543]: W0714 22:21:48.520320 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.520455 kubelet[2543]: E0714 22:21:48.520425 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.520971 kubelet[2543]: E0714 22:21:48.520754 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.520971 kubelet[2543]: W0714 22:21:48.520772 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.520971 kubelet[2543]: E0714 22:21:48.520826 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.521203 kubelet[2543]: E0714 22:21:48.521187 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.521203 kubelet[2543]: W0714 22:21:48.521199 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.521298 kubelet[2543]: E0714 22:21:48.521281 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.521453 kubelet[2543]: E0714 22:21:48.521435 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.521453 kubelet[2543]: W0714 22:21:48.521449 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.521973 kubelet[2543]: E0714 22:21:48.521945 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.522185 kubelet[2543]: E0714 22:21:48.522169 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.522185 kubelet[2543]: W0714 22:21:48.522181 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.522458 kubelet[2543]: E0714 22:21:48.522410 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.522543 kubelet[2543]: E0714 22:21:48.522523 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.522543 kubelet[2543]: W0714 22:21:48.522538 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.522705 kubelet[2543]: E0714 22:21:48.522596 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.522787 kubelet[2543]: E0714 22:21:48.522731 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.522787 kubelet[2543]: W0714 22:21:48.522742 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.523046 kubelet[2543]: E0714 22:21:48.522827 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.523046 kubelet[2543]: E0714 22:21:48.522961 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.523046 kubelet[2543]: W0714 22:21:48.522968 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.523046 kubelet[2543]: E0714 22:21:48.523002 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.523172 kubelet[2543]: E0714 22:21:48.523153 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.523172 kubelet[2543]: W0714 22:21:48.523161 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.523305 kubelet[2543]: E0714 22:21:48.523267 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.523672 kubelet[2543]: E0714 22:21:48.523642 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.523672 kubelet[2543]: W0714 22:21:48.523658 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.523672 kubelet[2543]: E0714 22:21:48.523673 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.524008 kubelet[2543]: E0714 22:21:48.523927 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.524008 kubelet[2543]: W0714 22:21:48.523939 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.524008 kubelet[2543]: E0714 22:21:48.523951 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.524384 kubelet[2543]: E0714 22:21:48.524287 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.524384 kubelet[2543]: W0714 22:21:48.524302 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.524384 kubelet[2543]: E0714 22:21:48.524316 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.524557 kubelet[2543]: E0714 22:21:48.524539 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.524557 kubelet[2543]: W0714 22:21:48.524550 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.524557 kubelet[2543]: E0714 22:21:48.524558 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.533613 kubelet[2543]: E0714 22:21:48.533582 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:21:48.533613 kubelet[2543]: W0714 22:21:48.533598 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:21:48.533613 kubelet[2543]: E0714 22:21:48.533609 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:21:48.537154 containerd[1464]: time="2025-07-14T22:21:48.537119822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z6r7n,Uid:669c63e1-ec00-4b7a-9d32-af77fa381e90,Namespace:calico-system,Attempt:0,}"
Jul 14 22:21:48.563576 containerd[1464]: time="2025-07-14T22:21:48.563368752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:21:48.563576 containerd[1464]: time="2025-07-14T22:21:48.563473448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:21:48.563576 containerd[1464]: time="2025-07-14T22:21:48.563573275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:48.565033 containerd[1464]: time="2025-07-14T22:21:48.564939121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:21:48.585404 systemd[1]: Started cri-containerd-9ae2b239c011d31933eabfe231b9d51958d6853392fdeb2f9a52ce9bd6816b14.scope - libcontainer container 9ae2b239c011d31933eabfe231b9d51958d6853392fdeb2f9a52ce9bd6816b14.
Jul 14 22:21:48.610783 containerd[1464]: time="2025-07-14T22:21:48.610743564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z6r7n,Uid:669c63e1-ec00-4b7a-9d32-af77fa381e90,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ae2b239c011d31933eabfe231b9d51958d6853392fdeb2f9a52ce9bd6816b14\""
Jul 14 22:21:49.583453 kubelet[2543]: E0714 22:21:49.583414 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xmfp" podUID="a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7"
Jul 14 22:21:49.772043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2498077094.mount: Deactivated successfully.
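The wall of driver-call.go / plugins.go triplets above is one failure repeated per probe: kubelet rescans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ on volume events, finds a nodeagent~uds driver directory whose uds executable is missing, gets empty stdout where it expects a JSON status, and logs "unexpected end of JSON input" each time. A FlexVolume driver silences this by answering the init call with a JSON status on stdout; a minimal sketch of such a driver follows (the capability value is an assumption for a driver that needs no attach/detach support):

```go
// Minimal FlexVolume driver sketch: kubelet invokes the driver binary with
// "init" at probe time and parses stdout as JSON. Emitting this status is
// exactly what the missing nodeagent~uds/uds executable never got to do.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	cmd := ""
	if len(os.Args) > 1 {
		cmd = os.Args[1]
	}
	if cmd == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // assumed: no attach/detach
		})
		fmt.Println(string(out))
		return
	}
	// Unimplemented calls report "Not supported" per the FlexVolume convention.
	out, _ := json.Marshal(driverStatus{Status: "Not supported", Message: "call: " + cmd})
	fmt.Println(string(out))
	os.Exit(1)
}
```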
Jul 14 22:21:50.080977 containerd[1464]: time="2025-07-14T22:21:50.080919282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:50.081779 containerd[1464]: time="2025-07-14T22:21:50.081737519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 14 22:21:50.083048 containerd[1464]: time="2025-07-14T22:21:50.083012694Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:50.087936 containerd[1464]: time="2025-07-14T22:21:50.087881969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:50.088752 containerd[1464]: time="2025-07-14T22:21:50.088660852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.750763386s" Jul 14 22:21:50.088752 containerd[1464]: time="2025-07-14T22:21:50.088719683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 14 22:21:50.095698 containerd[1464]: time="2025-07-14T22:21:50.095645611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 14 22:21:50.115744 containerd[1464]: time="2025-07-14T22:21:50.115696680Z" level=info msg="CreateContainer within sandbox \"0c3834578b33d911fc4ffd70a62d5940349bd9fc72362973c74d22dfaeb48d91\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 14 22:21:50.131169 containerd[1464]: time="2025-07-14T22:21:50.131129705Z" level=info msg="CreateContainer within sandbox \"0c3834578b33d911fc4ffd70a62d5940349bd9fc72362973c74d22dfaeb48d91\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b0e898f9cdaf4e9e7a8fb2521faa752fb144ed3235c223a17bd697793ab0f978\"" Jul 14 22:21:50.134398 containerd[1464]: time="2025-07-14T22:21:50.134348832Z" level=info msg="StartContainer for \"b0e898f9cdaf4e9e7a8fb2521faa752fb144ed3235c223a17bd697793ab0f978\"" Jul 14 22:21:50.163389 systemd[1]: Started cri-containerd-b0e898f9cdaf4e9e7a8fb2521faa752fb144ed3235c223a17bd697793ab0f978.scope - libcontainer container b0e898f9cdaf4e9e7a8fb2521faa752fb144ed3235c223a17bd697793ab0f978. 
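The typha records above trace the CRI call order kubelet drives for every pod: RunPodSandbox, PullImage, CreateContainer within the sandbox, then StartContainer, with containerd placing each container in a systemd scope named cri-containerd-<id>.scope. A schematic Go sketch against a hypothetical, trimmed-down interface (the real CRI is a gRPC API with richer request types; these names are illustrative only):

```go
package podlifecycle

import "fmt"

// Runtime is a hypothetical stand-in for the CRI verbs visible in the
// log; it is not the real gRPC interface.
type Runtime interface {
	RunPodSandbox(name, namespace, uid string) (sandboxID string, err error)
	PullImage(ref string) (imageID string, err error)
	CreateContainer(sandboxID, name, imageID string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startPod mirrors the sequence logged for calico-typha above.
func startPod(rt Runtime, name, ns, uid, image string) error {
	sb, err := rt.RunPodSandbox(name, ns, uid) // "RunPodSandbox ... returns sandbox id"
	if err != nil {
		return err
	}
	img, err := rt.PullImage(image) // "Pulled image ... returns image reference"
	if err != nil {
		return err
	}
	ctr, err := rt.CreateContainer(sb, name, img) // "CreateContainer within sandbox ..."
	if err != nil {
		return err
	}
	// containerd runs the container under a transient systemd scope:
	fmt.Printf("cri-containerd-%s.scope\n", ctr)
	return rt.StartContainer(ctr) // "StartContainer ... returns successfully"
}
```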
Jul 14 22:21:50.205166 containerd[1464]: time="2025-07-14T22:21:50.205119770Z" level=info msg="StartContainer for \"b0e898f9cdaf4e9e7a8fb2521faa752fb144ed3235c223a17bd697793ab0f978\" returns successfully" Jul 14 22:21:50.649294 kubelet[2543]: E0714 22:21:50.649261 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:50.677991 kubelet[2543]: I0714 22:21:50.677239 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76c74f44c5-9qjr9" podStartSLOduration=1.9190807460000001 podStartE2EDuration="3.677165424s" podCreationTimestamp="2025-07-14 22:21:47 +0000 UTC" firstStartedPulling="2025-07-14 22:21:48.337400943 +0000 UTC m=+21.829421816" lastFinishedPulling="2025-07-14 22:21:50.095485621 +0000 UTC m=+23.587506494" observedRunningTime="2025-07-14 22:21:50.673104096 +0000 UTC m=+24.165124969" watchObservedRunningTime="2025-07-14 22:21:50.677165424 +0000 UTC m=+24.169186297" Jul 14 22:21:50.720710 kubelet[2543]: E0714 22:21:50.720655 2543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:21:50.720710 kubelet[2543]: W0714 22:21:50.720692 2543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:21:50.721261 kubelet[2543]: E0714 22:21:50.721239 2543 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
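The pod_startup_latency_tracker record above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (22:21:50.677165424 minus 22:21:47 = 3.677165424s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling = 1.758084678s), leaving 1.919080746s. The arithmetic, reproduced with the standard library:

```go
package main

import (
	"fmt"
	"time"
)

// mustParse parses timestamps in the format kubelet logs above.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-14 22:21:47 +0000 UTC")
	firstPull := mustParse("2025-07-14 22:21:48.337400943 +0000 UTC")
	lastPull := mustParse("2025-07-14 22:21:50.095485621 +0000 UTC")
	observed := mustParse("2025-07-14 22:21:50.677165424 +0000 UTC")

	e2e := observed.Sub(created)         // 3.677165424s (podStartE2EDuration)
	slo := e2e - lastPull.Sub(firstPull) // minus the image-pull window
	fmt.Println(e2e, slo)                // 3.677165424s 1.919080746s
}
```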
Jul 14 22:21:51.446696 containerd[1464]: time="2025-07-14T22:21:51.446647882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:51.447617 containerd[1464]: time="2025-07-14T22:21:51.447504571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 14 22:21:51.448796 containerd[1464]: time="2025-07-14T22:21:51.448742726Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:51.451120 containerd[1464]: time="2025-07-14T22:21:51.451075837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:51.451889 containerd[1464]: time="2025-07-14T22:21:51.451850492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.356158725s" Jul 14 22:21:51.451958 containerd[1464]: time="2025-07-14T22:21:51.451893483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 14 22:21:51.456976 containerd[1464]: time="2025-07-14T22:21:51.456944740Z" level=info msg="CreateContainer within sandbox \"9ae2b239c011d31933eabfe231b9d51958d6853392fdeb2f9a52ce9bd6816b14\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 14 22:21:51.471244 containerd[1464]: time="2025-07-14T22:21:51.471179762Z" level=info msg="CreateContainer within sandbox \"9ae2b239c011d31933eabfe231b9d51958d6853392fdeb2f9a52ce9bd6816b14\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"32508059c02ff8c78f2878b783811908d81d48b98aee0ac60b988ffc41610f6d\"" Jul 14 22:21:51.471647 containerd[1464]: time="2025-07-14T22:21:51.471618507Z" level=info msg="StartContainer for \"32508059c02ff8c78f2878b783811908d81d48b98aee0ac60b988ffc41610f6d\"" Jul 14 22:21:51.510435 systemd[1]: Started cri-containerd-32508059c02ff8c78f2878b783811908d81d48b98aee0ac60b988ffc41610f6d.scope - libcontainer container 32508059c02ff8c78f2878b783811908d81d48b98aee0ac60b988ffc41610f6d. Jul 14 22:21:51.549933 systemd[1]: cri-containerd-32508059c02ff8c78f2878b783811908d81d48b98aee0ac60b988ffc41610f6d.scope: Deactivated successfully. 
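The flexvol-driver container created above is a run-to-completion init container: its scope starts at 22:21:51.510 and deactivates at 22:21:51.549, roughly 40 ms later. Its job is to install the very binary whose absence caused the FlexVolume probe errors earlier, which is why that error flood stops after this point. A sketch of the check kubelet's prober effectively performs, with the path taken verbatim from the log:

```go
package main

import (
	"fmt"
	"os"
)

// Path kubelet probed in the errors above; the flexvol-driver init
// container is what places the binary there.
const udsDriver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

func main() {
	if _, err := os.Stat(udsDriver); err != nil {
		fmt.Println("driver missing, probe will fail:", err)
		return
	}
	fmt.Println("driver installed, probe can exec", udsDriver)
}
```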
Jul 14 22:21:51.568977 containerd[1464]: time="2025-07-14T22:21:51.568899560Z" level=info msg="StartContainer for \"32508059c02ff8c78f2878b783811908d81d48b98aee0ac60b988ffc41610f6d\" returns successfully" Jul 14 22:21:51.583834 kubelet[2543]: E0714 22:21:51.583693 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xmfp" podUID="a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7" Jul 14 22:21:51.648428 kubelet[2543]: I0714 22:21:51.648382 2543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:21:51.874197 kubelet[2543]: E0714 22:21:51.648910 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:52.034431 containerd[1464]: time="2025-07-14T22:21:52.031828918Z" level=info msg="shim disconnected" id=32508059c02ff8c78f2878b783811908d81d48b98aee0ac60b988ffc41610f6d namespace=k8s.io Jul 14 22:21:52.034431 containerd[1464]: time="2025-07-14T22:21:52.034423671Z" level=warning msg="cleaning up after shim disconnected" id=32508059c02ff8c78f2878b783811908d81d48b98aee0ac60b988ffc41610f6d namespace=k8s.io Jul 14 22:21:52.034431 containerd[1464]: time="2025-07-14T22:21:52.034441013Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:21:52.106780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32508059c02ff8c78f2878b783811908d81d48b98aee0ac60b988ffc41610f6d-rootfs.mount: Deactivated successfully. Jul 14 22:21:52.652513 containerd[1464]: time="2025-07-14T22:21:52.652384623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 14 22:21:53.583523 kubelet[2543]: E0714 22:21:53.583479 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xmfp" podUID="a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7" Jul 14 22:21:55.583289 kubelet[2543]: E0714 22:21:55.583245 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xmfp" podUID="a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7" Jul 14 22:21:55.779169 containerd[1464]: time="2025-07-14T22:21:55.779120830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:55.779975 containerd[1464]: time="2025-07-14T22:21:55.779936651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 14 22:21:55.781062 containerd[1464]: time="2025-07-14T22:21:55.781027550Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:21:55.783178 containerd[1464]: time="2025-07-14T22:21:55.783154964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 14 22:21:55.783864 containerd[1464]: time="2025-07-14T22:21:55.783824862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.131401496s" Jul 14 22:21:55.783902 containerd[1464]: time="2025-07-14T22:21:55.783864907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 14 22:21:55.786151 containerd[1464]: time="2025-07-14T22:21:55.786125010Z" level=info msg="CreateContainer within sandbox \"9ae2b239c011d31933eabfe231b9d51958d6853392fdeb2f9a52ce9bd6816b14\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 14 22:21:55.802255 containerd[1464]: time="2025-07-14T22:21:55.802190453Z" level=info msg="CreateContainer within sandbox \"9ae2b239c011d31933eabfe231b9d51958d6853392fdeb2f9a52ce9bd6816b14\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e4b8e4b27660ee78e4546123efa02fd627b93a77776ce7aecb7a856b8521cba9\"" Jul 14 22:21:55.802900 containerd[1464]: time="2025-07-14T22:21:55.802734755Z" level=info msg="StartContainer for \"e4b8e4b27660ee78e4546123efa02fd627b93a77776ce7aecb7a856b8521cba9\"" Jul 14 22:21:55.834448 systemd[1]: Started cri-containerd-e4b8e4b27660ee78e4546123efa02fd627b93a77776ce7aecb7a856b8521cba9.scope - libcontainer container e4b8e4b27660ee78e4546123efa02fd627b93a77776ce7aecb7a856b8521cba9. Jul 14 22:21:55.866833 containerd[1464]: time="2025-07-14T22:21:55.866759740Z" level=info msg="StartContainer for \"e4b8e4b27660ee78e4546123efa02fd627b93a77776ce7aecb7a856b8521cba9\" returns successfully" Jul 14 22:21:56.917589 systemd[1]: cri-containerd-e4b8e4b27660ee78e4546123efa02fd627b93a77776ce7aecb7a856b8521cba9.scope: Deactivated successfully. Jul 14 22:21:56.939429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4b8e4b27660ee78e4546123efa02fd627b93a77776ce7aecb7a856b8521cba9-rootfs.mount: Deactivated successfully. Jul 14 22:21:56.994372 kubelet[2543]: I0714 22:21:56.994341 2543 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 14 22:21:57.210136 containerd[1464]: time="2025-07-14T22:21:57.209976489Z" level=info msg="shim disconnected" id=e4b8e4b27660ee78e4546123efa02fd627b93a77776ce7aecb7a856b8521cba9 namespace=k8s.io Jul 14 22:21:57.210136 containerd[1464]: time="2025-07-14T22:21:57.210054876Z" level=warning msg="cleaning up after shim disconnected" id=e4b8e4b27660ee78e4546123efa02fd627b93a77776ce7aecb7a856b8521cba9 namespace=k8s.io Jul 14 22:21:57.210136 containerd[1464]: time="2025-07-14T22:21:57.210066398Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:21:57.434064 systemd[1]: Created slice kubepods-burstable-podb0aee35e_b810_44e1_8f6a_22ac05756d20.slice - libcontainer container kubepods-burstable-podb0aee35e_b810_44e1_8f6a_22ac05756d20.slice. Jul 14 22:21:57.442658 systemd[1]: Created slice kubepods-besteffort-pod6cdfbfb8_7af6_4e0b_8036_ff4c455ca4f3.slice - libcontainer container kubepods-besteffort-pod6cdfbfb8_7af6_4e0b_8036_ff4c455ca4f3.slice. 
Jul 14 22:21:57.447849 systemd[1]: Created slice kubepods-besteffort-pod3b937dec_5b75_47a6_9753_367ccbffbb4f.slice - libcontainer container kubepods-besteffort-pod3b937dec_5b75_47a6_9753_367ccbffbb4f.slice. Jul 14 22:21:57.453614 systemd[1]: Created slice kubepods-besteffort-pod24b165d4_dcd3_454b_b444_600d3f259636.slice - libcontainer container kubepods-besteffort-pod24b165d4_dcd3_454b_b444_600d3f259636.slice. Jul 14 22:21:57.459276 systemd[1]: Created slice kubepods-besteffort-poddbf074a5_827e_4266_aa97_41efb3b0eb87.slice - libcontainer container kubepods-besteffort-poddbf074a5_827e_4266_aa97_41efb3b0eb87.slice. Jul 14 22:21:57.464959 systemd[1]: Created slice kubepods-besteffort-pod233c9654_f369_4545_aeb8_b29c6d794c17.slice - libcontainer container kubepods-besteffort-pod233c9654_f369_4545_aeb8_b29c6d794c17.slice. Jul 14 22:21:57.471125 systemd[1]: Created slice kubepods-burstable-pod1d877bca_eddc_4eb9_ba2d_007238222f97.slice - libcontainer container kubepods-burstable-pod1d877bca_eddc_4eb9_ba2d_007238222f97.slice. Jul 14 22:21:57.517323 kubelet[2543]: I0714 22:21:57.517277 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvxqc\" (UniqueName: \"kubernetes.io/projected/b0aee35e-b810-44e1-8f6a-22ac05756d20-kube-api-access-nvxqc\") pod \"coredns-668d6bf9bc-vgqdz\" (UID: \"b0aee35e-b810-44e1-8f6a-22ac05756d20\") " pod="kube-system/coredns-668d6bf9bc-vgqdz" Jul 14 22:21:57.517323 kubelet[2543]: I0714 22:21:57.517327 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0aee35e-b810-44e1-8f6a-22ac05756d20-config-volume\") pod \"coredns-668d6bf9bc-vgqdz\" (UID: \"b0aee35e-b810-44e1-8f6a-22ac05756d20\") " pod="kube-system/coredns-668d6bf9bc-vgqdz" Jul 14 22:21:57.589690 systemd[1]: Created slice kubepods-besteffort-poda0d7e0a1_9365_4ef9_a68a_5541a9cd6ec7.slice - libcontainer container kubepods-besteffort-poda0d7e0a1_9365_4ef9_a68a_5541a9cd6ec7.slice. 
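Every RunPodSandbox attempt below fails the same way: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico-node writes once it is running, and until it exists the plugin refuses both add and delete operations. The check amounts to the following sketch, with the path and advice string taken verbatim from the errors that follow:

```go
package main

import (
	"fmt"
	"os"
)

// calico-node writes this file when it starts; the CNI plugin treats
// its absence as "node not ready" and fails every sandbox operation.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	if _, err := os.Stat(nodenameFile); err != nil {
		fmt.Printf("plugin type=\"calico\" failed: %v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		return
	}
	fmt.Println("calico/node is up; CNI add/delete can proceed")
}
```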
Jul 14 22:21:57.592209 containerd[1464]: time="2025-07-14T22:21:57.592162774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xmfp,Uid:a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7,Namespace:calico-system,Attempt:0,}" Jul 14 22:21:57.618530 kubelet[2543]: I0714 22:21:57.618484 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtrcq\" (UniqueName: \"kubernetes.io/projected/24b165d4-dcd3-454b-b444-600d3f259636-kube-api-access-xtrcq\") pod \"calico-apiserver-8cbd7cb79-jt55m\" (UID: \"24b165d4-dcd3-454b-b444-600d3f259636\") " pod="calico-apiserver/calico-apiserver-8cbd7cb79-jt55m" Jul 14 22:21:57.618530 kubelet[2543]: I0714 22:21:57.618533 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8kvm\" (UniqueName: \"kubernetes.io/projected/233c9654-f369-4545-aeb8-b29c6d794c17-kube-api-access-j8kvm\") pod \"calico-kube-controllers-66cb745f54-d6cjg\" (UID: \"233c9654-f369-4545-aeb8-b29c6d794c17\") " pod="calico-system/calico-kube-controllers-66cb745f54-d6cjg" Jul 14 22:21:57.618530 kubelet[2543]: I0714 22:21:57.618553 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kjnc\" (UniqueName: \"kubernetes.io/projected/3b937dec-5b75-47a6-9753-367ccbffbb4f-kube-api-access-8kjnc\") pod \"goldmane-768f4c5c69-gcjz6\" (UID: \"3b937dec-5b75-47a6-9753-367ccbffbb4f\") " pod="calico-system/goldmane-768f4c5c69-gcjz6" Jul 14 22:21:57.618757 kubelet[2543]: I0714 22:21:57.618569 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dbf074a5-827e-4266-aa97-41efb3b0eb87-whisker-backend-key-pair\") pod \"whisker-54bf954d5f-9lmrc\" (UID: \"dbf074a5-827e-4266-aa97-41efb3b0eb87\") " pod="calico-system/whisker-54bf954d5f-9lmrc" Jul 14 22:21:57.618757 kubelet[2543]: I0714 22:21:57.618583 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvsxf\" (UniqueName: \"kubernetes.io/projected/dbf074a5-827e-4266-aa97-41efb3b0eb87-kube-api-access-rvsxf\") pod \"whisker-54bf954d5f-9lmrc\" (UID: \"dbf074a5-827e-4266-aa97-41efb3b0eb87\") " pod="calico-system/whisker-54bf954d5f-9lmrc" Jul 14 22:21:57.618757 kubelet[2543]: I0714 22:21:57.618598 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v4cm\" (UniqueName: \"kubernetes.io/projected/1d877bca-eddc-4eb9-ba2d-007238222f97-kube-api-access-4v4cm\") pod \"coredns-668d6bf9bc-hlwkw\" (UID: \"1d877bca-eddc-4eb9-ba2d-007238222f97\") " pod="kube-system/coredns-668d6bf9bc-hlwkw" Jul 14 22:21:57.618757 kubelet[2543]: I0714 22:21:57.618654 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/233c9654-f369-4545-aeb8-b29c6d794c17-tigera-ca-bundle\") pod \"calico-kube-controllers-66cb745f54-d6cjg\" (UID: \"233c9654-f369-4545-aeb8-b29c6d794c17\") " pod="calico-system/calico-kube-controllers-66cb745f54-d6cjg" Jul 14 22:21:57.618757 kubelet[2543]: I0714 22:21:57.618677 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b937dec-5b75-47a6-9753-367ccbffbb4f-config\") pod \"goldmane-768f4c5c69-gcjz6\" (UID: 
\"3b937dec-5b75-47a6-9753-367ccbffbb4f\") " pod="calico-system/goldmane-768f4c5c69-gcjz6" Jul 14 22:21:57.618924 kubelet[2543]: I0714 22:21:57.618758 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b937dec-5b75-47a6-9753-367ccbffbb4f-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-gcjz6\" (UID: \"3b937dec-5b75-47a6-9753-367ccbffbb4f\") " pod="calico-system/goldmane-768f4c5c69-gcjz6" Jul 14 22:21:57.618924 kubelet[2543]: I0714 22:21:57.618794 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3-calico-apiserver-certs\") pod \"calico-apiserver-8cbd7cb79-zzkpk\" (UID: \"6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3\") " pod="calico-apiserver/calico-apiserver-8cbd7cb79-zzkpk" Jul 14 22:21:57.618924 kubelet[2543]: I0714 22:21:57.618816 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khpjq\" (UniqueName: \"kubernetes.io/projected/6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3-kube-api-access-khpjq\") pod \"calico-apiserver-8cbd7cb79-zzkpk\" (UID: \"6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3\") " pod="calico-apiserver/calico-apiserver-8cbd7cb79-zzkpk" Jul 14 22:21:57.618924 kubelet[2543]: I0714 22:21:57.618852 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/24b165d4-dcd3-454b-b444-600d3f259636-calico-apiserver-certs\") pod \"calico-apiserver-8cbd7cb79-jt55m\" (UID: \"24b165d4-dcd3-454b-b444-600d3f259636\") " pod="calico-apiserver/calico-apiserver-8cbd7cb79-jt55m" Jul 14 22:21:57.618924 kubelet[2543]: I0714 22:21:57.618872 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbf074a5-827e-4266-aa97-41efb3b0eb87-whisker-ca-bundle\") pod \"whisker-54bf954d5f-9lmrc\" (UID: \"dbf074a5-827e-4266-aa97-41efb3b0eb87\") " pod="calico-system/whisker-54bf954d5f-9lmrc" Jul 14 22:21:57.619084 kubelet[2543]: I0714 22:21:57.618892 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3b937dec-5b75-47a6-9753-367ccbffbb4f-goldmane-key-pair\") pod \"goldmane-768f4c5c69-gcjz6\" (UID: \"3b937dec-5b75-47a6-9753-367ccbffbb4f\") " pod="calico-system/goldmane-768f4c5c69-gcjz6" Jul 14 22:21:57.619084 kubelet[2543]: I0714 22:21:57.618911 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d877bca-eddc-4eb9-ba2d-007238222f97-config-volume\") pod \"coredns-668d6bf9bc-hlwkw\" (UID: \"1d877bca-eddc-4eb9-ba2d-007238222f97\") " pod="kube-system/coredns-668d6bf9bc-hlwkw" Jul 14 22:21:57.663393 containerd[1464]: time="2025-07-14T22:21:57.663343520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 14 22:21:57.737805 kubelet[2543]: E0714 22:21:57.737775 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:57.740853 containerd[1464]: time="2025-07-14T22:21:57.740268930Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-vgqdz,Uid:b0aee35e-b810-44e1-8f6a-22ac05756d20,Namespace:kube-system,Attempt:0,}" Jul 14 22:21:57.909793 containerd[1464]: time="2025-07-14T22:21:57.909654040Z" level=error msg="Failed to destroy network for sandbox \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:57.910216 containerd[1464]: time="2025-07-14T22:21:57.910190197Z" level=error msg="encountered an error cleaning up failed sandbox \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:57.910278 containerd[1464]: time="2025-07-14T22:21:57.910256060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xmfp,Uid:a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:57.911043 containerd[1464]: time="2025-07-14T22:21:57.910496111Z" level=error msg="Failed to destroy network for sandbox \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:57.911043 containerd[1464]: time="2025-07-14T22:21:57.910836230Z" level=error msg="encountered an error cleaning up failed sandbox \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:57.911043 containerd[1464]: time="2025-07-14T22:21:57.910873419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vgqdz,Uid:b0aee35e-b810-44e1-8f6a-22ac05756d20,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:57.917872 kubelet[2543]: E0714 22:21:57.917829 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:57.917932 kubelet[2543]: E0714 22:21:57.917895 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vgqdz" Jul 14 22:21:57.917932 kubelet[2543]: E0714 22:21:57.917915 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vgqdz" Jul 14 22:21:57.918031 kubelet[2543]: E0714 22:21:57.917829 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:57.918031 kubelet[2543]: E0714 22:21:57.917950 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vgqdz_kube-system(b0aee35e-b810-44e1-8f6a-22ac05756d20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vgqdz_kube-system(b0aee35e-b810-44e1-8f6a-22ac05756d20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vgqdz" podUID="b0aee35e-b810-44e1-8f6a-22ac05756d20" Jul 14 22:21:57.918031 kubelet[2543]: E0714 22:21:57.917966 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7xmfp" Jul 14 22:21:57.918193 kubelet[2543]: E0714 22:21:57.918011 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7xmfp" Jul 14 22:21:57.918193 kubelet[2543]: E0714 22:21:57.918053 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7xmfp_calico-system(a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7xmfp_calico-system(a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7xmfp" podUID="a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7" Jul 14 22:21:57.939581 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77-shm.mount: Deactivated successfully. Jul 14 22:21:58.046533 containerd[1464]: time="2025-07-14T22:21:58.046428788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cbd7cb79-zzkpk,Uid:6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3,Namespace:calico-apiserver,Attempt:0,}" Jul 14 22:21:58.051161 containerd[1464]: time="2025-07-14T22:21:58.051137237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-gcjz6,Uid:3b937dec-5b75-47a6-9753-367ccbffbb4f,Namespace:calico-system,Attempt:0,}" Jul 14 22:21:58.056986 containerd[1464]: time="2025-07-14T22:21:58.056951742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cbd7cb79-jt55m,Uid:24b165d4-dcd3-454b-b444-600d3f259636,Namespace:calico-apiserver,Attempt:0,}" Jul 14 22:21:58.062154 containerd[1464]: time="2025-07-14T22:21:58.062128730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54bf954d5f-9lmrc,Uid:dbf074a5-827e-4266-aa97-41efb3b0eb87,Namespace:calico-system,Attempt:0,}" Jul 14 22:21:58.068593 containerd[1464]: time="2025-07-14T22:21:58.068565162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cb745f54-d6cjg,Uid:233c9654-f369-4545-aeb8-b29c6d794c17,Namespace:calico-system,Attempt:0,}" Jul 14 22:21:58.074853 kubelet[2543]: E0714 22:21:58.074818 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:21:58.075175 containerd[1464]: time="2025-07-14T22:21:58.075155162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hlwkw,Uid:1d877bca-eddc-4eb9-ba2d-007238222f97,Namespace:kube-system,Attempt:0,}" Jul 14 22:21:58.252040 containerd[1464]: time="2025-07-14T22:21:58.251967109Z" level=error msg="Failed to destroy network for sandbox \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.253085 containerd[1464]: time="2025-07-14T22:21:58.252437251Z" level=error msg="encountered an error cleaning up failed sandbox \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.253085 containerd[1464]: time="2025-07-14T22:21:58.252483839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cbd7cb79-zzkpk,Uid:6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 
22:21:58.253188 kubelet[2543]: E0714 22:21:58.252768 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.253188 kubelet[2543]: E0714 22:21:58.252843 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cbd7cb79-zzkpk" Jul 14 22:21:58.253188 kubelet[2543]: E0714 22:21:58.252880 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cbd7cb79-zzkpk" Jul 14 22:21:58.253358 kubelet[2543]: E0714 22:21:58.252918 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8cbd7cb79-zzkpk_calico-apiserver(6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8cbd7cb79-zzkpk_calico-apiserver(6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cbd7cb79-zzkpk" podUID="6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3" Jul 14 22:21:58.260190 containerd[1464]: time="2025-07-14T22:21:58.259826091Z" level=error msg="Failed to destroy network for sandbox \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.260510 containerd[1464]: time="2025-07-14T22:21:58.260460662Z" level=error msg="encountered an error cleaning up failed sandbox \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.260666 containerd[1464]: time="2025-07-14T22:21:58.260541685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-gcjz6,Uid:3b937dec-5b75-47a6-9753-367ccbffbb4f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.261887 kubelet[2543]: E0714 22:21:58.260890 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.261887 kubelet[2543]: E0714 22:21:58.260955 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-gcjz6" Jul 14 22:21:58.261887 kubelet[2543]: E0714 22:21:58.260986 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-gcjz6" Jul 14 22:21:58.262038 kubelet[2543]: E0714 22:21:58.261031 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-gcjz6_calico-system(3b937dec-5b75-47a6-9753-367ccbffbb4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-gcjz6_calico-system(3b937dec-5b75-47a6-9753-367ccbffbb4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-gcjz6" podUID="3b937dec-5b75-47a6-9753-367ccbffbb4f" Jul 14 22:21:58.295466 containerd[1464]: time="2025-07-14T22:21:58.295417221Z" level=error msg="Failed to destroy network for sandbox \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.297891 containerd[1464]: time="2025-07-14T22:21:58.297352014Z" level=error msg="Failed to destroy network for sandbox \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.297891 containerd[1464]: time="2025-07-14T22:21:58.297781350Z" level=error msg="encountered an error cleaning up failed sandbox \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 14 22:21:58.297891 containerd[1464]: time="2025-07-14T22:21:58.297822367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54bf954d5f-9lmrc,Uid:dbf074a5-827e-4266-aa97-41efb3b0eb87,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.300265 kubelet[2543]: E0714 22:21:58.300186 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.300341 kubelet[2543]: E0714 22:21:58.300317 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54bf954d5f-9lmrc" Jul 14 22:21:58.300378 kubelet[2543]: E0714 22:21:58.300344 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54bf954d5f-9lmrc" Jul 14 22:21:58.300437 kubelet[2543]: E0714 22:21:58.300396 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54bf954d5f-9lmrc_calico-system(dbf074a5-827e-4266-aa97-41efb3b0eb87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54bf954d5f-9lmrc_calico-system(dbf074a5-827e-4266-aa97-41efb3b0eb87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54bf954d5f-9lmrc" podUID="dbf074a5-827e-4266-aa97-41efb3b0eb87" Jul 14 22:21:58.301844 containerd[1464]: time="2025-07-14T22:21:58.301815904Z" level=error msg="Failed to destroy network for sandbox \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.302041 containerd[1464]: time="2025-07-14T22:21:58.301885344Z" level=error msg="Failed to destroy network for sandbox \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 14 22:21:58.302327 containerd[1464]: time="2025-07-14T22:21:58.302302426Z" level=error msg="encountered an error cleaning up failed sandbox \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.302381 containerd[1464]: time="2025-07-14T22:21:58.302351719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cb745f54-d6cjg,Uid:233c9654-f369-4545-aeb8-b29c6d794c17,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.302610 containerd[1464]: time="2025-07-14T22:21:58.302411241Z" level=error msg="encountered an error cleaning up failed sandbox \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.302610 containerd[1464]: time="2025-07-14T22:21:58.302457859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hlwkw,Uid:1d877bca-eddc-4eb9-ba2d-007238222f97,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.302671 kubelet[2543]: E0714 22:21:58.302481 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.302671 kubelet[2543]: E0714 22:21:58.302509 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66cb745f54-d6cjg" Jul 14 22:21:58.302671 kubelet[2543]: E0714 22:21:58.302523 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66cb745f54-d6cjg" Jul 14 22:21:58.302757 kubelet[2543]: E0714 22:21:58.302552 2543 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66cb745f54-d6cjg_calico-system(233c9654-f369-4545-aeb8-b29c6d794c17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66cb745f54-d6cjg_calico-system(233c9654-f369-4545-aeb8-b29c6d794c17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66cb745f54-d6cjg" podUID="233c9654-f369-4545-aeb8-b29c6d794c17" Jul 14 22:21:58.302757 kubelet[2543]: E0714 22:21:58.302583 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.302757 kubelet[2543]: E0714 22:21:58.302611 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hlwkw" Jul 14 22:21:58.302852 kubelet[2543]: E0714 22:21:58.302626 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hlwkw" Jul 14 22:21:58.302852 kubelet[2543]: E0714 22:21:58.302654 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hlwkw_kube-system(1d877bca-eddc-4eb9-ba2d-007238222f97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hlwkw_kube-system(1d877bca-eddc-4eb9-ba2d-007238222f97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hlwkw" podUID="1d877bca-eddc-4eb9-ba2d-007238222f97" Jul 14 22:21:58.309511 containerd[1464]: time="2025-07-14T22:21:58.309461886Z" level=error msg="encountered an error cleaning up failed sandbox \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.309560 containerd[1464]: time="2025-07-14T22:21:58.309535514Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cbd7cb79-jt55m,Uid:24b165d4-dcd3-454b-b444-600d3f259636,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.309702 kubelet[2543]: E0714 22:21:58.309675 2543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.309749 kubelet[2543]: E0714 22:21:58.309702 2543 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cbd7cb79-jt55m" Jul 14 22:21:58.309749 kubelet[2543]: E0714 22:21:58.309716 2543 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cbd7cb79-jt55m" Jul 14 22:21:58.309912 kubelet[2543]: E0714 22:21:58.309746 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8cbd7cb79-jt55m_calico-apiserver(24b165d4-dcd3-454b-b444-600d3f259636)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8cbd7cb79-jt55m_calico-apiserver(24b165d4-dcd3-454b-b444-600d3f259636)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cbd7cb79-jt55m" podUID="24b165d4-dcd3-454b-b444-600d3f259636" Jul 14 22:21:58.664736 kubelet[2543]: I0714 22:21:58.664606 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Jul 14 22:21:58.665499 kubelet[2543]: I0714 22:21:58.665483 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Jul 14 22:21:58.667251 kubelet[2543]: I0714 22:21:58.667215 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Jul 14 22:21:58.669475 kubelet[2543]: I0714 22:21:58.669444 2543 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Jul 14 22:21:58.683280 containerd[1464]: time="2025-07-14T22:21:58.682948504Z" level=info msg="StopPodSandbox for \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\"" Jul 14 22:21:58.683280 containerd[1464]: time="2025-07-14T22:21:58.683064923Z" level=info msg="StopPodSandbox for \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\"" Jul 14 22:21:58.683454 kubelet[2543]: I0714 22:21:58.683348 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Jul 14 22:21:58.684625 containerd[1464]: time="2025-07-14T22:21:58.684585988Z" level=info msg="StopPodSandbox for \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\"" Jul 14 22:21:58.685434 containerd[1464]: time="2025-07-14T22:21:58.685402600Z" level=info msg="StopPodSandbox for \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\"" Jul 14 22:21:58.686652 containerd[1464]: time="2025-07-14T22:21:58.686594097Z" level=info msg="Ensure that sandbox 53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77 in task-service has been cleanup successfully" Jul 14 22:21:58.686652 containerd[1464]: time="2025-07-14T22:21:58.686643260Z" level=info msg="Ensure that sandbox 1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3 in task-service has been cleanup successfully" Jul 14 22:21:58.692847 containerd[1464]: time="2025-07-14T22:21:58.692801239Z" level=info msg="StopPodSandbox for \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\"" Jul 14 22:21:58.692993 containerd[1464]: time="2025-07-14T22:21:58.692968443Z" level=info msg="Ensure that sandbox 660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d in task-service has been cleanup successfully" Jul 14 22:21:58.694858 containerd[1464]: time="2025-07-14T22:21:58.694759656Z" level=info msg="Ensure that sandbox 837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00 in task-service has been cleanup successfully" Jul 14 22:21:58.696691 containerd[1464]: time="2025-07-14T22:21:58.694818596Z" level=info msg="Ensure that sandbox 7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34 in task-service has been cleanup successfully" Jul 14 22:21:58.696800 kubelet[2543]: I0714 22:21:58.696767 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Jul 14 22:21:58.698009 containerd[1464]: time="2025-07-14T22:21:58.697838465Z" level=info msg="StopPodSandbox for \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\"" Jul 14 22:21:58.698734 containerd[1464]: time="2025-07-14T22:21:58.698682470Z" level=info msg="Ensure that sandbox 2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383 in task-service has been cleanup successfully" Jul 14 22:21:58.700759 kubelet[2543]: I0714 22:21:58.700707 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Jul 14 22:21:58.702706 containerd[1464]: time="2025-07-14T22:21:58.702675885Z" level=info msg="StopPodSandbox for \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\"" Jul 14 22:21:58.703079 containerd[1464]: time="2025-07-14T22:21:58.702984806Z" level=info msg="Ensure that sandbox 928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af 
in task-service has been cleanup successfully" Jul 14 22:21:58.705541 kubelet[2543]: I0714 22:21:58.705519 2543 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Jul 14 22:21:58.706483 containerd[1464]: time="2025-07-14T22:21:58.706458246Z" level=info msg="StopPodSandbox for \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\"" Jul 14 22:21:58.706684 containerd[1464]: time="2025-07-14T22:21:58.706667208Z" level=info msg="Ensure that sandbox 7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f in task-service has been cleanup successfully" Jul 14 22:21:58.752607 containerd[1464]: time="2025-07-14T22:21:58.752542844Z" level=error msg="StopPodSandbox for \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\" failed" error="failed to destroy network for sandbox \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.753019 containerd[1464]: time="2025-07-14T22:21:58.752840854Z" level=error msg="StopPodSandbox for \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\" failed" error="failed to destroy network for sandbox \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.753463 kubelet[2543]: E0714 22:21:58.753423 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Jul 14 22:21:58.753619 kubelet[2543]: E0714 22:21:58.753570 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34"} Jul 14 22:21:58.753716 kubelet[2543]: E0714 22:21:58.753698 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"233c9654-f369-4545-aeb8-b29c6d794c17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:21:58.753832 kubelet[2543]: E0714 22:21:58.753815 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"233c9654-f369-4545-aeb8-b29c6d794c17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-66cb745f54-d6cjg" podUID="233c9654-f369-4545-aeb8-b29c6d794c17" Jul 14 22:21:58.754062 kubelet[2543]: E0714 22:21:58.754041 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Jul 14 22:21:58.754144 kubelet[2543]: E0714 22:21:58.754132 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77"} Jul 14 22:21:58.754208 kubelet[2543]: E0714 22:21:58.754196 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:21:58.754308 kubelet[2543]: E0714 22:21:58.754292 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7xmfp" podUID="a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7" Jul 14 22:21:58.758562 containerd[1464]: time="2025-07-14T22:21:58.758509525Z" level=error msg="StopPodSandbox for \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\" failed" error="failed to destroy network for sandbox \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.759049 kubelet[2543]: E0714 22:21:58.758895 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Jul 14 22:21:58.759049 kubelet[2543]: E0714 22:21:58.758949 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d"} Jul 14 22:21:58.759049 kubelet[2543]: E0714 22:21:58.758994 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:21:58.759049 kubelet[2543]: E0714 22:21:58.759020 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cbd7cb79-zzkpk" podUID="6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3" Jul 14 22:21:58.761049 containerd[1464]: time="2025-07-14T22:21:58.760999739Z" level=error msg="StopPodSandbox for \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\" failed" error="failed to destroy network for sandbox \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.761371 kubelet[2543]: E0714 22:21:58.761340 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Jul 14 22:21:58.761474 kubelet[2543]: E0714 22:21:58.761457 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3"} Jul 14 22:21:58.761550 kubelet[2543]: E0714 22:21:58.761537 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbf074a5-827e-4266-aa97-41efb3b0eb87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:21:58.761647 kubelet[2543]: E0714 22:21:58.761631 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dbf074a5-827e-4266-aa97-41efb3b0eb87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54bf954d5f-9lmrc" podUID="dbf074a5-827e-4266-aa97-41efb3b0eb87" Jul 14 22:21:58.762470 containerd[1464]: time="2025-07-14T22:21:58.762448590Z" level=error msg="StopPodSandbox for 
\"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\" failed" error="failed to destroy network for sandbox \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.762675 kubelet[2543]: E0714 22:21:58.762656 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Jul 14 22:21:58.762757 kubelet[2543]: E0714 22:21:58.762745 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00"} Jul 14 22:21:58.762821 kubelet[2543]: E0714 22:21:58.762808 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0aee35e-b810-44e1-8f6a-22ac05756d20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:21:58.762907 kubelet[2543]: E0714 22:21:58.762891 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0aee35e-b810-44e1-8f6a-22ac05756d20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vgqdz" podUID="b0aee35e-b810-44e1-8f6a-22ac05756d20" Jul 14 22:21:58.766199 containerd[1464]: time="2025-07-14T22:21:58.766090536Z" level=error msg="StopPodSandbox for \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\" failed" error="failed to destroy network for sandbox \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.766339 kubelet[2543]: E0714 22:21:58.766282 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Jul 14 22:21:58.766339 kubelet[2543]: E0714 22:21:58.766304 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383"} Jul 14 22:21:58.766339 kubelet[2543]: E0714 22:21:58.766324 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d877bca-eddc-4eb9-ba2d-007238222f97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:21:58.766516 kubelet[2543]: E0714 22:21:58.766340 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d877bca-eddc-4eb9-ba2d-007238222f97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hlwkw" podUID="1d877bca-eddc-4eb9-ba2d-007238222f97" Jul 14 22:21:58.768645 containerd[1464]: time="2025-07-14T22:21:58.768603794Z" level=error msg="StopPodSandbox for \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\" failed" error="failed to destroy network for sandbox \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.768781 kubelet[2543]: E0714 22:21:58.768742 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Jul 14 22:21:58.768781 kubelet[2543]: E0714 22:21:58.768777 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af"} Jul 14 22:21:58.768841 kubelet[2543]: E0714 22:21:58.768798 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24b165d4-dcd3-454b-b444-600d3f259636\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:21:58.768841 kubelet[2543]: E0714 22:21:58.768815 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24b165d4-dcd3-454b-b444-600d3f259636\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cbd7cb79-jt55m" podUID="24b165d4-dcd3-454b-b444-600d3f259636" Jul 14 22:21:58.771252 containerd[1464]: time="2025-07-14T22:21:58.771203353Z" level=error msg="StopPodSandbox for \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\" failed" error="failed to destroy network for sandbox \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:21:58.771423 kubelet[2543]: E0714 22:21:58.771391 2543 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Jul 14 22:21:58.771423 kubelet[2543]: E0714 22:21:58.771416 2543 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f"} Jul 14 22:21:58.771482 kubelet[2543]: E0714 22:21:58.771435 2543 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b937dec-5b75-47a6-9753-367ccbffbb4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:21:58.771482 kubelet[2543]: E0714 22:21:58.771453 2543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b937dec-5b75-47a6-9753-367ccbffbb4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-gcjz6" podUID="3b937dec-5b75-47a6-9753-367ccbffbb4f" Jul 14 22:21:58.940770 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af-shm.mount: Deactivated successfully. Jul 14 22:21:58.940882 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f-shm.mount: Deactivated successfully. Jul 14 22:21:58.940969 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d-shm.mount: Deactivated successfully. 
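The storm of identical failures above has a single root cause: the Calico CNI plugin gates every ADD/DEL on /var/lib/calico/nodename, a file the calico/node container writes to a hostPath mount once it is actually running. Until calico-node comes up, the kubelet's retries can only reproduce the same stat error for every pending pod. Below is a minimal Go sketch of that gate, illustrative rather than Calico's actual source; only the stat-then-read pattern and the error wording are taken from the log:

```go
// Minimal sketch of the readiness gate behind the errors above. Illustrative
// only (not Calico's source): the CNI plugin stats the nodename file that
// calico/node writes once running, and refuses to do any network setup or
// teardown until it exists.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // written by the calico/node container

// nodename returns the recorded node name, or the exact class of error seen
// throughout the log when the file is missing.
func nodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// err renders as "stat /var/lib/calico/nodename: no such file or directory"
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, "CNI add/delete would fail:", err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```

Note that both the add path (RunPodSandbox) and the delete path (StopPodSandbox) hit the same check, which is why even cleanup of the already-failed sandboxes keeps failing until calico-node starts.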
Jul 14 22:22:01.255299 kubelet[2543]: I0714 22:22:01.255217 2543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:22:01.255782 kubelet[2543]: E0714 22:22:01.255569 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:22:01.711927 kubelet[2543]: E0714 22:22:01.711788 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:22:04.455202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926488766.mount: Deactivated successfully. Jul 14 22:22:05.784163 containerd[1464]: time="2025-07-14T22:22:05.784103706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:05.785269 containerd[1464]: time="2025-07-14T22:22:05.785201683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 14 22:22:05.787008 containerd[1464]: time="2025-07-14T22:22:05.786967318Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:05.790488 containerd[1464]: time="2025-07-14T22:22:05.790441619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:05.791064 containerd[1464]: time="2025-07-14T22:22:05.791019644Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 8.127618836s" Jul 14 22:22:05.791126 containerd[1464]: time="2025-07-14T22:22:05.791066896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 14 22:22:05.801539 containerd[1464]: time="2025-07-14T22:22:05.801484989Z" level=info msg="CreateContainer within sandbox \"9ae2b239c011d31933eabfe231b9d51958d6853392fdeb2f9a52ce9bd6816b14\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 14 22:22:05.821801 containerd[1464]: time="2025-07-14T22:22:05.821753539Z" level=info msg="CreateContainer within sandbox \"9ae2b239c011d31933eabfe231b9d51958d6853392fdeb2f9a52ce9bd6816b14\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"345fe0e00d5ace14b4dea2b68103bd63cd1ff4495d394c64068f71dfd6189e1c\"" Jul 14 22:22:05.822348 containerd[1464]: time="2025-07-14T22:22:05.822329410Z" level=info msg="StartContainer for \"345fe0e00d5ace14b4dea2b68103bd63cd1ff4495d394c64068f71dfd6189e1c\"" Jul 14 22:22:05.874380 systemd[1]: Started cri-containerd-345fe0e00d5ace14b4dea2b68103bd63cd1ff4495d394c64068f71dfd6189e1c.scope - libcontainer container 345fe0e00d5ace14b4dea2b68103bd63cd1ff4495d394c64068f71dfd6189e1c. 
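A few records up, the kubelet twice logs "Nameserver limits exceeded" (dns.go:153): the node's resolv.conf carries more nameservers than the classic three-entry resolver limit, so only "1.1.1.1 1.0.0.1 8.8.8.8" is applied and the rest are omitted. A rough Go sketch of that clamp follows; the three-entry cap matches the libc limit the kubelet enforces, but the code is illustrative, not the kubelet's, and the fourth server is a hypothetical stand-in for whatever entry was dropped:

```go
// Rough sketch of the clamp behind the "Nameserver limits exceeded" records.
// Illustrative only: keeps the first three nameservers (the classic
// resolv.conf/MAXNS limit) and reports the line actually applied.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3

func clampNameservers(ns []string) []string {
	if len(ns) <= maxNameservers {
		return ns
	}
	applied := ns[:maxNameservers]
	fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
		strings.Join(applied, " "))
	return applied
}

func main() {
	// "8.8.4.4" is a hypothetical fourth entry; the log does not say which
	// nameserver was actually omitted.
	clampNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
}
```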
Jul 14 22:22:06.010540 containerd[1464]: time="2025-07-14T22:22:06.010449935Z" level=info msg="StartContainer for \"345fe0e00d5ace14b4dea2b68103bd63cd1ff4495d394c64068f71dfd6189e1c\" returns successfully" Jul 14 22:22:06.042346 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 14 22:22:06.042953 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 14 22:22:06.110992 containerd[1464]: time="2025-07-14T22:22:06.110930265Z" level=info msg="StopPodSandbox for \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\"" Jul 14 22:22:06.249052 containerd[1464]: 2025-07-14 22:22:06.166 [INFO][3808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Jul 14 22:22:06.249052 containerd[1464]: 2025-07-14 22:22:06.166 [INFO][3808] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" iface="eth0" netns="/var/run/netns/cni-e2954517-b854-a3cd-7b9f-9979da934e17" Jul 14 22:22:06.249052 containerd[1464]: 2025-07-14 22:22:06.166 [INFO][3808] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" iface="eth0" netns="/var/run/netns/cni-e2954517-b854-a3cd-7b9f-9979da934e17" Jul 14 22:22:06.249052 containerd[1464]: 2025-07-14 22:22:06.167 [INFO][3808] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" iface="eth0" netns="/var/run/netns/cni-e2954517-b854-a3cd-7b9f-9979da934e17" Jul 14 22:22:06.249052 containerd[1464]: 2025-07-14 22:22:06.167 [INFO][3808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Jul 14 22:22:06.249052 containerd[1464]: 2025-07-14 22:22:06.167 [INFO][3808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Jul 14 22:22:06.249052 containerd[1464]: 2025-07-14 22:22:06.235 [INFO][3820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" HandleID="k8s-pod-network.1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Workload="localhost-k8s-whisker--54bf954d5f--9lmrc-eth0" Jul 14 22:22:06.249052 containerd[1464]: 2025-07-14 22:22:06.236 [INFO][3820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:06.249052 containerd[1464]: 2025-07-14 22:22:06.236 [INFO][3820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:06.249052 containerd[1464]: 2025-07-14 22:22:06.242 [WARNING][3820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" HandleID="k8s-pod-network.1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Workload="localhost-k8s-whisker--54bf954d5f--9lmrc-eth0" Jul 14 22:22:06.249052 containerd[1464]: 2025-07-14 22:22:06.242 [INFO][3820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" HandleID="k8s-pod-network.1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Workload="localhost-k8s-whisker--54bf954d5f--9lmrc-eth0" Jul 14 22:22:06.249052 containerd[1464]: 2025-07-14 22:22:06.243 [INFO][3820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:06.249052 containerd[1464]: 2025-07-14 22:22:06.246 [INFO][3808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Jul 14 22:22:06.249451 containerd[1464]: time="2025-07-14T22:22:06.249202079Z" level=info msg="TearDown network for sandbox \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\" successfully" Jul 14 22:22:06.249451 containerd[1464]: time="2025-07-14T22:22:06.249250863Z" level=info msg="StopPodSandbox for \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\" returns successfully" Jul 14 22:22:06.267423 kubelet[2543]: I0714 22:22:06.267375 2543 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbf074a5-827e-4266-aa97-41efb3b0eb87-whisker-ca-bundle\") pod \"dbf074a5-827e-4266-aa97-41efb3b0eb87\" (UID: \"dbf074a5-827e-4266-aa97-41efb3b0eb87\") " Jul 14 22:22:06.267423 kubelet[2543]: I0714 22:22:06.267423 2543 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dbf074a5-827e-4266-aa97-41efb3b0eb87-whisker-backend-key-pair\") pod \"dbf074a5-827e-4266-aa97-41efb3b0eb87\" (UID: \"dbf074a5-827e-4266-aa97-41efb3b0eb87\") " Jul 14 22:22:06.267925 kubelet[2543]: I0714 22:22:06.267453 2543 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvsxf\" (UniqueName: \"kubernetes.io/projected/dbf074a5-827e-4266-aa97-41efb3b0eb87-kube-api-access-rvsxf\") pod \"dbf074a5-827e-4266-aa97-41efb3b0eb87\" (UID: \"dbf074a5-827e-4266-aa97-41efb3b0eb87\") " Jul 14 22:22:06.268135 kubelet[2543]: I0714 22:22:06.268092 2543 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbf074a5-827e-4266-aa97-41efb3b0eb87-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dbf074a5-827e-4266-aa97-41efb3b0eb87" (UID: "dbf074a5-827e-4266-aa97-41efb3b0eb87"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 22:22:06.272317 kubelet[2543]: I0714 22:22:06.272277 2543 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf074a5-827e-4266-aa97-41efb3b0eb87-kube-api-access-rvsxf" (OuterVolumeSpecName: "kube-api-access-rvsxf") pod "dbf074a5-827e-4266-aa97-41efb3b0eb87" (UID: "dbf074a5-827e-4266-aa97-41efb3b0eb87"). InnerVolumeSpecName "kube-api-access-rvsxf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 22:22:06.272376 kubelet[2543]: I0714 22:22:06.272339 2543 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf074a5-827e-4266-aa97-41efb3b0eb87-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dbf074a5-827e-4266-aa97-41efb3b0eb87" (UID: "dbf074a5-827e-4266-aa97-41efb3b0eb87"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 22:22:06.368729 kubelet[2543]: I0714 22:22:06.368601 2543 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dbf074a5-827e-4266-aa97-41efb3b0eb87-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 14 22:22:06.368729 kubelet[2543]: I0714 22:22:06.368628 2543 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rvsxf\" (UniqueName: \"kubernetes.io/projected/dbf074a5-827e-4266-aa97-41efb3b0eb87-kube-api-access-rvsxf\") on node \"localhost\" DevicePath \"\"" Jul 14 22:22:06.368729 kubelet[2543]: I0714 22:22:06.368638 2543 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbf074a5-827e-4266-aa97-41efb3b0eb87-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 14 22:22:06.591439 systemd[1]: Removed slice kubepods-besteffort-poddbf074a5_827e_4266_aa97_41efb3b0eb87.slice - libcontainer container kubepods-besteffort-poddbf074a5_827e_4266_aa97_41efb3b0eb87.slice. Jul 14 22:22:06.799789 systemd[1]: run-netns-cni\x2de2954517\x2db854\x2da3cd\x2d7b9f\x2d9979da934e17.mount: Deactivated successfully. Jul 14 22:22:06.799921 systemd[1]: var-lib-kubelet-pods-dbf074a5\x2d827e\x2d4266\x2daa97\x2d41efb3b0eb87-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drvsxf.mount: Deactivated successfully. Jul 14 22:22:06.800017 systemd[1]: var-lib-kubelet-pods-dbf074a5\x2d827e\x2d4266\x2daa97\x2d41efb3b0eb87-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 14 22:22:06.833938 kubelet[2543]: I0714 22:22:06.833843 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z6r7n" podStartSLOduration=1.6538160130000001 podStartE2EDuration="18.833817493s" podCreationTimestamp="2025-07-14 22:21:48 +0000 UTC" firstStartedPulling="2025-07-14 22:21:48.611854381 +0000 UTC m=+22.103875254" lastFinishedPulling="2025-07-14 22:22:05.791855871 +0000 UTC m=+39.283876734" observedRunningTime="2025-07-14 22:22:06.832385018 +0000 UTC m=+40.324405891" watchObservedRunningTime="2025-07-14 22:22:06.833817493 +0000 UTC m=+40.325838366" Jul 14 22:22:06.938322 systemd[1]: Created slice kubepods-besteffort-pod93b506d9_aa46_47cd_8bfc_be4c4ffa1876.slice - libcontainer container kubepods-besteffort-pod93b506d9_aa46_47cd_8bfc_be4c4ffa1876.slice. 
Jul 14 22:22:06.971868 kubelet[2543]: I0714 22:22:06.971797 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93b506d9-aa46-47cd-8bfc-be4c4ffa1876-whisker-ca-bundle\") pod \"whisker-5cb9877969-chpbx\" (UID: \"93b506d9-aa46-47cd-8bfc-be4c4ffa1876\") " pod="calico-system/whisker-5cb9877969-chpbx" Jul 14 22:22:06.971868 kubelet[2543]: I0714 22:22:06.971845 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/93b506d9-aa46-47cd-8bfc-be4c4ffa1876-whisker-backend-key-pair\") pod \"whisker-5cb9877969-chpbx\" (UID: \"93b506d9-aa46-47cd-8bfc-be4c4ffa1876\") " pod="calico-system/whisker-5cb9877969-chpbx" Jul 14 22:22:06.971868 kubelet[2543]: I0714 22:22:06.971872 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cntsh\" (UniqueName: \"kubernetes.io/projected/93b506d9-aa46-47cd-8bfc-be4c4ffa1876-kube-api-access-cntsh\") pod \"whisker-5cb9877969-chpbx\" (UID: \"93b506d9-aa46-47cd-8bfc-be4c4ffa1876\") " pod="calico-system/whisker-5cb9877969-chpbx" Jul 14 22:22:07.612142 containerd[1464]: time="2025-07-14T22:22:07.612073762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cb9877969-chpbx,Uid:93b506d9-aa46-47cd-8bfc-be4c4ffa1876,Namespace:calico-system,Attempt:0,}" Jul 14 22:22:07.612665 kernel: bpftool[3970]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 14 22:22:07.725318 kubelet[2543]: I0714 22:22:07.725280 2543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:22:07.873156 systemd-networkd[1394]: vxlan.calico: Link UP Jul 14 22:22:07.873168 systemd-networkd[1394]: vxlan.calico: Gained carrier Jul 14 22:22:07.931356 systemd-networkd[1394]: caliebd051194d5: Link UP Jul 14 22:22:07.932400 systemd-networkd[1394]: caliebd051194d5: Gained carrier Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.838 [INFO][3972] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5cb9877969--chpbx-eth0 whisker-5cb9877969- calico-system 93b506d9-aa46-47cd-8bfc-be4c4ffa1876 899 0 2025-07-14 22:22:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5cb9877969 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5cb9877969-chpbx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliebd051194d5 [] [] }} ContainerID="a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" Namespace="calico-system" Pod="whisker-5cb9877969-chpbx" WorkloadEndpoint="localhost-k8s-whisker--5cb9877969--chpbx-" Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.839 [INFO][3972] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" Namespace="calico-system" Pod="whisker-5cb9877969-chpbx" WorkloadEndpoint="localhost-k8s-whisker--5cb9877969--chpbx-eth0" Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.878 [INFO][4001] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" 
HandleID="k8s-pod-network.a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" Workload="localhost-k8s-whisker--5cb9877969--chpbx-eth0" Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.879 [INFO][4001] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" HandleID="k8s-pod-network.a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" Workload="localhost-k8s-whisker--5cb9877969--chpbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5cb9877969-chpbx", "timestamp":"2025-07-14 22:22:07.878784811 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.879 [INFO][4001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.879 [INFO][4001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.879 [INFO][4001] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.886 [INFO][4001] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" host="localhost" Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.899 [INFO][4001] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.905 [INFO][4001] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.907 [INFO][4001] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.909 [INFO][4001] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.909 [INFO][4001] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" host="localhost" Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.911 [INFO][4001] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700 Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.918 [INFO][4001] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" host="localhost" Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.923 [INFO][4001] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" host="localhost" Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.923 [INFO][4001] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" host="localhost" Jul 14 
22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.923 [INFO][4001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:07.949218 containerd[1464]: 2025-07-14 22:22:07.923 [INFO][4001] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" HandleID="k8s-pod-network.a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" Workload="localhost-k8s-whisker--5cb9877969--chpbx-eth0" Jul 14 22:22:07.949838 containerd[1464]: 2025-07-14 22:22:07.928 [INFO][3972] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" Namespace="calico-system" Pod="whisker-5cb9877969-chpbx" WorkloadEndpoint="localhost-k8s-whisker--5cb9877969--chpbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5cb9877969--chpbx-eth0", GenerateName:"whisker-5cb9877969-", Namespace:"calico-system", SelfLink:"", UID:"93b506d9-aa46-47cd-8bfc-be4c4ffa1876", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 22, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5cb9877969", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5cb9877969-chpbx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliebd051194d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:07.949838 containerd[1464]: 2025-07-14 22:22:07.928 [INFO][3972] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" Namespace="calico-system" Pod="whisker-5cb9877969-chpbx" WorkloadEndpoint="localhost-k8s-whisker--5cb9877969--chpbx-eth0" Jul 14 22:22:07.949838 containerd[1464]: 2025-07-14 22:22:07.928 [INFO][3972] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebd051194d5 ContainerID="a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" Namespace="calico-system" Pod="whisker-5cb9877969-chpbx" WorkloadEndpoint="localhost-k8s-whisker--5cb9877969--chpbx-eth0" Jul 14 22:22:07.949838 containerd[1464]: 2025-07-14 22:22:07.932 [INFO][3972] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" Namespace="calico-system" Pod="whisker-5cb9877969-chpbx" WorkloadEndpoint="localhost-k8s-whisker--5cb9877969--chpbx-eth0" Jul 14 22:22:07.949838 containerd[1464]: 2025-07-14 22:22:07.932 [INFO][3972] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" Namespace="calico-system" 
Pod="whisker-5cb9877969-chpbx" WorkloadEndpoint="localhost-k8s-whisker--5cb9877969--chpbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5cb9877969--chpbx-eth0", GenerateName:"whisker-5cb9877969-", Namespace:"calico-system", SelfLink:"", UID:"93b506d9-aa46-47cd-8bfc-be4c4ffa1876", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 22, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5cb9877969", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700", Pod:"whisker-5cb9877969-chpbx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliebd051194d5", MAC:"46:b7:93:5e:2b:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:07.949838 containerd[1464]: 2025-07-14 22:22:07.943 [INFO][3972] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700" Namespace="calico-system" Pod="whisker-5cb9877969-chpbx" WorkloadEndpoint="localhost-k8s-whisker--5cb9877969--chpbx-eth0" Jul 14 22:22:07.985241 containerd[1464]: time="2025-07-14T22:22:07.985077789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:22:07.985241 containerd[1464]: time="2025-07-14T22:22:07.985158546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:22:07.985241 containerd[1464]: time="2025-07-14T22:22:07.985175008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:07.985502 containerd[1464]: time="2025-07-14T22:22:07.985308096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:08.009394 systemd[1]: Started cri-containerd-a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700.scope - libcontainer container a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700. 
Jul 14 22:22:08.023186 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:22:08.053894 containerd[1464]: time="2025-07-14T22:22:08.053789443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cb9877969-chpbx,Uid:93b506d9-aa46-47cd-8bfc-be4c4ffa1876,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700\"" Jul 14 22:22:08.059067 containerd[1464]: time="2025-07-14T22:22:08.058978933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 14 22:22:08.586024 kubelet[2543]: I0714 22:22:08.585969 2543 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf074a5-827e-4266-aa97-41efb3b0eb87" path="/var/lib/kubelet/pods/dbf074a5-827e-4266-aa97-41efb3b0eb87/volumes" Jul 14 22:22:09.222498 systemd-networkd[1394]: vxlan.calico: Gained IPv6LL Jul 14 22:22:09.584709 containerd[1464]: time="2025-07-14T22:22:09.584650796Z" level=info msg="StopPodSandbox for \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\"" Jul 14 22:22:09.585122 containerd[1464]: time="2025-07-14T22:22:09.584787050Z" level=info msg="StopPodSandbox for \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\"" Jul 14 22:22:09.606487 systemd-networkd[1394]: caliebd051194d5: Gained IPv6LL Jul 14 22:22:09.727388 containerd[1464]: time="2025-07-14T22:22:09.727329123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:09.728480 containerd[1464]: time="2025-07-14T22:22:09.728404037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 14 22:22:09.738788 containerd[1464]: time="2025-07-14T22:22:09.738265894Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:09.742445 containerd[1464]: time="2025-07-14T22:22:09.742403735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:09.752199 containerd[1464]: time="2025-07-14T22:22:09.752137163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.692983101s" Jul 14 22:22:09.752342 containerd[1464]: time="2025-07-14T22:22:09.752203521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 14 22:22:09.756100 containerd[1464]: time="2025-07-14T22:22:09.755910677Z" level=info msg="CreateContainer within sandbox \"a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 14 22:22:09.774310 containerd[1464]: time="2025-07-14T22:22:09.774262250Z" level=info msg="CreateContainer within sandbox \"a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns 
container id \"68ac1b2533a6b2af188e7a0a2a77b74f2100bfdf6514bd080f960194bc41ede3\"" Jul 14 22:22:09.775649 containerd[1464]: 2025-07-14 22:22:09.728 [INFO][4140] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Jul 14 22:22:09.775649 containerd[1464]: 2025-07-14 22:22:09.729 [INFO][4140] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" iface="eth0" netns="/var/run/netns/cni-ac50f06d-96cd-1d08-0530-937729d2acba" Jul 14 22:22:09.775649 containerd[1464]: 2025-07-14 22:22:09.729 [INFO][4140] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" iface="eth0" netns="/var/run/netns/cni-ac50f06d-96cd-1d08-0530-937729d2acba" Jul 14 22:22:09.775649 containerd[1464]: 2025-07-14 22:22:09.731 [INFO][4140] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" iface="eth0" netns="/var/run/netns/cni-ac50f06d-96cd-1d08-0530-937729d2acba" Jul 14 22:22:09.775649 containerd[1464]: 2025-07-14 22:22:09.731 [INFO][4140] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Jul 14 22:22:09.775649 containerd[1464]: 2025-07-14 22:22:09.731 [INFO][4140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Jul 14 22:22:09.775649 containerd[1464]: 2025-07-14 22:22:09.760 [INFO][4169] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" HandleID="k8s-pod-network.7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Workload="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:09.775649 containerd[1464]: 2025-07-14 22:22:09.760 [INFO][4169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:09.775649 containerd[1464]: 2025-07-14 22:22:09.761 [INFO][4169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:09.775649 containerd[1464]: 2025-07-14 22:22:09.768 [WARNING][4169] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" HandleID="k8s-pod-network.7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Workload="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:09.775649 containerd[1464]: 2025-07-14 22:22:09.768 [INFO][4169] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" HandleID="k8s-pod-network.7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Workload="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:09.775649 containerd[1464]: 2025-07-14 22:22:09.770 [INFO][4169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:09.775649 containerd[1464]: 2025-07-14 22:22:09.772 [INFO][4140] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Jul 14 22:22:09.776016 containerd[1464]: time="2025-07-14T22:22:09.775861421Z" level=info msg="TearDown network for sandbox \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\" successfully" Jul 14 22:22:09.776016 containerd[1464]: time="2025-07-14T22:22:09.775883223Z" level=info msg="StopPodSandbox for \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\" returns successfully" Jul 14 22:22:09.776537 containerd[1464]: time="2025-07-14T22:22:09.776167714Z" level=info msg="StartContainer for \"68ac1b2533a6b2af188e7a0a2a77b74f2100bfdf6514bd080f960194bc41ede3\"" Jul 14 22:22:09.776794 containerd[1464]: time="2025-07-14T22:22:09.776503345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-gcjz6,Uid:3b937dec-5b75-47a6-9753-367ccbffbb4f,Namespace:calico-system,Attempt:1,}" Jul 14 22:22:09.779313 systemd[1]: run-netns-cni\x2dac50f06d\x2d96cd\x2d1d08\x2d0530\x2d937729d2acba.mount: Deactivated successfully. Jul 14 22:22:09.786300 containerd[1464]: 2025-07-14 22:22:09.736 [INFO][4156] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Jul 14 22:22:09.786300 containerd[1464]: 2025-07-14 22:22:09.736 [INFO][4156] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" iface="eth0" netns="/var/run/netns/cni-81689d11-a245-c9e3-814d-47039091c7d4" Jul 14 22:22:09.786300 containerd[1464]: 2025-07-14 22:22:09.737 [INFO][4156] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" iface="eth0" netns="/var/run/netns/cni-81689d11-a245-c9e3-814d-47039091c7d4" Jul 14 22:22:09.786300 containerd[1464]: 2025-07-14 22:22:09.737 [INFO][4156] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" iface="eth0" netns="/var/run/netns/cni-81689d11-a245-c9e3-814d-47039091c7d4" Jul 14 22:22:09.786300 containerd[1464]: 2025-07-14 22:22:09.737 [INFO][4156] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Jul 14 22:22:09.786300 containerd[1464]: 2025-07-14 22:22:09.737 [INFO][4156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Jul 14 22:22:09.786300 containerd[1464]: 2025-07-14 22:22:09.768 [INFO][4175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" HandleID="k8s-pod-network.928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:09.786300 containerd[1464]: 2025-07-14 22:22:09.769 [INFO][4175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:09.786300 containerd[1464]: 2025-07-14 22:22:09.770 [INFO][4175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:09.786300 containerd[1464]: 2025-07-14 22:22:09.778 [WARNING][4175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" HandleID="k8s-pod-network.928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:09.786300 containerd[1464]: 2025-07-14 22:22:09.779 [INFO][4175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" HandleID="k8s-pod-network.928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:09.786300 containerd[1464]: 2025-07-14 22:22:09.780 [INFO][4175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:09.786300 containerd[1464]: 2025-07-14 22:22:09.783 [INFO][4156] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Jul 14 22:22:09.789037 containerd[1464]: time="2025-07-14T22:22:09.788994519Z" level=info msg="TearDown network for sandbox \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\" successfully" Jul 14 22:22:09.789037 containerd[1464]: time="2025-07-14T22:22:09.789033815Z" level=info msg="StopPodSandbox for \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\" returns successfully" Jul 14 22:22:09.789029 systemd[1]: run-netns-cni\x2d81689d11\x2da245\x2dc9e3\x2d814d\x2d47039091c7d4.mount: Deactivated successfully. Jul 14 22:22:09.790187 containerd[1464]: time="2025-07-14T22:22:09.789996962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cbd7cb79-jt55m,Uid:24b165d4-dcd3-454b-b444-600d3f259636,Namespace:calico-apiserver,Attempt:1,}" Jul 14 22:22:09.812738 systemd[1]: Started cri-containerd-68ac1b2533a6b2af188e7a0a2a77b74f2100bfdf6514bd080f960194bc41ede3.scope - libcontainer container 68ac1b2533a6b2af188e7a0a2a77b74f2100bfdf6514bd080f960194bc41ede3. 
Jul 14 22:22:10.137817 containerd[1464]: time="2025-07-14T22:22:10.137771383Z" level=info msg="StartContainer for \"68ac1b2533a6b2af188e7a0a2a77b74f2100bfdf6514bd080f960194bc41ede3\" returns successfully" Jul 14 22:22:10.140075 containerd[1464]: time="2025-07-14T22:22:10.139911458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 14 22:22:11.197207 systemd-networkd[1394]: cali39692490256: Link UP Jul 14 22:22:11.197939 systemd-networkd[1394]: cali39692490256: Gained carrier Jul 14 22:22:11.584294 containerd[1464]: time="2025-07-14T22:22:11.584160760Z" level=info msg="StopPodSandbox for \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\"" Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.244 [INFO][4203] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0 goldmane-768f4c5c69- calico-system 3b937dec-5b75-47a6-9753-367ccbffbb4f 913 0 2025-07-14 22:21:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-gcjz6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali39692490256 [] [] }} ContainerID="4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gcjz6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--gcjz6-" Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.244 [INFO][4203] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gcjz6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.320 [INFO][4257] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" HandleID="k8s-pod-network.4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" Workload="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.320 [INFO][4257] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" HandleID="k8s-pod-network.4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" Workload="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-gcjz6", "timestamp":"2025-07-14 22:22:10.320361323 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.320 [INFO][4257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.320 [INFO][4257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.320 [INFO][4257] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.329 [INFO][4257] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" host="localhost" Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.334 [INFO][4257] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.338 [INFO][4257] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.726 [INFO][4257] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.728 [INFO][4257] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.728 [INFO][4257] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" host="localhost" Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.729 [INFO][4257] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7 Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:10.828 [INFO][4257] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" host="localhost" Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:11.191 [INFO][4257] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" host="localhost" Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:11.191 [INFO][4257] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" host="localhost" Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:11.191 [INFO][4257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
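The allocation walkthrough above is Calico's block-affinity IPAM: `localhost` holds affinity for the /26 block `192.168.88.128/26`, and each pod takes the next free address in it, which is why whisker got `.129` and goldmane `.130` (the apiserver and coredns pods take `.131` and `.132` later in the log). A toy `net/netip` sketch of that next-free-address step; real Calico persists the allocation bitmap in its IPAM block resource rather than in memory:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks a prefix and returns the first address not yet allocated.
// Toy version of Calico's per-block assignment; "used" stands in for the
// allocation state Calico stores in its datastore.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	// Start past the network address (.128), matching the log's first pick of .129.
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	used := map[netip.Addr]bool{}
	for i := 0; i < 4; i++ {
		a, _ := nextFree(block, used)
		used[a] = true
		fmt.Println(a) // .129, .130, .131, .132 -- matching the log
	}
}
```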
Jul 14 22:22:11.626310 containerd[1464]: 2025-07-14 22:22:11.191 [INFO][4257] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" HandleID="k8s-pod-network.4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" Workload="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:11.626969 containerd[1464]: 2025-07-14 22:22:11.195 [INFO][4203] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gcjz6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"3b937dec-5b75-47a6-9753-367ccbffbb4f", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-gcjz6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39692490256", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:11.626969 containerd[1464]: 2025-07-14 22:22:11.195 [INFO][4203] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gcjz6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:11.626969 containerd[1464]: 2025-07-14 22:22:11.195 [INFO][4203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39692490256 ContainerID="4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gcjz6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:11.626969 containerd[1464]: 2025-07-14 22:22:11.197 [INFO][4203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gcjz6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:11.626969 containerd[1464]: 2025-07-14 22:22:11.198 [INFO][4203] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gcjz6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"3b937dec-5b75-47a6-9753-367ccbffbb4f", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7", Pod:"goldmane-768f4c5c69-gcjz6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39692490256", MAC:"46:32:ad:24:aa:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:11.626969 containerd[1464]: 2025-07-14 22:22:11.623 [INFO][4203] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7" Namespace="calico-system" Pod="goldmane-768f4c5c69-gcjz6" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:11.777796 containerd[1464]: time="2025-07-14T22:22:11.777677761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:22:11.777796 containerd[1464]: time="2025-07-14T22:22:11.777732337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:22:11.777796 containerd[1464]: time="2025-07-14T22:22:11.777746484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:11.778552 containerd[1464]: time="2025-07-14T22:22:11.778491927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:11.801426 systemd[1]: Started cri-containerd-4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7.scope - libcontainer container 4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7. 
Jul 14 22:22:11.814989 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:22:11.841018 containerd[1464]: time="2025-07-14T22:22:11.840791552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-gcjz6,Uid:3b937dec-5b75-47a6-9753-367ccbffbb4f,Namespace:calico-system,Attempt:1,} returns sandbox id \"4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7\"" Jul 14 22:22:11.942670 systemd-networkd[1394]: calia82aa35ac03: Link UP Jul 14 22:22:11.944205 systemd-networkd[1394]: calia82aa35ac03: Gained carrier Jul 14 22:22:11.958045 containerd[1464]: 2025-07-14 22:22:11.918 [INFO][4284] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Jul 14 22:22:11.958045 containerd[1464]: 2025-07-14 22:22:11.919 [INFO][4284] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" iface="eth0" netns="/var/run/netns/cni-37d8f77f-e628-c166-6078-90881589223e" Jul 14 22:22:11.958045 containerd[1464]: 2025-07-14 22:22:11.919 [INFO][4284] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" iface="eth0" netns="/var/run/netns/cni-37d8f77f-e628-c166-6078-90881589223e" Jul 14 22:22:11.958045 containerd[1464]: 2025-07-14 22:22:11.919 [INFO][4284] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" iface="eth0" netns="/var/run/netns/cni-37d8f77f-e628-c166-6078-90881589223e" Jul 14 22:22:11.958045 containerd[1464]: 2025-07-14 22:22:11.919 [INFO][4284] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Jul 14 22:22:11.958045 containerd[1464]: 2025-07-14 22:22:11.919 [INFO][4284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Jul 14 22:22:11.958045 containerd[1464]: 2025-07-14 22:22:11.943 [INFO][4343] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" HandleID="k8s-pod-network.2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Workload="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:11.958045 containerd[1464]: 2025-07-14 22:22:11.943 [INFO][4343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:11.958045 containerd[1464]: 2025-07-14 22:22:11.943 [INFO][4343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:11.958045 containerd[1464]: 2025-07-14 22:22:11.950 [WARNING][4343] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" HandleID="k8s-pod-network.2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Workload="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:11.958045 containerd[1464]: 2025-07-14 22:22:11.950 [INFO][4343] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" HandleID="k8s-pod-network.2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Workload="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:11.958045 containerd[1464]: 2025-07-14 22:22:11.952 [INFO][4343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:11.958045 containerd[1464]: 2025-07-14 22:22:11.955 [INFO][4284] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Jul 14 22:22:11.958614 containerd[1464]: time="2025-07-14T22:22:11.958271866Z" level=info msg="TearDown network for sandbox \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\" successfully" Jul 14 22:22:11.958614 containerd[1464]: time="2025-07-14T22:22:11.958298106Z" level=info msg="StopPodSandbox for \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\" returns successfully" Jul 14 22:22:11.958672 kubelet[2543]: E0714 22:22:11.958591 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:22:11.959306 containerd[1464]: time="2025-07-14T22:22:11.959219309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hlwkw,Uid:1d877bca-eddc-4eb9-ba2d-007238222f97,Namespace:kube-system,Attempt:1,}" Jul 14 22:22:11.961362 systemd[1]: run-netns-cni\x2d37d8f77f\x2de628\x2dc166\x2d6078\x2d90881589223e.mount: Deactivated successfully. 
Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:10.244 [INFO][4216] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0 calico-apiserver-8cbd7cb79- calico-apiserver 24b165d4-dcd3-454b-b444-600d3f259636 912 0 2025-07-14 22:21:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8cbd7cb79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8cbd7cb79-jt55m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia82aa35ac03 [] [] }} ContainerID="8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-jt55m" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-" Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:10.244 [INFO][4216] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-jt55m" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:10.321 [INFO][4258] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" HandleID="k8s-pod-network.8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:10.321 [INFO][4258] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" HandleID="k8s-pod-network.8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7560), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8cbd7cb79-jt55m", "timestamp":"2025-07-14 22:22:10.321365178 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:10.321 [INFO][4258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:11.192 [INFO][4258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
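Note the wait for IPAM request [4258]: it reported `About to acquire host-wide IPAM lock` at 22:22:10.321 but only acquired it at 22:22:11.192, right after request [4257] (goldmane) released it at 22:22:11.191. Calico serializes all address assignments on a node behind this single lock, so concurrent pod creations queue. A minimal sketch of that serialization pattern, not Calico's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// allocator serializes address assignment the way Calico's host-wide
// IPAM lock does: one assignment at a time per node, others queue.
type allocator struct {
	mu   sync.Mutex
	next int
}

func (a *allocator) assign(pod string) int {
	a.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer a.mu.Unlock()
	a.next++ // claim the next free address while holding the lock
	fmt.Printf("%s -> .%d\n", pod, 128+a.next)
	return a.next
}

func main() {
	a := &allocator{}
	var wg sync.WaitGroup
	for _, pod := range []string{"goldmane-768f4c5c69-gcjz6", "calico-apiserver-8cbd7cb79-jt55m"} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); a.assign(p) }(pod)
	}
	wg.Wait()
}
```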
Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:11.192 [INFO][4258] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:11.259 [INFO][4258] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" host="localhost" Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:11.646 [INFO][4258] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:11.650 [INFO][4258] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:11.919 [INFO][4258] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:11.922 [INFO][4258] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:11.922 [INFO][4258] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" host="localhost" Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:11.924 [INFO][4258] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555 Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:11.929 [INFO][4258] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" host="localhost" Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:11.935 [INFO][4258] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" host="localhost" Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:11.935 [INFO][4258] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" host="localhost" Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:11.935 [INFO][4258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:22:12.096083 containerd[1464]: 2025-07-14 22:22:11.935 [INFO][4258] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" HandleID="k8s-pod-network.8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:12.118153 containerd[1464]: 2025-07-14 22:22:11.938 [INFO][4216] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-jt55m" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0", GenerateName:"calico-apiserver-8cbd7cb79-", Namespace:"calico-apiserver", SelfLink:"", UID:"24b165d4-dcd3-454b-b444-600d3f259636", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cbd7cb79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8cbd7cb79-jt55m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia82aa35ac03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:12.118153 containerd[1464]: 2025-07-14 22:22:11.938 [INFO][4216] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-jt55m" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:12.118153 containerd[1464]: 2025-07-14 22:22:11.938 [INFO][4216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia82aa35ac03 ContainerID="8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-jt55m" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:12.118153 containerd[1464]: 2025-07-14 22:22:11.943 [INFO][4216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-jt55m" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:12.118153 containerd[1464]: 2025-07-14 22:22:11.948 [INFO][4216] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-jt55m" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0", GenerateName:"calico-apiserver-8cbd7cb79-", Namespace:"calico-apiserver", SelfLink:"", UID:"24b165d4-dcd3-454b-b444-600d3f259636", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cbd7cb79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555", Pod:"calico-apiserver-8cbd7cb79-jt55m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia82aa35ac03", MAC:"5e:46:b6:a8:8a:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:12.118153 containerd[1464]: 2025-07-14 22:22:12.092 [INFO][4216] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-jt55m" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:12.138683 containerd[1464]: time="2025-07-14T22:22:12.137802611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:22:12.138683 containerd[1464]: time="2025-07-14T22:22:12.138666020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:22:12.138906 containerd[1464]: time="2025-07-14T22:22:12.138794539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:12.139902 containerd[1464]: time="2025-07-14T22:22:12.139279707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:12.159389 systemd[1]: Started cri-containerd-8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555.scope - libcontainer container 8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555. 
Jul 14 22:22:12.172027 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:22:12.203160 containerd[1464]: time="2025-07-14T22:22:12.203068140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cbd7cb79-jt55m,Uid:24b165d4-dcd3-454b-b444-600d3f259636,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555\"" Jul 14 22:22:12.238809 systemd-networkd[1394]: calide80579e761: Link UP Jul 14 22:22:12.239767 systemd-networkd[1394]: calide80579e761: Gained carrier Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.172 [INFO][4377] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0 coredns-668d6bf9bc- kube-system 1d877bca-eddc-4eb9-ba2d-007238222f97 925 0 2025-07-14 22:21:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-hlwkw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calide80579e761 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlwkw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlwkw-" Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.172 [INFO][4377] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlwkw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.200 [INFO][4408] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" HandleID="k8s-pod-network.db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" Workload="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.201 [INFO][4408] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" HandleID="k8s-pod-network.db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" Workload="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7200), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-hlwkw", "timestamp":"2025-07-14 22:22:12.20086499 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.201 [INFO][4408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.201 [INFO][4408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.201 [INFO][4408] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.209 [INFO][4408] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" host="localhost" Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.213 [INFO][4408] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.216 [INFO][4408] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.218 [INFO][4408] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.220 [INFO][4408] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.220 [INFO][4408] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" host="localhost" Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.222 [INFO][4408] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636 Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.227 [INFO][4408] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" host="localhost" Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.233 [INFO][4408] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" host="localhost" Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.233 [INFO][4408] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" host="localhost" Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.233 [INFO][4408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
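The coredns endpoint carries named ports (dns UDP 53, dns-tcp TCP 53, metrics TCP 9153). In the Go struct dumps below they show up as `Port:0x35` and `Port:0x23c1` because Go's `%#v` verb prints unsigned integer fields in hex; the values are the same 53 and 9153:

```go
package main

import "fmt"

func main() {
	// Go's %#v prints unsigned integers in hex, which is why the endpoint
	// dumps in this log show Port:0x35 and Port:0x23c1 for these values.
	fmt.Printf("%#v %#v\n", uint16(53), uint16(9153)) // 0x35 0x23c1
	fmt.Println(0x35, 0x23c1)                         // 53 9153
}
```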
Jul 14 22:22:12.251503 containerd[1464]: 2025-07-14 22:22:12.233 [INFO][4408] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" HandleID="k8s-pod-network.db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" Workload="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:12.252082 containerd[1464]: 2025-07-14 22:22:12.236 [INFO][4377] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlwkw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1d877bca-eddc-4eb9-ba2d-007238222f97", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-hlwkw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide80579e761", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:12.252082 containerd[1464]: 2025-07-14 22:22:12.237 [INFO][4377] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlwkw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:12.252082 containerd[1464]: 2025-07-14 22:22:12.237 [INFO][4377] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide80579e761 ContainerID="db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlwkw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:12.252082 containerd[1464]: 2025-07-14 22:22:12.239 [INFO][4377] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlwkw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:12.252082 
containerd[1464]: 2025-07-14 22:22:12.240 [INFO][4377] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlwkw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1d877bca-eddc-4eb9-ba2d-007238222f97", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636", Pod:"coredns-668d6bf9bc-hlwkw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide80579e761", MAC:"92:51:01:f1:fe:29", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:12.252318 containerd[1464]: 2025-07-14 22:22:12.247 [INFO][4377] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlwkw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:12.271650 containerd[1464]: time="2025-07-14T22:22:12.271564085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:22:12.271650 containerd[1464]: time="2025-07-14T22:22:12.271613811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:22:12.271650 containerd[1464]: time="2025-07-14T22:22:12.271626025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:12.271861 containerd[1464]: time="2025-07-14T22:22:12.271699607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:12.295519 systemd[1]: Started cri-containerd-db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636.scope - libcontainer container db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636. Jul 14 22:22:12.308824 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:22:12.333548 containerd[1464]: time="2025-07-14T22:22:12.333510176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hlwkw,Uid:1d877bca-eddc-4eb9-ba2d-007238222f97,Namespace:kube-system,Attempt:1,} returns sandbox id \"db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636\"" Jul 14 22:22:12.334148 kubelet[2543]: E0714 22:22:12.334125 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:22:12.335793 containerd[1464]: time="2025-07-14T22:22:12.335746730Z" level=info msg="CreateContainer within sandbox \"db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:22:12.379433 containerd[1464]: time="2025-07-14T22:22:12.379326150Z" level=info msg="CreateContainer within sandbox \"db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e3bb429ff8965b509a4979e9fe1caa2cfe1fd2bf90edf6965f13bd6f37a40f97\"" Jul 14 22:22:12.380108 containerd[1464]: time="2025-07-14T22:22:12.379860854Z" level=info msg="StartContainer for \"e3bb429ff8965b509a4979e9fe1caa2cfe1fd2bf90edf6965f13bd6f37a40f97\"" Jul 14 22:22:12.417410 systemd[1]: Started cri-containerd-e3bb429ff8965b509a4979e9fe1caa2cfe1fd2bf90edf6965f13bd6f37a40f97.scope - libcontainer container e3bb429ff8965b509a4979e9fe1caa2cfe1fd2bf90edf6965f13bd6f37a40f97. Jul 14 22:22:12.510834 containerd[1464]: time="2025-07-14T22:22:12.510760183Z" level=info msg="StartContainer for \"e3bb429ff8965b509a4979e9fe1caa2cfe1fd2bf90edf6965f13bd6f37a40f97\" returns successfully" Jul 14 22:22:12.585591 containerd[1464]: time="2025-07-14T22:22:12.585542412Z" level=info msg="StopPodSandbox for \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\"" Jul 14 22:22:12.587190 containerd[1464]: time="2025-07-14T22:22:12.585928629Z" level=info msg="StopPodSandbox for \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\"" Jul 14 22:22:12.587190 containerd[1464]: time="2025-07-14T22:22:12.585936865Z" level=info msg="StopPodSandbox for \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\"" Jul 14 22:22:12.722871 containerd[1464]: 2025-07-14 22:22:12.663 [INFO][4551] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Jul 14 22:22:12.722871 containerd[1464]: 2025-07-14 22:22:12.663 [INFO][4551] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" iface="eth0" netns="/var/run/netns/cni-130ea366-4d3f-de48-ff9a-a5ed7551f06c" Jul 14 22:22:12.722871 containerd[1464]: 2025-07-14 22:22:12.664 [INFO][4551] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" iface="eth0" netns="/var/run/netns/cni-130ea366-4d3f-de48-ff9a-a5ed7551f06c" Jul 14 22:22:12.722871 containerd[1464]: 2025-07-14 22:22:12.665 [INFO][4551] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" iface="eth0" netns="/var/run/netns/cni-130ea366-4d3f-de48-ff9a-a5ed7551f06c" Jul 14 22:22:12.722871 containerd[1464]: 2025-07-14 22:22:12.666 [INFO][4551] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Jul 14 22:22:12.722871 containerd[1464]: 2025-07-14 22:22:12.666 [INFO][4551] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Jul 14 22:22:12.722871 containerd[1464]: 2025-07-14 22:22:12.701 [INFO][4566] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" HandleID="k8s-pod-network.7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Workload="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:12.722871 containerd[1464]: 2025-07-14 22:22:12.702 [INFO][4566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:12.722871 containerd[1464]: 2025-07-14 22:22:12.702 [INFO][4566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:12.722871 containerd[1464]: 2025-07-14 22:22:12.708 [WARNING][4566] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" HandleID="k8s-pod-network.7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Workload="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:12.722871 containerd[1464]: 2025-07-14 22:22:12.708 [INFO][4566] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" HandleID="k8s-pod-network.7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Workload="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:12.722871 containerd[1464]: 2025-07-14 22:22:12.710 [INFO][4566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:12.722871 containerd[1464]: 2025-07-14 22:22:12.717 [INFO][4551] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Jul 14 22:22:12.724030 containerd[1464]: time="2025-07-14T22:22:12.723331854Z" level=info msg="TearDown network for sandbox \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\" successfully" Jul 14 22:22:12.724030 containerd[1464]: time="2025-07-14T22:22:12.723358987Z" level=info msg="StopPodSandbox for \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\" returns successfully" Jul 14 22:22:12.724741 containerd[1464]: time="2025-07-14T22:22:12.724688637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cb745f54-d6cjg,Uid:233c9654-f369-4545-aeb8-b29c6d794c17,Namespace:calico-system,Attempt:1,}" Jul 14 22:22:12.727297 containerd[1464]: 2025-07-14 22:22:12.664 [INFO][4544] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Jul 14 22:22:12.727297 containerd[1464]: 2025-07-14 22:22:12.664 [INFO][4544] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" iface="eth0" netns="/var/run/netns/cni-36d06faa-c1de-3741-f366-d705cc45c008" Jul 14 22:22:12.727297 containerd[1464]: 2025-07-14 22:22:12.665 [INFO][4544] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" iface="eth0" netns="/var/run/netns/cni-36d06faa-c1de-3741-f366-d705cc45c008" Jul 14 22:22:12.727297 containerd[1464]: 2025-07-14 22:22:12.665 [INFO][4544] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" iface="eth0" netns="/var/run/netns/cni-36d06faa-c1de-3741-f366-d705cc45c008" Jul 14 22:22:12.727297 containerd[1464]: 2025-07-14 22:22:12.666 [INFO][4544] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Jul 14 22:22:12.727297 containerd[1464]: 2025-07-14 22:22:12.666 [INFO][4544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Jul 14 22:22:12.727297 containerd[1464]: 2025-07-14 22:22:12.709 [INFO][4568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" HandleID="k8s-pod-network.53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Workload="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:12.727297 containerd[1464]: 2025-07-14 22:22:12.709 [INFO][4568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:12.727297 containerd[1464]: 2025-07-14 22:22:12.710 [INFO][4568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:12.727297 containerd[1464]: 2025-07-14 22:22:12.719 [WARNING][4568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" HandleID="k8s-pod-network.53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Workload="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:12.727297 containerd[1464]: 2025-07-14 22:22:12.719 [INFO][4568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" HandleID="k8s-pod-network.53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Workload="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:12.727297 containerd[1464]: 2025-07-14 22:22:12.720 [INFO][4568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:12.727297 containerd[1464]: 2025-07-14 22:22:12.724 [INFO][4544] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Jul 14 22:22:12.727632 containerd[1464]: time="2025-07-14T22:22:12.727473041Z" level=info msg="TearDown network for sandbox \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\" successfully" Jul 14 22:22:12.727632 containerd[1464]: time="2025-07-14T22:22:12.727498870Z" level=info msg="StopPodSandbox for \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\" returns successfully" Jul 14 22:22:12.728601 containerd[1464]: time="2025-07-14T22:22:12.728555394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xmfp,Uid:a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7,Namespace:calico-system,Attempt:1,}" Jul 14 22:22:12.734483 containerd[1464]: 2025-07-14 22:22:12.669 [INFO][4538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Jul 14 22:22:12.734483 containerd[1464]: 2025-07-14 22:22:12.669 [INFO][4538] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" iface="eth0" netns="/var/run/netns/cni-c2df8708-dcea-a245-a51f-529fa5c332c0" Jul 14 22:22:12.734483 containerd[1464]: 2025-07-14 22:22:12.670 [INFO][4538] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" iface="eth0" netns="/var/run/netns/cni-c2df8708-dcea-a245-a51f-529fa5c332c0" Jul 14 22:22:12.734483 containerd[1464]: 2025-07-14 22:22:12.672 [INFO][4538] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" iface="eth0" netns="/var/run/netns/cni-c2df8708-dcea-a245-a51f-529fa5c332c0" Jul 14 22:22:12.734483 containerd[1464]: 2025-07-14 22:22:12.672 [INFO][4538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Jul 14 22:22:12.734483 containerd[1464]: 2025-07-14 22:22:12.672 [INFO][4538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Jul 14 22:22:12.734483 containerd[1464]: 2025-07-14 22:22:12.713 [INFO][4578] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" HandleID="k8s-pod-network.837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Workload="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0" Jul 14 22:22:12.734483 containerd[1464]: 2025-07-14 22:22:12.713 [INFO][4578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:12.734483 containerd[1464]: 2025-07-14 22:22:12.720 [INFO][4578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:12.734483 containerd[1464]: 2025-07-14 22:22:12.725 [WARNING][4578] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" HandleID="k8s-pod-network.837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Workload="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0" Jul 14 22:22:12.734483 containerd[1464]: 2025-07-14 22:22:12.725 [INFO][4578] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" HandleID="k8s-pod-network.837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Workload="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0" Jul 14 22:22:12.734483 containerd[1464]: 2025-07-14 22:22:12.728 [INFO][4578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:12.734483 containerd[1464]: 2025-07-14 22:22:12.731 [INFO][4538] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Jul 14 22:22:12.734914 containerd[1464]: time="2025-07-14T22:22:12.734619277Z" level=info msg="TearDown network for sandbox \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\" successfully" Jul 14 22:22:12.734914 containerd[1464]: time="2025-07-14T22:22:12.734643104Z" level=info msg="StopPodSandbox for \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\" returns successfully" Jul 14 22:22:12.734960 kubelet[2543]: E0714 22:22:12.734890 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:22:12.735349 containerd[1464]: time="2025-07-14T22:22:12.735312097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vgqdz,Uid:b0aee35e-b810-44e1-8f6a-22ac05756d20,Namespace:kube-system,Attempt:1,}" Jul 14 22:22:12.747877 kubelet[2543]: E0714 22:22:12.747664 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:22:12.784909 systemd[1]: run-netns-cni\x2d130ea366\x2d4d3f\x2dde48\x2dff9a\x2da5ed7551f06c.mount: Deactivated successfully. Jul 14 22:22:12.785013 systemd[1]: run-netns-cni\x2dc2df8708\x2ddcea\x2da245\x2da51f\x2d529fa5c332c0.mount: Deactivated successfully. Jul 14 22:22:12.785093 systemd[1]: run-netns-cni\x2d36d06faa\x2dc1de\x2d3741\x2df366\x2dd705cc45c008.mount: Deactivated successfully. Jul 14 22:22:12.910291 kubelet[2543]: I0714 22:22:12.910206 2543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:22:12.929196 kubelet[2543]: I0714 22:22:12.929116 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hlwkw" podStartSLOduration=39.929092205 podStartE2EDuration="39.929092205s" podCreationTimestamp="2025-07-14 22:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:22:12.928744312 +0000 UTC m=+46.420765186" watchObservedRunningTime="2025-07-14 22:22:12.929092205 +0000 UTC m=+46.421113088" Jul 14 22:22:13.062366 systemd-networkd[1394]: cali39692490256: Gained IPv6LL Jul 14 22:22:13.251530 systemd-networkd[1394]: cali8c2d8b77abb: Link UP Jul 14 22:22:13.252114 systemd-networkd[1394]: cali8c2d8b77abb: Gained carrier Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.162 [INFO][4622] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7xmfp-eth0 csi-node-driver- calico-system a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7 944 0 2025-07-14 22:21:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7xmfp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8c2d8b77abb [] [] }} ContainerID="6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" Namespace="calico-system" Pod="csi-node-driver-7xmfp" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xmfp-" Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 
22:22:13.167 [INFO][4622] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" Namespace="calico-system" Pod="csi-node-driver-7xmfp" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.200 [INFO][4682] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" HandleID="k8s-pod-network.6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" Workload="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.201 [INFO][4682] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" HandleID="k8s-pod-network.6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" Workload="localhost-k8s-csi--node--driver--7xmfp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034cfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7xmfp", "timestamp":"2025-07-14 22:22:13.200956694 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.201 [INFO][4682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.201 [INFO][4682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.201 [INFO][4682] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.208 [INFO][4682] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" host="localhost" Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.214 [INFO][4682] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.218 [INFO][4682] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.220 [INFO][4682] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.222 [INFO][4682] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.222 [INFO][4682] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" host="localhost" Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.226 [INFO][4682] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691 Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.236 [INFO][4682] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" host="localhost" Jul 14 22:22:13.269711 containerd[1464]: 
2025-07-14 22:22:13.244 [INFO][4682] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" host="localhost" Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.244 [INFO][4682] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" host="localhost" Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.244 [INFO][4682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:13.269711 containerd[1464]: 2025-07-14 22:22:13.244 [INFO][4682] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" HandleID="k8s-pod-network.6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" Workload="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:13.270348 containerd[1464]: 2025-07-14 22:22:13.247 [INFO][4622] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" Namespace="calico-system" Pod="csi-node-driver-7xmfp" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xmfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7xmfp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7xmfp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c2d8b77abb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:13.270348 containerd[1464]: 2025-07-14 22:22:13.247 [INFO][4622] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" Namespace="calico-system" Pod="csi-node-driver-7xmfp" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:13.270348 containerd[1464]: 2025-07-14 22:22:13.247 [INFO][4622] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c2d8b77abb ContainerID="6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" Namespace="calico-system" Pod="csi-node-driver-7xmfp" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:13.270348 containerd[1464]: 
2025-07-14 22:22:13.252 [INFO][4622] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" Namespace="calico-system" Pod="csi-node-driver-7xmfp" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:13.270348 containerd[1464]: 2025-07-14 22:22:13.252 [INFO][4622] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" Namespace="calico-system" Pod="csi-node-driver-7xmfp" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xmfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7xmfp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691", Pod:"csi-node-driver-7xmfp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c2d8b77abb", MAC:"fa:e9:2d:3b:48:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:13.270348 containerd[1464]: 2025-07-14 22:22:13.265 [INFO][4622] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691" Namespace="calico-system" Pod="csi-node-driver-7xmfp" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:13.303346 containerd[1464]: time="2025-07-14T22:22:13.303012637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:22:13.303346 containerd[1464]: time="2025-07-14T22:22:13.303114824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:22:13.303346 containerd[1464]: time="2025-07-14T22:22:13.303133500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:13.303627 containerd[1464]: time="2025-07-14T22:22:13.303331042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:13.332474 systemd[1]: Started cri-containerd-6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691.scope - libcontainer container 6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691. Jul 14 22:22:13.355673 systemd-networkd[1394]: caliecaf8c7db89: Link UP Jul 14 22:22:13.355978 systemd-networkd[1394]: caliecaf8c7db89: Gained carrier Jul 14 22:22:13.362638 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.139 [INFO][4599] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0 calico-kube-controllers-66cb745f54- calico-system 233c9654-f369-4545-aeb8-b29c6d794c17 945 0 2025-07-14 22:21:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66cb745f54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-66cb745f54-d6cjg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliecaf8c7db89 [] [] }} ContainerID="d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" Namespace="calico-system" Pod="calico-kube-controllers-66cb745f54-d6cjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-" Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.140 [INFO][4599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" Namespace="calico-system" Pod="calico-kube-controllers-66cb745f54-d6cjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.204 [INFO][4670] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" HandleID="k8s-pod-network.d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" Workload="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.204 [INFO][4670] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" HandleID="k8s-pod-network.d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" Workload="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fac0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-66cb745f54-d6cjg", "timestamp":"2025-07-14 22:22:13.204403844 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.204 [INFO][4670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.244 [INFO][4670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.244 [INFO][4670] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.309 [INFO][4670] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" host="localhost" Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.317 [INFO][4670] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.324 [INFO][4670] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.326 [INFO][4670] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.329 [INFO][4670] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.329 [INFO][4670] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" host="localhost" Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.331 [INFO][4670] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204 Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.336 [INFO][4670] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" host="localhost" Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.344 [INFO][4670] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" host="localhost" Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.345 [INFO][4670] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" host="localhost" Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.345 [INFO][4670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:22:13.379321 containerd[1464]: 2025-07-14 22:22:13.345 [INFO][4670] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" HandleID="k8s-pod-network.d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" Workload="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:13.380101 containerd[1464]: 2025-07-14 22:22:13.348 [INFO][4599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" Namespace="calico-system" Pod="calico-kube-controllers-66cb745f54-d6cjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0", GenerateName:"calico-kube-controllers-66cb745f54-", Namespace:"calico-system", SelfLink:"", UID:"233c9654-f369-4545-aeb8-b29c6d794c17", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66cb745f54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-66cb745f54-d6cjg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliecaf8c7db89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:13.380101 containerd[1464]: 2025-07-14 22:22:13.349 [INFO][4599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" Namespace="calico-system" Pod="calico-kube-controllers-66cb745f54-d6cjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:13.380101 containerd[1464]: 2025-07-14 22:22:13.349 [INFO][4599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecaf8c7db89 ContainerID="d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" Namespace="calico-system" Pod="calico-kube-controllers-66cb745f54-d6cjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:13.380101 containerd[1464]: 2025-07-14 22:22:13.358 [INFO][4599] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" Namespace="calico-system" Pod="calico-kube-controllers-66cb745f54-d6cjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:13.380101 containerd[1464]: 2025-07-14 22:22:13.359 [INFO][4599] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" Namespace="calico-system" Pod="calico-kube-controllers-66cb745f54-d6cjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0", GenerateName:"calico-kube-controllers-66cb745f54-", Namespace:"calico-system", SelfLink:"", UID:"233c9654-f369-4545-aeb8-b29c6d794c17", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66cb745f54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204", Pod:"calico-kube-controllers-66cb745f54-d6cjg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliecaf8c7db89", MAC:"82:33:5f:a9:29:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:13.380101 containerd[1464]: 2025-07-14 22:22:13.372 [INFO][4599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204" Namespace="calico-system" Pod="calico-kube-controllers-66cb745f54-d6cjg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:13.386832 containerd[1464]: time="2025-07-14T22:22:13.386781190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xmfp,Uid:a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7,Namespace:calico-system,Attempt:1,} returns sandbox id \"6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691\"" Jul 14 22:22:13.424422 containerd[1464]: time="2025-07-14T22:22:13.423204616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:22:13.424422 containerd[1464]: time="2025-07-14T22:22:13.423325219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:22:13.424617 containerd[1464]: time="2025-07-14T22:22:13.423351270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:13.425359 containerd[1464]: time="2025-07-14T22:22:13.425286559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:22:13.447446 systemd[1]: Started cri-containerd-d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204.scope - libcontainer container d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204. Jul 14 22:22:13.464199 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:22:13.496916 containerd[1464]: time="2025-07-14T22:22:13.496875983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66cb745f54-d6cjg,Uid:233c9654-f369-4545-aeb8-b29c6d794c17,Namespace:calico-system,Attempt:1,} returns sandbox id \"d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204\"" Jul 14 22:22:13.510365 systemd-networkd[1394]: calia82aa35ac03: Gained IPv6LL Jul 14 22:22:13.584253 containerd[1464]: time="2025-07-14T22:22:13.584110922Z" level=info msg="StopPodSandbox for \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\"" Jul 14 22:22:13.754101 kubelet[2543]: E0714 22:22:13.754059 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:22:13.786633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4209512259.mount: Deactivated successfully. Jul 14 22:22:13.830480 systemd-networkd[1394]: calide80579e761: Gained IPv6LL Jul 14 22:22:13.910478 systemd-networkd[1394]: cali8ff0e93dbf4: Link UP Jul 14 22:22:13.911141 systemd-networkd[1394]: cali8ff0e93dbf4: Gained carrier Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.166 [INFO][4627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0 coredns-668d6bf9bc- kube-system b0aee35e-b810-44e1-8f6a-22ac05756d20 946 0 2025-07-14 22:21:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-vgqdz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8ff0e93dbf4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" Namespace="kube-system" Pod="coredns-668d6bf9bc-vgqdz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vgqdz-" Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.166 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" Namespace="kube-system" Pod="coredns-668d6bf9bc-vgqdz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0" Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.233 [INFO][4680] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" HandleID="k8s-pod-network.7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" Workload="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0" Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.233 [INFO][4680] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" HandleID="k8s-pod-network.7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" 
Workload="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000583f30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-vgqdz", "timestamp":"2025-07-14 22:22:13.23350789 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.233 [INFO][4680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.345 [INFO][4680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.345 [INFO][4680] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.409 [INFO][4680] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" host="localhost" Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.417 [INFO][4680] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.424 [INFO][4680] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.426 [INFO][4680] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.428 [INFO][4680] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.428 [INFO][4680] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" host="localhost" Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.431 [INFO][4680] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7 Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.655 [INFO][4680] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" host="localhost" Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.904 [INFO][4680] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" host="localhost" Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.904 [INFO][4680] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" host="localhost" Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.904 [INFO][4680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:22:14.147653 containerd[1464]: 2025-07-14 22:22:13.904 [INFO][4680] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" HandleID="k8s-pod-network.7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" Workload="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0"
Jul 14 22:22:14.148648 containerd[1464]: 2025-07-14 22:22:13.908 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" Namespace="kube-system" Pod="coredns-668d6bf9bc-vgqdz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b0aee35e-b810-44e1-8f6a-22ac05756d20", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-vgqdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ff0e93dbf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 22:22:14.148648 containerd[1464]: 2025-07-14 22:22:13.908 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" Namespace="kube-system" Pod="coredns-668d6bf9bc-vgqdz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0"
Jul 14 22:22:14.148648 containerd[1464]: 2025-07-14 22:22:13.908 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ff0e93dbf4 ContainerID="7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" Namespace="kube-system" Pod="coredns-668d6bf9bc-vgqdz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0"
Jul 14 22:22:14.148648 containerd[1464]: 2025-07-14 22:22:13.911 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" Namespace="kube-system" Pod="coredns-668d6bf9bc-vgqdz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0"
Jul 14 22:22:14.148648 containerd[1464]: 2025-07-14 22:22:13.912 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" Namespace="kube-system" Pod="coredns-668d6bf9bc-vgqdz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b0aee35e-b810-44e1-8f6a-22ac05756d20", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7", Pod:"coredns-668d6bf9bc-vgqdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ff0e93dbf4", MAC:"f6:f0:04:6f:54:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 22:22:14.148823 containerd[1464]: 2025-07-14 22:22:14.145 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7" Namespace="kube-system" Pod="coredns-668d6bf9bc-vgqdz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0"
Jul 14 22:22:14.534466 systemd-networkd[1394]: caliecaf8c7db89: Gained IPv6LL
Jul 14 22:22:14.713260 containerd[1464]: 2025-07-14 22:22:14.222 [INFO][4836] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d"
Jul 14 22:22:14.713260 containerd[1464]: 2025-07-14 22:22:14.222 [INFO][4836] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" iface="eth0" netns="/var/run/netns/cni-1db9698f-919b-083f-eccb-d21686d1adfb"
Jul 14 22:22:14.713260 containerd[1464]: 2025-07-14 22:22:14.222 [INFO][4836] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" iface="eth0" netns="/var/run/netns/cni-1db9698f-919b-083f-eccb-d21686d1adfb"
Jul 14 22:22:14.713260 containerd[1464]: 2025-07-14 22:22:14.222 [INFO][4836] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" iface="eth0" netns="/var/run/netns/cni-1db9698f-919b-083f-eccb-d21686d1adfb"
Jul 14 22:22:14.713260 containerd[1464]: 2025-07-14 22:22:14.222 [INFO][4836] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d"
Jul 14 22:22:14.713260 containerd[1464]: 2025-07-14 22:22:14.222 [INFO][4836] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d"
Jul 14 22:22:14.713260 containerd[1464]: 2025-07-14 22:22:14.243 [INFO][4855] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" HandleID="k8s-pod-network.660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0"
Jul 14 22:22:14.713260 containerd[1464]: 2025-07-14 22:22:14.243 [INFO][4855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:22:14.713260 containerd[1464]: 2025-07-14 22:22:14.243 [INFO][4855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:22:14.713260 containerd[1464]: 2025-07-14 22:22:14.704 [WARNING][4855] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" HandleID="k8s-pod-network.660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0"
Jul 14 22:22:14.713260 containerd[1464]: 2025-07-14 22:22:14.704 [INFO][4855] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" HandleID="k8s-pod-network.660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0"
Jul 14 22:22:14.713260 containerd[1464]: 2025-07-14 22:22:14.706 [INFO][4855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:22:14.713260 containerd[1464]: 2025-07-14 22:22:14.710 [INFO][4836] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d"
Jul 14 22:22:14.713658 containerd[1464]: time="2025-07-14T22:22:14.713424555Z" level=info msg="TearDown network for sandbox \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\" successfully"
Jul 14 22:22:14.713658 containerd[1464]: time="2025-07-14T22:22:14.713449153Z" level=info msg="StopPodSandbox for \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\" returns successfully"
Jul 14 22:22:14.714312 containerd[1464]: time="2025-07-14T22:22:14.714071093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cbd7cb79-zzkpk,Uid:6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3,Namespace:calico-apiserver,Attempt:1,}"
Jul 14 22:22:14.716450 systemd[1]: run-netns-cni\x2d1db9698f\x2d919b\x2d083f\x2deccb\x2dd21686d1adfb.mount: Deactivated successfully.
Jul 14 22:22:14.755390 kubelet[2543]: E0714 22:22:14.755348 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:14.840287 containerd[1464]: time="2025-07-14T22:22:14.839613189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:22:14.840287 containerd[1464]: time="2025-07-14T22:22:14.839687933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:22:14.840287 containerd[1464]: time="2025-07-14T22:22:14.839702692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:22:14.840287 containerd[1464]: time="2025-07-14T22:22:14.839810971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:22:14.866390 systemd[1]: Started cri-containerd-7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7.scope - libcontainer container 7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7.
Jul 14 22:22:14.881342 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 22:22:14.916511 containerd[1464]: time="2025-07-14T22:22:14.916461436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vgqdz,Uid:b0aee35e-b810-44e1-8f6a-22ac05756d20,Namespace:kube-system,Attempt:1,} returns sandbox id \"7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7\""
Jul 14 22:22:14.917302 kubelet[2543]: E0714 22:22:14.917276 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:14.919434 containerd[1464]: time="2025-07-14T22:22:14.919398957Z" level=info msg="CreateContainer within sandbox \"7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 14 22:22:15.046414 systemd-networkd[1394]: cali8c2d8b77abb: Gained IPv6LL
Jul 14 22:22:15.238734 systemd-networkd[1394]: cali8ff0e93dbf4: Gained IPv6LL
Jul 14 22:22:15.338606 containerd[1464]: time="2025-07-14T22:22:15.338526425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:15.401441 containerd[1464]: time="2025-07-14T22:22:15.401385296Z" level=info msg="CreateContainer within sandbox \"7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cac80c9c60231aeeb78745825f3a755dcf83c45b750f4ae55d01662d22f5fb4c\""
Jul 14 22:22:15.402933 containerd[1464]: time="2025-07-14T22:22:15.402788432Z" level=info msg="StartContainer for \"cac80c9c60231aeeb78745825f3a755dcf83c45b750f4ae55d01662d22f5fb4c\""
Jul 14 22:22:15.404071 containerd[1464]: time="2025-07-14T22:22:15.403241516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477"
Jul 14 22:22:15.427214 containerd[1464]: time="2025-07-14T22:22:15.427089798Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:15.437801 containerd[1464]: time="2025-07-14T22:22:15.436051094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:15.440349 containerd[1464]: time="2025-07-14T22:22:15.439801718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 5.299858949s"
Jul 14 22:22:15.440349 containerd[1464]: time="2025-07-14T22:22:15.439841575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\""
Jul 14 22:22:15.446644 systemd[1]: Started cri-containerd-cac80c9c60231aeeb78745825f3a755dcf83c45b750f4ae55d01662d22f5fb4c.scope - libcontainer container cac80c9c60231aeeb78745825f3a755dcf83c45b750f4ae55d01662d22f5fb4c.
Jul 14 22:22:15.451665 containerd[1464]: time="2025-07-14T22:22:15.451617981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Jul 14 22:22:15.453730 containerd[1464]: time="2025-07-14T22:22:15.453687062Z" level=info msg="CreateContainer within sandbox \"a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jul 14 22:22:15.552070 containerd[1464]: time="2025-07-14T22:22:15.551879308Z" level=info msg="StartContainer for \"cac80c9c60231aeeb78745825f3a755dcf83c45b750f4ae55d01662d22f5fb4c\" returns successfully"
Jul 14 22:22:15.573594 containerd[1464]: time="2025-07-14T22:22:15.573540531Z" level=info msg="CreateContainer within sandbox \"a3bbf74b4f160e66accc9d858fddb81d087056621b7c669f512420912a355700\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2aae3f1e571fa4c10130618439775b4d0f2191983860f8c9e6a56b88251e0e15\""
Jul 14 22:22:15.576579 containerd[1464]: time="2025-07-14T22:22:15.576469220Z" level=info msg="StartContainer for \"2aae3f1e571fa4c10130618439775b4d0f2191983860f8c9e6a56b88251e0e15\""
Jul 14 22:22:15.630368 systemd[1]: Started cri-containerd-2aae3f1e571fa4c10130618439775b4d0f2191983860f8c9e6a56b88251e0e15.scope - libcontainer container 2aae3f1e571fa4c10130618439775b4d0f2191983860f8c9e6a56b88251e0e15.
Jul 14 22:22:15.750948 containerd[1464]: time="2025-07-14T22:22:15.749894496Z" level=info msg="StartContainer for \"2aae3f1e571fa4c10130618439775b4d0f2191983860f8c9e6a56b88251e0e15\" returns successfully"
Jul 14 22:22:15.760647 systemd-networkd[1394]: cali68314e80603: Link UP
Jul 14 22:22:15.762077 systemd-networkd[1394]: cali68314e80603: Gained carrier
Jul 14 22:22:15.767253 kubelet[2543]: E0714 22:22:15.765945 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.432 [INFO][4907] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0 calico-apiserver-8cbd7cb79- calico-apiserver 6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3 974 0 2025-07-14 22:21:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8cbd7cb79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8cbd7cb79-zzkpk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali68314e80603 [] [] }} ContainerID="01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-zzkpk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-"
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.433 [INFO][4907] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-zzkpk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0"
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.575 [INFO][4960] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" HandleID="k8s-pod-network.01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0"
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.575 [INFO][4960] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" HandleID="k8s-pod-network.01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042f620), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8cbd7cb79-zzkpk", "timestamp":"2025-07-14 22:22:15.575209169 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.575 [INFO][4960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.575 [INFO][4960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.575 [INFO][4960] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.588 [INFO][4960] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" host="localhost"
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.606 [INFO][4960] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.612 [INFO][4960] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.613 [INFO][4960] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.616 [INFO][4960] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.616 [INFO][4960] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" host="localhost"
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.618 [INFO][4960] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.700 [INFO][4960] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" host="localhost"
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.749 [INFO][4960] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" host="localhost"
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.749 [INFO][4960] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" host="localhost"
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.749 [INFO][4960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:22:15.790465 containerd[1464]: 2025-07-14 22:22:15.749 [INFO][4960] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" HandleID="k8s-pod-network.01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0"
Jul 14 22:22:15.791881 containerd[1464]: 2025-07-14 22:22:15.755 [INFO][4907] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-zzkpk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0", GenerateName:"calico-apiserver-8cbd7cb79-", Namespace:"calico-apiserver", SelfLink:"", UID:"6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cbd7cb79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8cbd7cb79-zzkpk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68314e80603", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 22:22:15.791881 containerd[1464]: 2025-07-14 22:22:15.755 [INFO][4907] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-zzkpk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0"
Jul 14 22:22:15.791881 containerd[1464]: 2025-07-14 22:22:15.755 [INFO][4907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68314e80603 ContainerID="01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-zzkpk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0"
Jul 14 22:22:15.791881 containerd[1464]: 2025-07-14 22:22:15.763 [INFO][4907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-zzkpk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0"
Jul 14 22:22:15.791881 containerd[1464]: 2025-07-14 22:22:15.763 [INFO][4907] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-zzkpk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0", GenerateName:"calico-apiserver-8cbd7cb79-", Namespace:"calico-apiserver", SelfLink:"", UID:"6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cbd7cb79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec", Pod:"calico-apiserver-8cbd7cb79-zzkpk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68314e80603", MAC:"1e:e7:49:d2:00:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 22:22:15.791881 containerd[1464]: 2025-07-14 22:22:15.781 [INFO][4907] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec" Namespace="calico-apiserver" Pod="calico-apiserver-8cbd7cb79-zzkpk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0"
Jul 14 22:22:15.818761 containerd[1464]: time="2025-07-14T22:22:15.818335874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:22:15.818761 containerd[1464]: time="2025-07-14T22:22:15.818400219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:22:15.818761 containerd[1464]: time="2025-07-14T22:22:15.818423554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:22:15.818761 containerd[1464]: time="2025-07-14T22:22:15.818530179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:22:15.822604 kubelet[2543]: I0714 22:22:15.822255 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5cb9877969-chpbx" podStartSLOduration=2.436855939 podStartE2EDuration="9.822236709s" podCreationTimestamp="2025-07-14 22:22:06 +0000 UTC" firstStartedPulling="2025-07-14 22:22:08.057001285 +0000 UTC m=+41.549022168" lastFinishedPulling="2025-07-14 22:22:15.442382065 +0000 UTC m=+48.934402938" observedRunningTime="2025-07-14 22:22:15.784381419 +0000 UTC m=+49.276402292" watchObservedRunningTime="2025-07-14 22:22:15.822236709 +0000 UTC m=+49.314257572"
Jul 14 22:22:15.824476 kubelet[2543]: I0714 22:22:15.824204 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vgqdz" podStartSLOduration=42.824154057 podStartE2EDuration="42.824154057s" podCreationTimestamp="2025-07-14 22:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:22:15.82172379 +0000 UTC m=+49.313744663" watchObservedRunningTime="2025-07-14 22:22:15.824154057 +0000 UTC m=+49.316175020"
Jul 14 22:22:15.847387 systemd[1]: Started cri-containerd-01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec.scope - libcontainer container 01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec.
Jul 14 22:22:15.864896 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 22:22:15.894788 containerd[1464]: time="2025-07-14T22:22:15.894745113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cbd7cb79-zzkpk,Uid:6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec\""
Jul 14 22:22:16.770314 kubelet[2543]: E0714 22:22:16.769900 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:17.478410 systemd-networkd[1394]: cali68314e80603: Gained IPv6LL
Jul 14 22:22:17.773175 kubelet[2543]: E0714 22:22:17.772859 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:19.285764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831124280.mount: Deactivated successfully.
Jul 14 22:22:20.057852 containerd[1464]: time="2025-07-14T22:22:20.057786851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:20.058982 containerd[1464]: time="2025-07-14T22:22:20.058911372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308"
Jul 14 22:22:20.059940 containerd[1464]: time="2025-07-14T22:22:20.059911445Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:20.062484 containerd[1464]: time="2025-07-14T22:22:20.062445145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:20.063275 containerd[1464]: time="2025-07-14T22:22:20.063242969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.611558912s"
Jul 14 22:22:20.063335 containerd[1464]: time="2025-07-14T22:22:20.063278678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\""
Jul 14 22:22:20.064290 containerd[1464]: time="2025-07-14T22:22:20.064262128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 14 22:22:20.065431 containerd[1464]: time="2025-07-14T22:22:20.065373465Z" level=info msg="CreateContainer within sandbox \"4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 14 22:22:20.080160 containerd[1464]: time="2025-07-14T22:22:20.080109251Z" level=info msg="CreateContainer within sandbox \"4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8d32d6c0586e44be626acca167f8f72ef0f16c1061f2d59769259d9b7fee393d\""
Jul 14 22:22:20.080661 containerd[1464]: time="2025-07-14T22:22:20.080637537Z" level=info msg="StartContainer for \"8d32d6c0586e44be626acca167f8f72ef0f16c1061f2d59769259d9b7fee393d\""
Jul 14 22:22:20.116377 systemd[1]: Started cri-containerd-8d32d6c0586e44be626acca167f8f72ef0f16c1061f2d59769259d9b7fee393d.scope - libcontainer container 8d32d6c0586e44be626acca167f8f72ef0f16c1061f2d59769259d9b7fee393d.
Jul 14 22:22:20.160283 containerd[1464]: time="2025-07-14T22:22:20.160205658Z" level=info msg="StartContainer for \"8d32d6c0586e44be626acca167f8f72ef0f16c1061f2d59769259d9b7fee393d\" returns successfully"
Jul 14 22:22:22.839642 containerd[1464]: time="2025-07-14T22:22:22.839599031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:22.840713 containerd[1464]: time="2025-07-14T22:22:22.840681398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977"
Jul 14 22:22:22.842329 containerd[1464]: time="2025-07-14T22:22:22.842294356Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:22.844773 containerd[1464]: time="2025-07-14T22:22:22.844744590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:22.845376 containerd[1464]: time="2025-07-14T22:22:22.845338941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.781047826s"
Jul 14 22:22:22.845376 containerd[1464]: time="2025-07-14T22:22:22.845372044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 14 22:22:22.846489 containerd[1464]: time="2025-07-14T22:22:22.846328741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Jul 14 22:22:22.847495 containerd[1464]: time="2025-07-14T22:22:22.847471455Z" level=info msg="CreateContainer within sandbox \"8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 14 22:22:22.865270 containerd[1464]: time="2025-07-14T22:22:22.865196344Z" level=info msg="CreateContainer within sandbox \"8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0bca6aa4940ce13406f517eaa2ed7f854e86d50c2e40f7c2ce92464b787efb39\""
Jul 14 22:22:22.865902 containerd[1464]: time="2025-07-14T22:22:22.865862704Z" level=info msg="StartContainer for \"0bca6aa4940ce13406f517eaa2ed7f854e86d50c2e40f7c2ce92464b787efb39\""
Jul 14 22:22:22.901382 systemd[1]: Started cri-containerd-0bca6aa4940ce13406f517eaa2ed7f854e86d50c2e40f7c2ce92464b787efb39.scope - libcontainer container 0bca6aa4940ce13406f517eaa2ed7f854e86d50c2e40f7c2ce92464b787efb39.
Jul 14 22:22:22.941360 containerd[1464]: time="2025-07-14T22:22:22.941305856Z" level=info msg="StartContainer for \"0bca6aa4940ce13406f517eaa2ed7f854e86d50c2e40f7c2ce92464b787efb39\" returns successfully"
Jul 14 22:22:24.033015 kubelet[2543]: I0714 22:22:24.032931 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8cbd7cb79-jt55m" podStartSLOduration=28.391656033 podStartE2EDuration="39.032909337s" podCreationTimestamp="2025-07-14 22:21:45 +0000 UTC" firstStartedPulling="2025-07-14 22:22:12.204782304 +0000 UTC m=+45.696803177" lastFinishedPulling="2025-07-14 22:22:22.846035608 +0000 UTC m=+56.338056481" observedRunningTime="2025-07-14 22:22:24.03253537 +0000 UTC m=+57.524556243" watchObservedRunningTime="2025-07-14 22:22:24.032909337 +0000 UTC m=+57.524930210"
Jul 14 22:22:24.033586 kubelet[2543]: I0714 22:22:24.033308 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-gcjz6" podStartSLOduration=28.811596025 podStartE2EDuration="37.033300217s" podCreationTimestamp="2025-07-14 22:21:47 +0000 UTC" firstStartedPulling="2025-07-14 22:22:11.842358403 +0000 UTC m=+45.334379276" lastFinishedPulling="2025-07-14 22:22:20.064062595 +0000 UTC m=+53.556083468" observedRunningTime="2025-07-14 22:22:20.795701368 +0000 UTC m=+54.287722251" watchObservedRunningTime="2025-07-14 22:22:24.033300217 +0000 UTC m=+57.525321090"
Jul 14 22:22:24.816330 kubelet[2543]: I0714 22:22:24.816292 2543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 14 22:22:26.570942 containerd[1464]: time="2025-07-14T22:22:26.570895638Z" level=info msg="StopPodSandbox for \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\""
Jul 14 22:22:26.681050 containerd[1464]: 2025-07-14 22:22:26.624 [WARNING][5239] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" WorkloadEndpoint="localhost-k8s-whisker--54bf954d5f--9lmrc-eth0"
Jul 14 22:22:26.681050 containerd[1464]: 2025-07-14 22:22:26.625 [INFO][5239] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3"
Jul 14 22:22:26.681050 containerd[1464]: 2025-07-14 22:22:26.625 [INFO][5239] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" iface="eth0" netns=""
Jul 14 22:22:26.681050 containerd[1464]: 2025-07-14 22:22:26.625 [INFO][5239] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3"
Jul 14 22:22:26.681050 containerd[1464]: 2025-07-14 22:22:26.625 [INFO][5239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3"
Jul 14 22:22:26.681050 containerd[1464]: 2025-07-14 22:22:26.658 [INFO][5249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" HandleID="k8s-pod-network.1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Workload="localhost-k8s-whisker--54bf954d5f--9lmrc-eth0"
Jul 14 22:22:26.681050 containerd[1464]: 2025-07-14 22:22:26.658 [INFO][5249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:22:26.681050 containerd[1464]: 2025-07-14 22:22:26.659 [INFO][5249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:22:26.681050 containerd[1464]: 2025-07-14 22:22:26.669 [WARNING][5249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" HandleID="k8s-pod-network.1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Workload="localhost-k8s-whisker--54bf954d5f--9lmrc-eth0"
Jul 14 22:22:26.681050 containerd[1464]: 2025-07-14 22:22:26.669 [INFO][5249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" HandleID="k8s-pod-network.1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Workload="localhost-k8s-whisker--54bf954d5f--9lmrc-eth0"
Jul 14 22:22:26.681050 containerd[1464]: 2025-07-14 22:22:26.672 [INFO][5249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:22:26.681050 containerd[1464]: 2025-07-14 22:22:26.677 [INFO][5239] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3"
Jul 14 22:22:26.681468 containerd[1464]: time="2025-07-14T22:22:26.681097996Z" level=info msg="TearDown network for sandbox \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\" successfully"
Jul 14 22:22:26.681468 containerd[1464]: time="2025-07-14T22:22:26.681134385Z" level=info msg="StopPodSandbox for \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\" returns successfully"
Jul 14 22:22:26.748936 containerd[1464]: time="2025-07-14T22:22:26.748855689Z" level=info msg="RemovePodSandbox for \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\""
Jul 14 22:22:26.751054 containerd[1464]: time="2025-07-14T22:22:26.751008272Z" level=info msg="Forcibly stopping sandbox \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\""
Jul 14 22:22:27.161180 containerd[1464]: 2025-07-14 22:22:26.908 [WARNING][5266] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" WorkloadEndpoint="localhost-k8s-whisker--54bf954d5f--9lmrc-eth0"
Jul 14 22:22:27.161180 containerd[1464]: 2025-07-14 22:22:26.908 [INFO][5266] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3"
Jul 14 22:22:27.161180 containerd[1464]: 2025-07-14 22:22:26.908 [INFO][5266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" iface="eth0" netns=""
Jul 14 22:22:27.161180 containerd[1464]: 2025-07-14 22:22:26.909 [INFO][5266] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3"
Jul 14 22:22:27.161180 containerd[1464]: 2025-07-14 22:22:26.909 [INFO][5266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3"
Jul 14 22:22:27.161180 containerd[1464]: 2025-07-14 22:22:26.933 [INFO][5279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" HandleID="k8s-pod-network.1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Workload="localhost-k8s-whisker--54bf954d5f--9lmrc-eth0"
Jul 14 22:22:27.161180 containerd[1464]: 2025-07-14 22:22:26.933 [INFO][5279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:22:27.161180 containerd[1464]: 2025-07-14 22:22:26.933 [INFO][5279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:22:27.161180 containerd[1464]: 2025-07-14 22:22:27.153 [WARNING][5279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" HandleID="k8s-pod-network.1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Workload="localhost-k8s-whisker--54bf954d5f--9lmrc-eth0"
Jul 14 22:22:27.161180 containerd[1464]: 2025-07-14 22:22:27.153 [INFO][5279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" HandleID="k8s-pod-network.1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3" Workload="localhost-k8s-whisker--54bf954d5f--9lmrc-eth0"
Jul 14 22:22:27.161180 containerd[1464]: 2025-07-14 22:22:27.155 [INFO][5279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:22:27.161180 containerd[1464]: 2025-07-14 22:22:27.158 [INFO][5266] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3"
Jul 14 22:22:27.161736 containerd[1464]: time="2025-07-14T22:22:27.161246459Z" level=info msg="TearDown network for sandbox \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\" successfully"
Jul 14 22:22:27.188179 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:50632.service - OpenSSH per-connection server daemon (10.0.0.1:50632).
Jul 14 22:22:27.205787 containerd[1464]: time="2025-07-14T22:22:27.205715016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:27.206758 containerd[1464]: time="2025-07-14T22:22:27.206717946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190"
Jul 14 22:22:27.208061 containerd[1464]: time="2025-07-14T22:22:27.208029648Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 14 22:22:27.208280 containerd[1464]: time="2025-07-14T22:22:27.208101195Z" level=info msg="RemovePodSandbox \"1e5b86c78f7d986bb6542cae65c879461ab3f8432421c49be3b4588d4b6553e3\" returns successfully" Jul 14 22:22:27.208641 containerd[1464]: time="2025-07-14T22:22:27.208606803Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:27.213462 containerd[1464]: time="2025-07-14T22:22:27.213200768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:27.213703 containerd[1464]: time="2025-07-14T22:22:27.213677040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 4.367179235s" Jul 14 22:22:27.213810 containerd[1464]: time="2025-07-14T22:22:27.213778074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 14 22:22:27.215164 containerd[1464]: time="2025-07-14T22:22:27.215135803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 14 22:22:27.216375 containerd[1464]: time="2025-07-14T22:22:27.216301194Z" level=info msg="CreateContainer within sandbox \"6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 14 22:22:27.222256 containerd[1464]: time="2025-07-14T22:22:27.221996489Z" level=info msg="StopPodSandbox for \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\"" Jul 14 22:22:27.282359 containerd[1464]: time="2025-07-14T22:22:27.281427702Z" level=info msg="CreateContainer within sandbox \"6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0a3cb71c29cc3e0777116cd75db8183f30078546de9b8d4647a982023accece6\"" Jul 14 22:22:27.283454 containerd[1464]: time="2025-07-14T22:22:27.283415087Z" level=info msg="StartContainer for \"0a3cb71c29cc3e0777116cd75db8183f30078546de9b8d4647a982023accece6\"" Jul 14 22:22:27.299936 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 50632 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:22:27.303170 sshd[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:22:27.309819 systemd-logind[1440]: New session 8 of user core. Jul 14 22:22:27.318473 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 14 22:22:27.323883 containerd[1464]: 2025-07-14 22:22:27.272 [WARNING][5301] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0", GenerateName:"calico-apiserver-8cbd7cb79-", Namespace:"calico-apiserver", SelfLink:"", UID:"24b165d4-dcd3-454b-b444-600d3f259636", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cbd7cb79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555", Pod:"calico-apiserver-8cbd7cb79-jt55m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia82aa35ac03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:27.323883 containerd[1464]: 2025-07-14 22:22:27.273 [INFO][5301] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Jul 14 22:22:27.323883 containerd[1464]: 2025-07-14 22:22:27.273 [INFO][5301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" iface="eth0" netns="" Jul 14 22:22:27.323883 containerd[1464]: 2025-07-14 22:22:27.273 [INFO][5301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Jul 14 22:22:27.323883 containerd[1464]: 2025-07-14 22:22:27.273 [INFO][5301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Jul 14 22:22:27.323883 containerd[1464]: 2025-07-14 22:22:27.306 [INFO][5312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" HandleID="k8s-pod-network.928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:27.323883 containerd[1464]: 2025-07-14 22:22:27.306 [INFO][5312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:27.323883 containerd[1464]: 2025-07-14 22:22:27.306 [INFO][5312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:27.323883 containerd[1464]: 2025-07-14 22:22:27.315 [WARNING][5312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" HandleID="k8s-pod-network.928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:27.323883 containerd[1464]: 2025-07-14 22:22:27.315 [INFO][5312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" HandleID="k8s-pod-network.928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:27.323883 containerd[1464]: 2025-07-14 22:22:27.316 [INFO][5312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:27.323883 containerd[1464]: 2025-07-14 22:22:27.320 [INFO][5301] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Jul 14 22:22:27.324763 containerd[1464]: time="2025-07-14T22:22:27.324596911Z" level=info msg="TearDown network for sandbox \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\" successfully" Jul 14 22:22:27.324763 containerd[1464]: time="2025-07-14T22:22:27.324634723Z" level=info msg="StopPodSandbox for \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\" returns successfully" Jul 14 22:22:27.325448 containerd[1464]: time="2025-07-14T22:22:27.325417602Z" level=info msg="RemovePodSandbox for \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\"" Jul 14 22:22:27.325494 containerd[1464]: time="2025-07-14T22:22:27.325476184Z" level=info msg="Forcibly stopping sandbox \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\"" Jul 14 22:22:27.371450 systemd[1]: Started cri-containerd-0a3cb71c29cc3e0777116cd75db8183f30078546de9b8d4647a982023accece6.scope - libcontainer container 0a3cb71c29cc3e0777116cd75db8183f30078546de9b8d4647a982023accece6. Jul 14 22:22:27.551749 containerd[1464]: 2025-07-14 22:22:27.500 [WARNING][5331] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0", GenerateName:"calico-apiserver-8cbd7cb79-", Namespace:"calico-apiserver", SelfLink:"", UID:"24b165d4-dcd3-454b-b444-600d3f259636", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cbd7cb79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8456a61d2b44de74c978871b4bf92b82e921dd8934e199169225e7323e087555", Pod:"calico-apiserver-8cbd7cb79-jt55m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia82aa35ac03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:27.551749 containerd[1464]: 2025-07-14 22:22:27.501 [INFO][5331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Jul 14 22:22:27.551749 containerd[1464]: 2025-07-14 22:22:27.501 [INFO][5331] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" iface="eth0" netns="" Jul 14 22:22:27.551749 containerd[1464]: 2025-07-14 22:22:27.501 [INFO][5331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Jul 14 22:22:27.551749 containerd[1464]: 2025-07-14 22:22:27.501 [INFO][5331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Jul 14 22:22:27.551749 containerd[1464]: 2025-07-14 22:22:27.528 [INFO][5382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" HandleID="k8s-pod-network.928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:27.551749 containerd[1464]: 2025-07-14 22:22:27.528 [INFO][5382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:27.551749 containerd[1464]: 2025-07-14 22:22:27.529 [INFO][5382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:27.551749 containerd[1464]: 2025-07-14 22:22:27.535 [WARNING][5382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" HandleID="k8s-pod-network.928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:27.551749 containerd[1464]: 2025-07-14 22:22:27.535 [INFO][5382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" HandleID="k8s-pod-network.928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--jt55m-eth0" Jul 14 22:22:27.551749 containerd[1464]: 2025-07-14 22:22:27.538 [INFO][5382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:27.551749 containerd[1464]: 2025-07-14 22:22:27.545 [INFO][5331] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af" Jul 14 22:22:27.552476 containerd[1464]: time="2025-07-14T22:22:27.551793639Z" level=info msg="TearDown network for sandbox \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\" successfully" Jul 14 22:22:27.572556 containerd[1464]: time="2025-07-14T22:22:27.572482288Z" level=info msg="StartContainer for \"0a3cb71c29cc3e0777116cd75db8183f30078546de9b8d4647a982023accece6\" returns successfully" Jul 14 22:22:27.589110 containerd[1464]: time="2025-07-14T22:22:27.589053835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:22:27.589260 containerd[1464]: time="2025-07-14T22:22:27.589129621Z" level=info msg="RemovePodSandbox \"928431751f0518163104bad43bc375455f6353a8de0bd1599d24afd6855a68af\" returns successfully" Jul 14 22:22:27.589755 containerd[1464]: time="2025-07-14T22:22:27.589734338Z" level=info msg="StopPodSandbox for \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\"" Jul 14 22:22:27.674190 containerd[1464]: 2025-07-14 22:22:27.630 [WARNING][5401] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7xmfp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691", Pod:"csi-node-driver-7xmfp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c2d8b77abb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:27.674190 containerd[1464]: 2025-07-14 22:22:27.631 [INFO][5401] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Jul 14 22:22:27.674190 containerd[1464]: 2025-07-14 22:22:27.631 [INFO][5401] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" iface="eth0" netns="" Jul 14 22:22:27.674190 containerd[1464]: 2025-07-14 22:22:27.631 [INFO][5401] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Jul 14 22:22:27.674190 containerd[1464]: 2025-07-14 22:22:27.631 [INFO][5401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Jul 14 22:22:27.674190 containerd[1464]: 2025-07-14 22:22:27.659 [INFO][5409] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" HandleID="k8s-pod-network.53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Workload="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:27.674190 containerd[1464]: 2025-07-14 22:22:27.659 [INFO][5409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:27.674190 containerd[1464]: 2025-07-14 22:22:27.659 [INFO][5409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:27.674190 containerd[1464]: 2025-07-14 22:22:27.666 [WARNING][5409] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" HandleID="k8s-pod-network.53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Workload="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:27.674190 containerd[1464]: 2025-07-14 22:22:27.666 [INFO][5409] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" HandleID="k8s-pod-network.53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Workload="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:27.674190 containerd[1464]: 2025-07-14 22:22:27.667 [INFO][5409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:27.674190 containerd[1464]: 2025-07-14 22:22:27.671 [INFO][5401] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Jul 14 22:22:27.676196 containerd[1464]: time="2025-07-14T22:22:27.674218016Z" level=info msg="TearDown network for sandbox \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\" successfully" Jul 14 22:22:27.676196 containerd[1464]: time="2025-07-14T22:22:27.674267701Z" level=info msg="StopPodSandbox for \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\" returns successfully" Jul 14 22:22:27.676196 containerd[1464]: time="2025-07-14T22:22:27.674810019Z" level=info msg="RemovePodSandbox for \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\"" Jul 14 22:22:27.676196 containerd[1464]: time="2025-07-14T22:22:27.674839265Z" level=info msg="Forcibly stopping sandbox \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\"" Jul 14 22:22:27.700299 sshd[5290]: pam_unix(sshd:session): session closed for user core Jul 14 22:22:27.704665 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:50632.service: Deactivated successfully. Jul 14 22:22:27.705026 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. Jul 14 22:22:27.707782 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 22:22:27.710442 systemd-logind[1440]: Removed session 8. Jul 14 22:22:27.749782 containerd[1464]: 2025-07-14 22:22:27.715 [WARNING][5426] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7xmfp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a0d7e0a1-9365-4ef9-a68a-5541a9cd6ec7", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691", Pod:"csi-node-driver-7xmfp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c2d8b77abb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:27.749782 containerd[1464]: 2025-07-14 22:22:27.715 [INFO][5426] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Jul 14 22:22:27.749782 containerd[1464]: 2025-07-14 22:22:27.715 [INFO][5426] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" iface="eth0" netns="" Jul 14 22:22:27.749782 containerd[1464]: 2025-07-14 22:22:27.715 [INFO][5426] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Jul 14 22:22:27.749782 containerd[1464]: 2025-07-14 22:22:27.715 [INFO][5426] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Jul 14 22:22:27.749782 containerd[1464]: 2025-07-14 22:22:27.737 [INFO][5436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" HandleID="k8s-pod-network.53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Workload="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:27.749782 containerd[1464]: 2025-07-14 22:22:27.737 [INFO][5436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:27.749782 containerd[1464]: 2025-07-14 22:22:27.737 [INFO][5436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:27.749782 containerd[1464]: 2025-07-14 22:22:27.742 [WARNING][5436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" HandleID="k8s-pod-network.53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Workload="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:27.749782 containerd[1464]: 2025-07-14 22:22:27.742 [INFO][5436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" HandleID="k8s-pod-network.53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Workload="localhost-k8s-csi--node--driver--7xmfp-eth0" Jul 14 22:22:27.749782 containerd[1464]: 2025-07-14 22:22:27.743 [INFO][5436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:27.749782 containerd[1464]: 2025-07-14 22:22:27.746 [INFO][5426] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77" Jul 14 22:22:27.750557 containerd[1464]: time="2025-07-14T22:22:27.749829437Z" level=info msg="TearDown network for sandbox \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\" successfully" Jul 14 22:22:27.797616 containerd[1464]: time="2025-07-14T22:22:27.797573485Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:22:27.797745 containerd[1464]: time="2025-07-14T22:22:27.797640433Z" level=info msg="RemovePodSandbox \"53bd573ab001a3e37fc4225406273a4c5b1cf97c0804dd6e16c2fd7e86ef8d77\" returns successfully" Jul 14 22:22:27.798154 containerd[1464]: time="2025-07-14T22:22:27.798115323Z" level=info msg="StopPodSandbox for \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\"" Jul 14 22:22:27.875465 containerd[1464]: 2025-07-14 22:22:27.831 [WARNING][5455] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0", GenerateName:"calico-apiserver-8cbd7cb79-", Namespace:"calico-apiserver", SelfLink:"", UID:"6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cbd7cb79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec", Pod:"calico-apiserver-8cbd7cb79-zzkpk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68314e80603", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:27.875465 containerd[1464]: 2025-07-14 22:22:27.832 [INFO][5455] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Jul 14 22:22:27.875465 containerd[1464]: 2025-07-14 22:22:27.832 [INFO][5455] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" iface="eth0" netns="" Jul 14 22:22:27.875465 containerd[1464]: 2025-07-14 22:22:27.832 [INFO][5455] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Jul 14 22:22:27.875465 containerd[1464]: 2025-07-14 22:22:27.832 [INFO][5455] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Jul 14 22:22:27.875465 containerd[1464]: 2025-07-14 22:22:27.851 [INFO][5464] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" HandleID="k8s-pod-network.660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0" Jul 14 22:22:27.875465 containerd[1464]: 2025-07-14 22:22:27.851 [INFO][5464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:27.875465 containerd[1464]: 2025-07-14 22:22:27.851 [INFO][5464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:27.875465 containerd[1464]: 2025-07-14 22:22:27.864 [WARNING][5464] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" HandleID="k8s-pod-network.660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0" Jul 14 22:22:27.875465 containerd[1464]: 2025-07-14 22:22:27.864 [INFO][5464] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" HandleID="k8s-pod-network.660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0" Jul 14 22:22:27.875465 containerd[1464]: 2025-07-14 22:22:27.866 [INFO][5464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:27.875465 containerd[1464]: 2025-07-14 22:22:27.870 [INFO][5455] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Jul 14 22:22:27.875465 containerd[1464]: time="2025-07-14T22:22:27.875453920Z" level=info msg="TearDown network for sandbox \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\" successfully" Jul 14 22:22:27.875966 containerd[1464]: time="2025-07-14T22:22:27.875506200Z" level=info msg="StopPodSandbox for \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\" returns successfully" Jul 14 22:22:27.875966 containerd[1464]: time="2025-07-14T22:22:27.875927077Z" level=info msg="RemovePodSandbox for \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\"" Jul 14 22:22:27.875966 containerd[1464]: time="2025-07-14T22:22:27.875952786Z" level=info msg="Forcibly stopping sandbox \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\"" Jul 14 22:22:27.958597 containerd[1464]: 2025-07-14 22:22:27.931 [WARNING][5481] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0", GenerateName:"calico-apiserver-8cbd7cb79-", Namespace:"calico-apiserver", SelfLink:"", UID:"6cdfbfb8-7af6-4e0b-8036-ff4c455ca4f3", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cbd7cb79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec", Pod:"calico-apiserver-8cbd7cb79-zzkpk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68314e80603", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:27.958597 containerd[1464]: 2025-07-14 22:22:27.931 [INFO][5481] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Jul 14 22:22:27.958597 containerd[1464]: 2025-07-14 22:22:27.931 [INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" iface="eth0" netns="" Jul 14 22:22:27.958597 containerd[1464]: 2025-07-14 22:22:27.931 [INFO][5481] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Jul 14 22:22:27.958597 containerd[1464]: 2025-07-14 22:22:27.931 [INFO][5481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Jul 14 22:22:27.958597 containerd[1464]: 2025-07-14 22:22:27.948 [INFO][5490] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" HandleID="k8s-pod-network.660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0" Jul 14 22:22:27.958597 containerd[1464]: 2025-07-14 22:22:27.948 [INFO][5490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:27.958597 containerd[1464]: 2025-07-14 22:22:27.948 [INFO][5490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:27.958597 containerd[1464]: 2025-07-14 22:22:27.952 [WARNING][5490] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" HandleID="k8s-pod-network.660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0" Jul 14 22:22:27.958597 containerd[1464]: 2025-07-14 22:22:27.952 [INFO][5490] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" HandleID="k8s-pod-network.660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Workload="localhost-k8s-calico--apiserver--8cbd7cb79--zzkpk-eth0" Jul 14 22:22:27.958597 containerd[1464]: 2025-07-14 22:22:27.953 [INFO][5490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:27.958597 containerd[1464]: 2025-07-14 22:22:27.956 [INFO][5481] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d" Jul 14 22:22:27.959008 containerd[1464]: time="2025-07-14T22:22:27.958640825Z" level=info msg="TearDown network for sandbox \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\" successfully" Jul 14 22:22:27.962835 containerd[1464]: time="2025-07-14T22:22:27.962783005Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:22:27.962835 containerd[1464]: time="2025-07-14T22:22:27.962847098Z" level=info msg="RemovePodSandbox \"660a71b0938f51beaf37dd625fc36568dd3ead8633393cfaeba42216d0a1bb6d\" returns successfully" Jul 14 22:22:27.963428 containerd[1464]: time="2025-07-14T22:22:27.963329803Z" level=info msg="StopPodSandbox for \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\"" Jul 14 22:22:28.021361 containerd[1464]: 2025-07-14 22:22:27.992 [WARNING][5507] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b0aee35e-b810-44e1-8f6a-22ac05756d20", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7", Pod:"coredns-668d6bf9bc-vgqdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ff0e93dbf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:28.021361 containerd[1464]: 2025-07-14 22:22:27.992 [INFO][5507] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Jul 14 22:22:28.021361 containerd[1464]: 2025-07-14 22:22:27.992 [INFO][5507] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" iface="eth0" netns="" Jul 14 22:22:28.021361 containerd[1464]: 2025-07-14 22:22:27.992 [INFO][5507] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Jul 14 22:22:28.021361 containerd[1464]: 2025-07-14 22:22:27.992 [INFO][5507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Jul 14 22:22:28.021361 containerd[1464]: 2025-07-14 22:22:28.010 [INFO][5515] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" HandleID="k8s-pod-network.837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Workload="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0" Jul 14 22:22:28.021361 containerd[1464]: 2025-07-14 22:22:28.010 [INFO][5515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:28.021361 containerd[1464]: 2025-07-14 22:22:28.010 [INFO][5515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:22:28.021361 containerd[1464]: 2025-07-14 22:22:28.015 [WARNING][5515] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" HandleID="k8s-pod-network.837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Workload="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0" Jul 14 22:22:28.021361 containerd[1464]: 2025-07-14 22:22:28.015 [INFO][5515] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" HandleID="k8s-pod-network.837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Workload="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0" Jul 14 22:22:28.021361 containerd[1464]: 2025-07-14 22:22:28.016 [INFO][5515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:28.021361 containerd[1464]: 2025-07-14 22:22:28.018 [INFO][5507] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Jul 14 22:22:28.021872 containerd[1464]: time="2025-07-14T22:22:28.021396263Z" level=info msg="TearDown network for sandbox \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\" successfully" Jul 14 22:22:28.021872 containerd[1464]: time="2025-07-14T22:22:28.021419868Z" level=info msg="StopPodSandbox for \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\" returns successfully" Jul 14 22:22:28.021872 containerd[1464]: time="2025-07-14T22:22:28.021815325Z" level=info msg="RemovePodSandbox for \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\"" Jul 14 22:22:28.021872 containerd[1464]: time="2025-07-14T22:22:28.021836986Z" level=info msg="Forcibly stopping sandbox \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\"" Jul 14 22:22:28.083903 containerd[1464]: 2025-07-14 22:22:28.052 [WARNING][5532] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b0aee35e-b810-44e1-8f6a-22ac05756d20", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ad59e0f08fee5aa9ccd3451f07b1d9db1ec1b58d5276f58f65c006ad851bed7", Pod:"coredns-668d6bf9bc-vgqdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ff0e93dbf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:28.083903 containerd[1464]: 2025-07-14 22:22:28.053 [INFO][5532] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Jul 14 22:22:28.083903 containerd[1464]: 2025-07-14 22:22:28.053 [INFO][5532] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" iface="eth0" netns="" Jul 14 22:22:28.083903 containerd[1464]: 2025-07-14 22:22:28.053 [INFO][5532] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Jul 14 22:22:28.083903 containerd[1464]: 2025-07-14 22:22:28.053 [INFO][5532] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Jul 14 22:22:28.083903 containerd[1464]: 2025-07-14 22:22:28.072 [INFO][5540] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" HandleID="k8s-pod-network.837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Workload="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0" Jul 14 22:22:28.083903 containerd[1464]: 2025-07-14 22:22:28.072 [INFO][5540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:28.083903 containerd[1464]: 2025-07-14 22:22:28.072 [INFO][5540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:22:28.083903 containerd[1464]: 2025-07-14 22:22:28.077 [WARNING][5540] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" HandleID="k8s-pod-network.837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Workload="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0" Jul 14 22:22:28.083903 containerd[1464]: 2025-07-14 22:22:28.077 [INFO][5540] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" HandleID="k8s-pod-network.837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Workload="localhost-k8s-coredns--668d6bf9bc--vgqdz-eth0" Jul 14 22:22:28.083903 containerd[1464]: 2025-07-14 22:22:28.078 [INFO][5540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:28.083903 containerd[1464]: 2025-07-14 22:22:28.081 [INFO][5532] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00" Jul 14 22:22:28.084336 containerd[1464]: time="2025-07-14T22:22:28.083944142Z" level=info msg="TearDown network for sandbox \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\" successfully" Jul 14 22:22:28.087774 containerd[1464]: time="2025-07-14T22:22:28.087742100Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:22:28.087825 containerd[1464]: time="2025-07-14T22:22:28.087785894Z" level=info msg="RemovePodSandbox \"837363c6cb107b38cb7ba6ea5cbe6a12d22d4f4e44eec2e4adcd9a522d87ff00\" returns successfully" Jul 14 22:22:28.088173 containerd[1464]: time="2025-07-14T22:22:28.088156914Z" level=info msg="StopPodSandbox for \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\"" Jul 14 22:22:28.164877 containerd[1464]: 2025-07-14 22:22:28.117 [WARNING][5558] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1d877bca-eddc-4eb9-ba2d-007238222f97", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636", Pod:"coredns-668d6bf9bc-hlwkw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide80579e761", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:28.164877 containerd[1464]: 2025-07-14 22:22:28.117 [INFO][5558] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Jul 14 22:22:28.164877 containerd[1464]: 2025-07-14 22:22:28.117 [INFO][5558] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" iface="eth0" netns="" Jul 14 22:22:28.164877 containerd[1464]: 2025-07-14 22:22:28.117 [INFO][5558] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Jul 14 22:22:28.164877 containerd[1464]: 2025-07-14 22:22:28.117 [INFO][5558] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Jul 14 22:22:28.164877 containerd[1464]: 2025-07-14 22:22:28.150 [INFO][5567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" HandleID="k8s-pod-network.2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Workload="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:28.164877 containerd[1464]: 2025-07-14 22:22:28.151 [INFO][5567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:28.164877 containerd[1464]: 2025-07-14 22:22:28.151 [INFO][5567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:22:28.164877 containerd[1464]: 2025-07-14 22:22:28.156 [WARNING][5567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" HandleID="k8s-pod-network.2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Workload="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:28.164877 containerd[1464]: 2025-07-14 22:22:28.156 [INFO][5567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" HandleID="k8s-pod-network.2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Workload="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:28.164877 containerd[1464]: 2025-07-14 22:22:28.157 [INFO][5567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:28.164877 containerd[1464]: 2025-07-14 22:22:28.161 [INFO][5558] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Jul 14 22:22:28.164877 containerd[1464]: time="2025-07-14T22:22:28.164832739Z" level=info msg="TearDown network for sandbox \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\" successfully" Jul 14 22:22:28.164877 containerd[1464]: time="2025-07-14T22:22:28.164858248Z" level=info msg="StopPodSandbox for \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\" returns successfully" Jul 14 22:22:28.165547 containerd[1464]: time="2025-07-14T22:22:28.165440011Z" level=info msg="RemovePodSandbox for \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\"" Jul 14 22:22:28.165547 containerd[1464]: time="2025-07-14T22:22:28.165472804Z" level=info msg="Forcibly stopping sandbox \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\"" Jul 14 22:22:28.245970 containerd[1464]: 2025-07-14 22:22:28.207 [WARNING][5583] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1d877bca-eddc-4eb9-ba2d-007238222f97", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db2cee3e9e94cfa2c4710a8413def6ebb02fb8be0fa55c86bb4a6ae0c8f31636", Pod:"coredns-668d6bf9bc-hlwkw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide80579e761", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:28.245970 containerd[1464]: 2025-07-14 22:22:28.207 [INFO][5583] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Jul 14 22:22:28.245970 containerd[1464]: 2025-07-14 22:22:28.207 [INFO][5583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" iface="eth0" netns="" Jul 14 22:22:28.245970 containerd[1464]: 2025-07-14 22:22:28.207 [INFO][5583] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Jul 14 22:22:28.245970 containerd[1464]: 2025-07-14 22:22:28.207 [INFO][5583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Jul 14 22:22:28.245970 containerd[1464]: 2025-07-14 22:22:28.232 [INFO][5598] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" HandleID="k8s-pod-network.2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Workload="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:28.245970 containerd[1464]: 2025-07-14 22:22:28.232 [INFO][5598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:28.245970 containerd[1464]: 2025-07-14 22:22:28.232 [INFO][5598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:22:28.245970 containerd[1464]: 2025-07-14 22:22:28.237 [WARNING][5598] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" HandleID="k8s-pod-network.2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Workload="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:28.245970 containerd[1464]: 2025-07-14 22:22:28.237 [INFO][5598] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" HandleID="k8s-pod-network.2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Workload="localhost-k8s-coredns--668d6bf9bc--hlwkw-eth0" Jul 14 22:22:28.245970 containerd[1464]: 2025-07-14 22:22:28.239 [INFO][5598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:28.245970 containerd[1464]: 2025-07-14 22:22:28.242 [INFO][5583] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383" Jul 14 22:22:28.246477 containerd[1464]: time="2025-07-14T22:22:28.246013044Z" level=info msg="TearDown network for sandbox \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\" successfully" Jul 14 22:22:28.317245 containerd[1464]: time="2025-07-14T22:22:28.317166952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:22:28.317378 containerd[1464]: time="2025-07-14T22:22:28.317274598Z" level=info msg="RemovePodSandbox \"2e396a4eef28a3f7a5be099a477d7eeda0b765b8fc3883611acf015911727383\" returns successfully" Jul 14 22:22:28.318123 containerd[1464]: time="2025-07-14T22:22:28.317855499Z" level=info msg="StopPodSandbox for \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\"" Jul 14 22:22:28.413051 containerd[1464]: 2025-07-14 22:22:28.378 [WARNING][5616] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"3b937dec-5b75-47a6-9753-367ccbffbb4f", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7", Pod:"goldmane-768f4c5c69-gcjz6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39692490256", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:28.413051 containerd[1464]: 2025-07-14 22:22:28.378 [INFO][5616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Jul 14 22:22:28.413051 containerd[1464]: 2025-07-14 22:22:28.378 [INFO][5616] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" iface="eth0" netns="" Jul 14 22:22:28.413051 containerd[1464]: 2025-07-14 22:22:28.378 [INFO][5616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Jul 14 22:22:28.413051 containerd[1464]: 2025-07-14 22:22:28.378 [INFO][5616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Jul 14 22:22:28.413051 containerd[1464]: 2025-07-14 22:22:28.400 [INFO][5624] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" HandleID="k8s-pod-network.7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Workload="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:28.413051 containerd[1464]: 2025-07-14 22:22:28.401 [INFO][5624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:28.413051 containerd[1464]: 2025-07-14 22:22:28.401 [INFO][5624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:28.413051 containerd[1464]: 2025-07-14 22:22:28.405 [WARNING][5624] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" HandleID="k8s-pod-network.7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Workload="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:28.413051 containerd[1464]: 2025-07-14 22:22:28.405 [INFO][5624] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" HandleID="k8s-pod-network.7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Workload="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:28.413051 containerd[1464]: 2025-07-14 22:22:28.407 [INFO][5624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:28.413051 containerd[1464]: 2025-07-14 22:22:28.410 [INFO][5616] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Jul 14 22:22:28.413702 containerd[1464]: time="2025-07-14T22:22:28.413112330Z" level=info msg="TearDown network for sandbox \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\" successfully" Jul 14 22:22:28.413702 containerd[1464]: time="2025-07-14T22:22:28.413142818Z" level=info msg="StopPodSandbox for \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\" returns successfully" Jul 14 22:22:28.413815 containerd[1464]: time="2025-07-14T22:22:28.413765671Z" level=info msg="RemovePodSandbox for \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\"" Jul 14 22:22:28.413815 containerd[1464]: time="2025-07-14T22:22:28.413802731Z" level=info msg="Forcibly stopping sandbox \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\"" Jul 14 22:22:28.491257 containerd[1464]: 2025-07-14 22:22:28.455 [WARNING][5642] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"3b937dec-5b75-47a6-9753-367ccbffbb4f", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b79028289ff97eeb889c540a5a0053f4de5300f72529238d0744ca84c6e87d7", Pod:"goldmane-768f4c5c69-gcjz6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39692490256", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:28.491257 containerd[1464]: 2025-07-14 22:22:28.455 [INFO][5642] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Jul 14 22:22:28.491257 containerd[1464]: 2025-07-14 22:22:28.456 [INFO][5642] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" iface="eth0" netns="" Jul 14 22:22:28.491257 containerd[1464]: 2025-07-14 22:22:28.456 [INFO][5642] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Jul 14 22:22:28.491257 containerd[1464]: 2025-07-14 22:22:28.456 [INFO][5642] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Jul 14 22:22:28.491257 containerd[1464]: 2025-07-14 22:22:28.478 [INFO][5651] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" HandleID="k8s-pod-network.7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Workload="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:28.491257 containerd[1464]: 2025-07-14 22:22:28.478 [INFO][5651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:28.491257 containerd[1464]: 2025-07-14 22:22:28.478 [INFO][5651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:28.491257 containerd[1464]: 2025-07-14 22:22:28.484 [WARNING][5651] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" HandleID="k8s-pod-network.7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Workload="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:28.491257 containerd[1464]: 2025-07-14 22:22:28.484 [INFO][5651] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" HandleID="k8s-pod-network.7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Workload="localhost-k8s-goldmane--768f4c5c69--gcjz6-eth0" Jul 14 22:22:28.491257 containerd[1464]: 2025-07-14 22:22:28.485 [INFO][5651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:28.491257 containerd[1464]: 2025-07-14 22:22:28.488 [INFO][5642] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f" Jul 14 22:22:28.491835 containerd[1464]: time="2025-07-14T22:22:28.491312171Z" level=info msg="TearDown network for sandbox \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\" successfully" Jul 14 22:22:28.977419 containerd[1464]: time="2025-07-14T22:22:28.977245830Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:22:28.977419 containerd[1464]: time="2025-07-14T22:22:28.977350701Z" level=info msg="RemovePodSandbox \"7bb9b9cf8f82876871b02c12167aba2ddb825501d747a620475b8935e18a073f\" returns successfully" Jul 14 22:22:28.978841 containerd[1464]: time="2025-07-14T22:22:28.978798752Z" level=info msg="StopPodSandbox for \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\"" Jul 14 22:22:29.103296 containerd[1464]: 2025-07-14 22:22:29.055 [WARNING][5670] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0", GenerateName:"calico-kube-controllers-66cb745f54-", Namespace:"calico-system", SelfLink:"", UID:"233c9654-f369-4545-aeb8-b29c6d794c17", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66cb745f54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204", Pod:"calico-kube-controllers-66cb745f54-d6cjg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliecaf8c7db89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:29.103296 containerd[1464]: 2025-07-14 22:22:29.056 [INFO][5670] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Jul 14 22:22:29.103296 containerd[1464]: 2025-07-14 22:22:29.056 [INFO][5670] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" iface="eth0" netns="" Jul 14 22:22:29.103296 containerd[1464]: 2025-07-14 22:22:29.056 [INFO][5670] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Jul 14 22:22:29.103296 containerd[1464]: 2025-07-14 22:22:29.056 [INFO][5670] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Jul 14 22:22:29.103296 containerd[1464]: 2025-07-14 22:22:29.079 [INFO][5679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" HandleID="k8s-pod-network.7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Workload="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:29.103296 containerd[1464]: 2025-07-14 22:22:29.080 [INFO][5679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:29.103296 containerd[1464]: 2025-07-14 22:22:29.080 [INFO][5679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:29.103296 containerd[1464]: 2025-07-14 22:22:29.088 [WARNING][5679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" HandleID="k8s-pod-network.7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Workload="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:29.103296 containerd[1464]: 2025-07-14 22:22:29.088 [INFO][5679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" HandleID="k8s-pod-network.7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Workload="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:29.103296 containerd[1464]: 2025-07-14 22:22:29.090 [INFO][5679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:29.103296 containerd[1464]: 2025-07-14 22:22:29.096 [INFO][5670] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Jul 14 22:22:29.104003 containerd[1464]: time="2025-07-14T22:22:29.103356346Z" level=info msg="TearDown network for sandbox \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\" successfully" Jul 14 22:22:29.104003 containerd[1464]: time="2025-07-14T22:22:29.103393587Z" level=info msg="StopPodSandbox for \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\" returns successfully" Jul 14 22:22:29.104003 containerd[1464]: time="2025-07-14T22:22:29.103812929Z" level=info msg="RemovePodSandbox for \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\"" Jul 14 22:22:29.104003 containerd[1464]: time="2025-07-14T22:22:29.103834950Z" level=info msg="Forcibly stopping sandbox \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\"" Jul 14 22:22:29.204407 containerd[1464]: 2025-07-14 22:22:29.166 [WARNING][5696] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0", GenerateName:"calico-kube-controllers-66cb745f54-", Namespace:"calico-system", SelfLink:"", UID:"233c9654-f369-4545-aeb8-b29c6d794c17", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 21, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66cb745f54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204", Pod:"calico-kube-controllers-66cb745f54-d6cjg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliecaf8c7db89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:22:29.204407 containerd[1464]: 2025-07-14 22:22:29.167 [INFO][5696] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Jul 14 22:22:29.204407 containerd[1464]: 2025-07-14 22:22:29.167 [INFO][5696] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" iface="eth0" netns="" Jul 14 22:22:29.204407 containerd[1464]: 2025-07-14 22:22:29.167 [INFO][5696] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Jul 14 22:22:29.204407 containerd[1464]: 2025-07-14 22:22:29.167 [INFO][5696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Jul 14 22:22:29.204407 containerd[1464]: 2025-07-14 22:22:29.188 [INFO][5705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" HandleID="k8s-pod-network.7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Workload="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:29.204407 containerd[1464]: 2025-07-14 22:22:29.189 [INFO][5705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:22:29.204407 containerd[1464]: 2025-07-14 22:22:29.189 [INFO][5705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:22:29.204407 containerd[1464]: 2025-07-14 22:22:29.197 [WARNING][5705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" HandleID="k8s-pod-network.7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Workload="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:29.204407 containerd[1464]: 2025-07-14 22:22:29.197 [INFO][5705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" HandleID="k8s-pod-network.7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Workload="localhost-k8s-calico--kube--controllers--66cb745f54--d6cjg-eth0" Jul 14 22:22:29.204407 containerd[1464]: 2025-07-14 22:22:29.198 [INFO][5705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:22:29.204407 containerd[1464]: 2025-07-14 22:22:29.201 [INFO][5696] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34" Jul 14 22:22:29.204914 containerd[1464]: time="2025-07-14T22:22:29.204436976Z" level=info msg="TearDown network for sandbox \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\" successfully" Jul 14 22:22:29.464838 containerd[1464]: time="2025-07-14T22:22:29.464763475Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:22:29.464987 containerd[1464]: time="2025-07-14T22:22:29.464851744Z" level=info msg="RemovePodSandbox \"7cea0ad9005a2a91a0b6000cef1a0c776bf86ac8332cee5dd335bfa3d67c5b34\" returns successfully" Jul 14 22:22:31.620713 containerd[1464]: time="2025-07-14T22:22:31.620666566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:31.621890 containerd[1464]: time="2025-07-14T22:22:31.621783490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 14 22:22:31.623303 containerd[1464]: time="2025-07-14T22:22:31.623211459Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:31.626064 containerd[1464]: time="2025-07-14T22:22:31.626017150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:31.626760 containerd[1464]: time="2025-07-14T22:22:31.626721186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.411551447s" Jul 14 22:22:31.626760 containerd[1464]: time="2025-07-14T22:22:31.626756553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 14 22:22:31.627958 containerd[1464]: time="2025-07-14T22:22:31.627791311Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 22:22:31.642258 containerd[1464]: time="2025-07-14T22:22:31.640973750Z" level=info msg="CreateContainer within sandbox \"d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 14 22:22:31.784022 containerd[1464]: time="2025-07-14T22:22:31.783963321Z" level=info msg="CreateContainer within sandbox \"d201787043558ed5eec62f14fae6059ee3cbba14d3eaf7b22084de951f712204\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1d169285e856aa1691e638c82cffeea995fd687483a6e222764a7dd5b3df2c96\"" Jul 14 22:22:31.784549 containerd[1464]: time="2025-07-14T22:22:31.784503904Z" level=info msg="StartContainer for \"1d169285e856aa1691e638c82cffeea995fd687483a6e222764a7dd5b3df2c96\"" Jul 14 22:22:31.830399 systemd[1]: Started cri-containerd-1d169285e856aa1691e638c82cffeea995fd687483a6e222764a7dd5b3df2c96.scope - libcontainer container 1d169285e856aa1691e638c82cffeea995fd687483a6e222764a7dd5b3df2c96. Jul 14 22:22:31.877688 containerd[1464]: time="2025-07-14T22:22:31.877352864Z" level=info msg="StartContainer for \"1d169285e856aa1691e638c82cffeea995fd687483a6e222764a7dd5b3df2c96\" returns successfully" Jul 14 22:22:32.078583 containerd[1464]: time="2025-07-14T22:22:32.077793868Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:22:32.080299 containerd[1464]: time="2025-07-14T22:22:32.079816070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 14 22:22:32.081289 containerd[1464]: time="2025-07-14T22:22:32.081261091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 453.441746ms" Jul 14 22:22:32.081326 containerd[1464]: time="2025-07-14T22:22:32.081289986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 22:22:32.083660 containerd[1464]: time="2025-07-14T22:22:32.082429041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 14 22:22:32.083882 containerd[1464]: time="2025-07-14T22:22:32.083838082Z" level=info msg="CreateContainer within sandbox \"01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:22:32.098904 containerd[1464]: time="2025-07-14T22:22:32.098839515Z" level=info msg="CreateContainer within sandbox \"01a9f6301ac5e1a5b3ffafb28538b5e00227b870c675b251ecc2c3c6c10d38ec\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1bc1a0a10ce17add76e96fbd0d9c9d27bf31b40eb5e45484cc30f77ee751359e\"" Jul 14 22:22:32.099474 containerd[1464]: time="2025-07-14T22:22:32.099430855Z" level=info msg="StartContainer for \"1bc1a0a10ce17add76e96fbd0d9c9d27bf31b40eb5e45484cc30f77ee751359e\"" Jul 14 22:22:32.130408 systemd[1]: Started cri-containerd-1bc1a0a10ce17add76e96fbd0d9c9d27bf31b40eb5e45484cc30f77ee751359e.scope - libcontainer container 
1bc1a0a10ce17add76e96fbd0d9c9d27bf31b40eb5e45484cc30f77ee751359e. Jul 14 22:22:32.342164 containerd[1464]: time="2025-07-14T22:22:32.342111041Z" level=info msg="StartContainer for \"1bc1a0a10ce17add76e96fbd0d9c9d27bf31b40eb5e45484cc30f77ee751359e\" returns successfully" Jul 14 22:22:32.722839 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:33950.service - OpenSSH per-connection server daemon (10.0.0.1:33950). Jul 14 22:22:32.846002 sshd[5831]: Accepted publickey for core from 10.0.0.1 port 33950 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg Jul 14 22:22:32.846699 sshd[5831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:22:32.853218 systemd-logind[1440]: New session 9 of user core. Jul 14 22:22:32.857961 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 14 22:22:32.918065 kubelet[2543]: I0714 22:22:32.917459 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8cbd7cb79-zzkpk" podStartSLOduration=31.731261714 podStartE2EDuration="47.917436295s" podCreationTimestamp="2025-07-14 22:21:45 +0000 UTC" firstStartedPulling="2025-07-14 22:22:15.896037375 +0000 UTC m=+49.388058248" lastFinishedPulling="2025-07-14 22:22:32.082211956 +0000 UTC m=+65.574232829" observedRunningTime="2025-07-14 22:22:32.907876125 +0000 UTC m=+66.399896998" watchObservedRunningTime="2025-07-14 22:22:32.917436295 +0000 UTC m=+66.409457168" Jul 14 22:22:32.943590 kubelet[2543]: I0714 22:22:32.943513 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66cb745f54-d6cjg" podStartSLOduration=26.814116225 podStartE2EDuration="44.943494836s" podCreationTimestamp="2025-07-14 22:21:48 +0000 UTC" firstStartedPulling="2025-07-14 22:22:13.498330142 +0000 UTC m=+46.990351015" lastFinishedPulling="2025-07-14 22:22:31.627708743 +0000 UTC m=+65.119729626" observedRunningTime="2025-07-14 22:22:32.922295095 +0000 UTC m=+66.414315978" watchObservedRunningTime="2025-07-14 22:22:32.943494836 +0000 UTC m=+66.435515709" Jul 14 22:22:33.262554 sshd[5831]: pam_unix(sshd:session): session closed for user core Jul 14 22:22:33.268555 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:33950.service: Deactivated successfully. Jul 14 22:22:33.270517 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 22:22:33.272282 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. Jul 14 22:22:33.273552 systemd-logind[1440]: Removed session 9. 
Jul 14 22:22:34.646663 containerd[1464]: time="2025-07-14T22:22:34.646594868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:34.647567 containerd[1464]: time="2025-07-14T22:22:34.647531606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 14 22:22:34.648895 containerd[1464]: time="2025-07-14T22:22:34.648840494Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:34.653236 containerd[1464]: time="2025-07-14T22:22:34.653160207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:22:34.653733 containerd[1464]: time="2025-07-14T22:22:34.653706650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.57123654s"
Jul 14 22:22:34.653772 containerd[1464]: time="2025-07-14T22:22:34.653738691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 14 22:22:34.655970 containerd[1464]: time="2025-07-14T22:22:34.655941665Z" level=info msg="CreateContainer within sandbox \"6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 14 22:22:34.858603 containerd[1464]: time="2025-07-14T22:22:34.858537351Z" level=info msg="CreateContainer within sandbox \"6a407a04a2c0976cf8118e45190b2a347bb3fa7ebadcf037c848f61eb1b3c691\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d2a09c9b89636de9a51c04ffae5424b6883eb62c6dbd3112ea402fcad418761a\""
Jul 14 22:22:34.859392 containerd[1464]: time="2025-07-14T22:22:34.859341124Z" level=info msg="StartContainer for \"d2a09c9b89636de9a51c04ffae5424b6883eb62c6dbd3112ea402fcad418761a\""
Jul 14 22:22:34.952546 systemd[1]: Started cri-containerd-d2a09c9b89636de9a51c04ffae5424b6883eb62c6dbd3112ea402fcad418761a.scope - libcontainer container d2a09c9b89636de9a51c04ffae5424b6883eb62c6dbd3112ea402fcad418761a.
Jul 14 22:22:34.989629 containerd[1464]: time="2025-07-14T22:22:34.989569988Z" level=info msg="StartContainer for \"d2a09c9b89636de9a51c04ffae5424b6883eb62c6dbd3112ea402fcad418761a\" returns successfully"
Jul 14 22:22:35.737106 kubelet[2543]: I0714 22:22:35.737048 2543 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 14 22:22:35.737106 kubelet[2543]: I0714 22:22:35.737083 2543 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 14 22:22:35.882436 kubelet[2543]: I0714 22:22:35.882364 2543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7xmfp" podStartSLOduration=26.616082981 podStartE2EDuration="47.88233509s" podCreationTimestamp="2025-07-14 22:21:48 +0000 UTC" firstStartedPulling="2025-07-14 22:22:13.388319251 +0000 UTC m=+46.880340114" lastFinishedPulling="2025-07-14 22:22:34.65457135 +0000 UTC m=+68.146592223" observedRunningTime="2025-07-14 22:22:35.882108648 +0000 UTC m=+69.374129531" watchObservedRunningTime="2025-07-14 22:22:35.88233509 +0000 UTC m=+69.374355963"
Jul 14 22:22:38.278673 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:33960.service - OpenSSH per-connection server daemon (10.0.0.1:33960).
Jul 14 22:22:38.342098 sshd[5917]: Accepted publickey for core from 10.0.0.1 port 33960 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:38.344008 sshd[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:38.348942 systemd-logind[1440]: New session 10 of user core.
Jul 14 22:22:38.364509 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 14 22:22:38.598978 sshd[5917]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:38.603779 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:33960.service: Deactivated successfully.
Jul 14 22:22:38.606208 systemd[1]: session-10.scope: Deactivated successfully.
Jul 14 22:22:38.606919 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit.
Jul 14 22:22:38.607877 systemd-logind[1440]: Removed session 10.
Jul 14 22:22:39.611037 kubelet[2543]: I0714 22:22:39.610698 2543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 14 22:22:43.625551 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:34300.service - OpenSSH per-connection server daemon (10.0.0.1:34300).
Jul 14 22:22:43.663390 sshd[5960]: Accepted publickey for core from 10.0.0.1 port 34300 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:43.665256 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:43.669260 systemd-logind[1440]: New session 11 of user core.
Jul 14 22:22:43.679404 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 14 22:22:43.828624 sshd[5960]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:43.833019 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:34300.service: Deactivated successfully.
Jul 14 22:22:43.835090 systemd[1]: session-11.scope: Deactivated successfully.
Jul 14 22:22:43.835694 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit.
Jul 14 22:22:43.836554 systemd-logind[1440]: Removed session 11.
Jul 14 22:22:44.586855 kubelet[2543]: E0714 22:22:44.586812 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:46.589880 kubelet[2543]: E0714 22:22:46.589831 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:48.847755 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:34302.service - OpenSSH per-connection server daemon (10.0.0.1:34302).
Jul 14 22:22:48.879641 sshd[5982]: Accepted publickey for core from 10.0.0.1 port 34302 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:48.881219 sshd[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:48.885239 systemd-logind[1440]: New session 12 of user core.
Jul 14 22:22:48.893381 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 14 22:22:49.022637 sshd[5982]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:49.026733 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:34302.service: Deactivated successfully.
Jul 14 22:22:49.029271 systemd[1]: session-12.scope: Deactivated successfully.
Jul 14 22:22:49.030061 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit.
Jul 14 22:22:49.031406 systemd-logind[1440]: Removed session 12.
Jul 14 22:22:50.584498 kubelet[2543]: E0714 22:22:50.584465 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:54.033305 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:41064.service - OpenSSH per-connection server daemon (10.0.0.1:41064).
Jul 14 22:22:54.068459 sshd[6020]: Accepted publickey for core from 10.0.0.1 port 41064 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:54.070077 sshd[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:54.073676 systemd-logind[1440]: New session 13 of user core.
Jul 14 22:22:54.080377 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 14 22:22:54.256519 sshd[6020]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:54.264192 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:41064.service: Deactivated successfully.
Jul 14 22:22:54.267750 systemd[1]: session-13.scope: Deactivated successfully.
Jul 14 22:22:54.269696 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit.
Jul 14 22:22:54.281635 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:41072.service - OpenSSH per-connection server daemon (10.0.0.1:41072).
Jul 14 22:22:54.282747 systemd-logind[1440]: Removed session 13.
Jul 14 22:22:54.312931 sshd[6035]: Accepted publickey for core from 10.0.0.1 port 41072 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:54.314506 sshd[6035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:54.319158 systemd-logind[1440]: New session 14 of user core.
Jul 14 22:22:54.328563 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 14 22:22:54.707671 sshd[6035]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:54.718051 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:41072.service: Deactivated successfully.
Jul 14 22:22:54.719759 systemd[1]: session-14.scope: Deactivated successfully.
Jul 14 22:22:54.721417 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit.
Jul 14 22:22:54.731475 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:41086.service - OpenSSH per-connection server daemon (10.0.0.1:41086).
Jul 14 22:22:54.732443 systemd-logind[1440]: Removed session 14.
Jul 14 22:22:54.759346 sshd[6048]: Accepted publickey for core from 10.0.0.1 port 41086 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:54.760951 sshd[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:54.765354 systemd-logind[1440]: New session 15 of user core.
Jul 14 22:22:54.783384 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 14 22:22:54.921409 sshd[6048]: pam_unix(sshd:session): session closed for user core
Jul 14 22:22:54.926457 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:41086.service: Deactivated successfully.
Jul 14 22:22:54.929294 systemd[1]: session-15.scope: Deactivated successfully.
Jul 14 22:22:54.930029 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit.
Jul 14 22:22:54.931174 systemd-logind[1440]: Removed session 15.
Jul 14 22:22:58.584269 kubelet[2543]: E0714 22:22:58.584198 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:22:59.935423 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:51476.service - OpenSSH per-connection server daemon (10.0.0.1:51476).
Jul 14 22:22:59.983714 sshd[6069]: Accepted publickey for core from 10.0.0.1 port 51476 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:22:59.985401 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:22:59.989207 systemd-logind[1440]: New session 16 of user core.
Jul 14 22:22:59.995381 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 14 22:23:00.205122 sshd[6069]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:00.209794 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:51476.service: Deactivated successfully.
Jul 14 22:23:00.212625 systemd[1]: session-16.scope: Deactivated successfully.
Jul 14 22:23:00.213321 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit.
Jul 14 22:23:00.214166 systemd-logind[1440]: Removed session 16.
Jul 14 22:23:05.222551 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:51492.service - OpenSSH per-connection server daemon (10.0.0.1:51492).
Jul 14 22:23:05.251864 sshd[6106]: Accepted publickey for core from 10.0.0.1 port 51492 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:05.253705 sshd[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:05.258140 systemd-logind[1440]: New session 17 of user core.
Jul 14 22:23:05.264375 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 14 22:23:05.415501 sshd[6106]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:05.420941 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:51492.service: Deactivated successfully.
Jul 14 22:23:05.423661 systemd[1]: session-17.scope: Deactivated successfully.
Jul 14 22:23:05.424385 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit.
Jul 14 22:23:05.425431 systemd-logind[1440]: Removed session 17.
Jul 14 22:23:10.428706 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:45206.service - OpenSSH per-connection server daemon (10.0.0.1:45206).
Jul 14 22:23:10.463645 sshd[6122]: Accepted publickey for core from 10.0.0.1 port 45206 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:10.465460 sshd[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:10.469600 systemd-logind[1440]: New session 18 of user core.
Jul 14 22:23:10.478372 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 14 22:23:10.603393 sshd[6122]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:10.607544 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:45206.service: Deactivated successfully.
Jul 14 22:23:10.609570 systemd[1]: session-18.scope: Deactivated successfully.
Jul 14 22:23:10.610180 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit.
Jul 14 22:23:10.611013 systemd-logind[1440]: Removed session 18.
Jul 14 22:23:15.616304 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:45208.service - OpenSSH per-connection server daemon (10.0.0.1:45208).
Jul 14 22:23:15.650416 sshd[6158]: Accepted publickey for core from 10.0.0.1 port 45208 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:15.652153 sshd[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:15.656549 systemd-logind[1440]: New session 19 of user core.
Jul 14 22:23:15.663395 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 14 22:23:15.802822 sshd[6158]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:15.807530 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:45208.service: Deactivated successfully.
Jul 14 22:23:15.809354 systemd[1]: session-19.scope: Deactivated successfully.
Jul 14 22:23:15.810110 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit.
Jul 14 22:23:15.811075 systemd-logind[1440]: Removed session 19.
Jul 14 22:23:18.584409 kubelet[2543]: E0714 22:23:18.584362 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:20.814503 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:53950.service - OpenSSH per-connection server daemon (10.0.0.1:53950).
Jul 14 22:23:20.852220 sshd[6172]: Accepted publickey for core from 10.0.0.1 port 53950 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:20.853785 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:20.857791 systemd-logind[1440]: New session 20 of user core.
Jul 14 22:23:20.865386 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 14 22:23:21.028557 sshd[6172]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:21.037410 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:53950.service: Deactivated successfully.
Jul 14 22:23:21.039391 systemd[1]: session-20.scope: Deactivated successfully.
Jul 14 22:23:21.041083 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit.
Jul 14 22:23:21.047081 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:53952.service - OpenSSH per-connection server daemon (10.0.0.1:53952).
Jul 14 22:23:21.048417 systemd-logind[1440]: Removed session 20.
Jul 14 22:23:21.077043 sshd[6186]: Accepted publickey for core from 10.0.0.1 port 53952 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:21.078506 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:21.082319 systemd-logind[1440]: New session 21 of user core.
Jul 14 22:23:21.086388 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 14 22:23:25.988827 systemd[1]: run-containerd-runc-k8s.io-1d169285e856aa1691e638c82cffeea995fd687483a6e222764a7dd5b3df2c96-runc.nN9gtO.mount: Deactivated successfully.
Jul 14 22:23:26.340362 sshd[6186]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:26.353505 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:53952.service: Deactivated successfully.
Jul 14 22:23:26.355673 systemd[1]: session-21.scope: Deactivated successfully.
Jul 14 22:23:26.357655 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit.
Jul 14 22:23:26.366137 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:53960.service - OpenSSH per-connection server daemon (10.0.0.1:53960).
Jul 14 22:23:26.367395 systemd-logind[1440]: Removed session 21.
Jul 14 22:23:26.404005 sshd[6239]: Accepted publickey for core from 10.0.0.1 port 53960 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:26.405565 sshd[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:26.409940 systemd-logind[1440]: New session 22 of user core.
Jul 14 22:23:26.420456 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 14 22:23:28.583814 kubelet[2543]: E0714 22:23:28.583766 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:33.583890 kubelet[2543]: E0714 22:23:33.583847 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:41.970758 sshd[6239]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:41.979510 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:53960.service: Deactivated successfully.
Jul 14 22:23:41.981357 systemd[1]: session-22.scope: Deactivated successfully.
Jul 14 22:23:41.984671 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit.
Jul 14 22:23:41.993587 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:43074.service - OpenSSH per-connection server daemon (10.0.0.1:43074).
Jul 14 22:23:41.995292 systemd-logind[1440]: Removed session 22.
Jul 14 22:23:42.050112 sshd[6333]: Accepted publickey for core from 10.0.0.1 port 43074 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:42.055299 sshd[6333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:42.063064 systemd-logind[1440]: New session 23 of user core.
Jul 14 22:23:42.071800 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 14 22:23:42.439022 sshd[6333]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:42.448447 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:43074.service: Deactivated successfully.
Jul 14 22:23:42.450381 systemd[1]: session-23.scope: Deactivated successfully.
Jul 14 22:23:42.451990 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit.
Jul 14 22:23:42.457504 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:43086.service - OpenSSH per-connection server daemon (10.0.0.1:43086).
Jul 14 22:23:42.458419 systemd-logind[1440]: Removed session 23.
Jul 14 22:23:42.495860 sshd[6347]: Accepted publickey for core from 10.0.0.1 port 43086 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:42.497525 sshd[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:42.501820 systemd-logind[1440]: New session 24 of user core.
Jul 14 22:23:42.509368 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 14 22:23:42.622354 sshd[6347]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:42.626522 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:43086.service: Deactivated successfully.
Jul 14 22:23:42.628422 systemd[1]: session-24.scope: Deactivated successfully.
Jul 14 22:23:42.629064 systemd-logind[1440]: Session 24 logged out. Waiting for processes to exit.
Jul 14 22:23:42.629862 systemd-logind[1440]: Removed session 24.
Jul 14 22:23:47.634343 systemd[1]: Started sshd@24-10.0.0.137:22-10.0.0.1:43102.service - OpenSSH per-connection server daemon (10.0.0.1:43102).
Jul 14 22:23:47.669687 sshd[6387]: Accepted publickey for core from 10.0.0.1 port 43102 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:47.671530 sshd[6387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:47.675781 systemd-logind[1440]: New session 25 of user core.
Jul 14 22:23:47.686368 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 14 22:23:47.811573 sshd[6387]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:47.815988 systemd[1]: sshd@24-10.0.0.137:22-10.0.0.1:43102.service: Deactivated successfully.
Jul 14 22:23:47.818188 systemd[1]: session-25.scope: Deactivated successfully.
Jul 14 22:23:47.818798 systemd-logind[1440]: Session 25 logged out. Waiting for processes to exit.
Jul 14 22:23:47.819628 systemd-logind[1440]: Removed session 25.
Jul 14 22:23:52.583803 kubelet[2543]: E0714 22:23:52.583768 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:52.834580 systemd[1]: Started sshd@25-10.0.0.137:22-10.0.0.1:38570.service - OpenSSH per-connection server daemon (10.0.0.1:38570).
Jul 14 22:23:52.886862 sshd[6423]: Accepted publickey for core from 10.0.0.1 port 38570 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:52.888701 sshd[6423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:52.893133 systemd-logind[1440]: New session 26 of user core.
Jul 14 22:23:52.900349 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 14 22:23:53.162462 sshd[6423]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:53.166561 systemd[1]: sshd@25-10.0.0.137:22-10.0.0.1:38570.service: Deactivated successfully.
Jul 14 22:23:53.168416 systemd[1]: session-26.scope: Deactivated successfully.
Jul 14 22:23:53.173628 systemd-logind[1440]: Session 26 logged out. Waiting for processes to exit.
Jul 14 22:23:53.178126 systemd-logind[1440]: Removed session 26.
Jul 14 22:23:55.584188 kubelet[2543]: E0714 22:23:55.584130 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:23:58.184527 systemd[1]: Started sshd@26-10.0.0.137:22-10.0.0.1:38572.service - OpenSSH per-connection server daemon (10.0.0.1:38572).
Jul 14 22:23:58.211917 sshd[6438]: Accepted publickey for core from 10.0.0.1 port 38572 ssh2: RSA SHA256:RLJcxOrQt4GmabkHhO9YLwty0S0pCwAp6uPPBH4jyLg
Jul 14 22:23:58.213636 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:23:58.217983 systemd-logind[1440]: New session 27 of user core.
Jul 14 22:23:58.226443 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 14 22:23:58.382113 sshd[6438]: pam_unix(sshd:session): session closed for user core
Jul 14 22:23:58.386642 systemd[1]: sshd@26-10.0.0.137:22-10.0.0.1:38572.service: Deactivated successfully.
Jul 14 22:23:58.388752 systemd[1]: session-27.scope: Deactivated successfully.
Jul 14 22:23:58.389442 systemd-logind[1440]: Session 27 logged out. Waiting for processes to exit.
Jul 14 22:23:58.390271 systemd-logind[1440]: Removed session 27.
Jul 14 22:23:59.583371 kubelet[2543]: E0714 22:23:59.583328 2543 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"