Aug 13 07:15:58.901309 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:15:58.901331 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:15:58.901342 kernel: BIOS-provided physical RAM map:
Aug 13 07:15:58.901348 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 07:15:58.901355 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 13 07:15:58.901361 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 13 07:15:58.901368 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 13 07:15:58.901374 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 13 07:15:58.901380 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Aug 13 07:15:58.901387 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Aug 13 07:15:58.901395 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Aug 13 07:15:58.901402 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Aug 13 07:15:58.901411 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Aug 13 07:15:58.901418 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Aug 13 07:15:58.901428 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Aug 13 07:15:58.901435 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 13 07:15:58.901444 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Aug 13 07:15:58.901451 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Aug 13 07:15:58.901458 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 13 07:15:58.901464 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 07:15:58.901471 kernel: NX (Execute Disable) protection: active
Aug 13 07:15:58.901478 kernel: APIC: Static calls initialized
Aug 13 07:15:58.901484 kernel: efi: EFI v2.7 by EDK II
Aug 13 07:15:58.901491 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Aug 13 07:15:58.901498 kernel: SMBIOS 2.8 present.
Aug 13 07:15:58.901504 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Aug 13 07:15:58.901511 kernel: Hypervisor detected: KVM
Aug 13 07:15:58.901520 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:15:58.901527 kernel: kvm-clock: using sched offset of 4994229767 cycles
Aug 13 07:15:58.901534 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:15:58.901541 kernel: tsc: Detected 2794.750 MHz processor
Aug 13 07:15:58.901548 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:15:58.901555 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:15:58.901562 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Aug 13 07:15:58.901569 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 13 07:15:58.901576 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:15:58.901586 kernel: Using GB pages for direct mapping
Aug 13 07:15:58.901592 kernel: Secure boot disabled
Aug 13 07:15:58.901599 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:15:58.901606 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Aug 13 07:15:58.901617 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 07:15:58.901625 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:15:58.901632 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:15:58.901641 kernel: ACPI: FACS 0x000000009CBDD000 000040
Aug 13 07:15:58.901649 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:15:58.901658 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:15:58.901665 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:15:58.901673 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:15:58.901680 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Aug 13 07:15:58.901687 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Aug 13 07:15:58.901697 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Aug 13 07:15:58.901704 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Aug 13 07:15:58.901712 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Aug 13 07:15:58.901719 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Aug 13 07:15:58.901726 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Aug 13 07:15:58.901733 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Aug 13 07:15:58.901741 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Aug 13 07:15:58.901748 kernel: No NUMA configuration found
Aug 13 07:15:58.901757 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Aug 13 07:15:58.901767 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Aug 13 07:15:58.901775 kernel: Zone ranges:
Aug 13 07:15:58.901794 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:15:58.901801 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Aug 13 07:15:58.901809 kernel: Normal empty
Aug 13 07:15:58.901816 kernel: Movable zone start for each node
Aug 13 07:15:58.901823 kernel: Early memory node ranges
Aug 13 07:15:58.901830 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 07:15:58.901837 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Aug 13 07:15:58.901844 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Aug 13 07:15:58.901855 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Aug 13 07:15:58.901862 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Aug 13 07:15:58.901869 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Aug 13 07:15:58.901878 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Aug 13 07:15:58.901886 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:15:58.901893 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 07:15:58.901900 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Aug 13 07:15:58.901907 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:15:58.901914 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Aug 13 07:15:58.901924 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Aug 13 07:15:58.901931 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Aug 13 07:15:58.901938 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 07:15:58.901946 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:15:58.901953 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:15:58.901967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 07:15:58.901974 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:15:58.901981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:15:58.901989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:15:58.901998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:15:58.902006 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:15:58.902013 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 07:15:58.902021 kernel: TSC deadline timer available
Aug 13 07:15:58.902028 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 13 07:15:58.902035 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 07:15:58.902042 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 07:15:58.902049 kernel: kvm-guest: setup PV sched yield
Aug 13 07:15:58.902056 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Aug 13 07:15:58.902066 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:15:58.902074 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:15:58.902081 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 13 07:15:58.902088 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Aug 13 07:15:58.902096 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Aug 13 07:15:58.902103 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 13 07:15:58.902110 kernel: kvm-guest: PV spinlocks enabled
Aug 13 07:15:58.902117 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:15:58.902125 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:15:58.902138 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:15:58.902145 kernel: random: crng init done
Aug 13 07:15:58.902152 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 07:15:58.902160 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 07:15:58.902167 kernel: Fallback order for Node 0: 0
Aug 13 07:15:58.902174 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Aug 13 07:15:58.902181 kernel: Policy zone: DMA32
Aug 13 07:15:58.902189 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:15:58.902199 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 171124K reserved, 0K cma-reserved)
Aug 13 07:15:58.902206 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 07:15:58.902213 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:15:58.902220 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:15:58.902228 kernel: Dynamic Preempt: voluntary
Aug 13 07:15:58.902243 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:15:58.902258 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:15:58.902265 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 07:15:58.902273 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:15:58.902281 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:15:58.902289 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:15:58.902296 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:15:58.902306 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 07:15:58.902314 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 13 07:15:58.902324 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:15:58.902332 kernel: Console: colour dummy device 80x25
Aug 13 07:15:58.902339 kernel: printk: console [ttyS0] enabled
Aug 13 07:15:58.902349 kernel: ACPI: Core revision 20230628
Aug 13 07:15:58.902357 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 07:15:58.902365 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:15:58.902372 kernel: x2apic enabled
Aug 13 07:15:58.902380 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:15:58.902387 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 07:15:58.902395 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 07:15:58.902403 kernel: kvm-guest: setup PV IPIs
Aug 13 07:15:58.902410 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 07:15:58.902420 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 07:15:58.902428 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 13 07:15:58.902436 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 07:15:58.902443 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 07:15:58.902451 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 07:15:58.902458 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:15:58.902466 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:15:58.902474 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:15:58.902481 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 13 07:15:58.902491 kernel: RETBleed: Mitigation: untrained return thunk
Aug 13 07:15:58.902499 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:15:58.902507 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:15:58.902514 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 07:15:58.902525 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 07:15:58.902533 kernel: x86/bugs: return thunk changed
Aug 13 07:15:58.902540 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 07:15:58.902548 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:15:58.902558 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:15:58.902565 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:15:58.902573 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:15:58.902581 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 13 07:15:58.902588 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:15:58.902596 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:15:58.902603 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:15:58.902611 kernel: landlock: Up and running.
Aug 13 07:15:58.902618 kernel: SELinux: Initializing.
Aug 13 07:15:58.902626 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 07:15:58.902636 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 07:15:58.902644 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 13 07:15:58.902651 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:15:58.902659 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:15:58.902667 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:15:58.902674 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 07:15:58.902682 kernel: ... version: 0
Aug 13 07:15:58.902689 kernel: ... bit width: 48
Aug 13 07:15:58.902699 kernel: ... generic registers: 6
Aug 13 07:15:58.902707 kernel: ... value mask: 0000ffffffffffff
Aug 13 07:15:58.902714 kernel: ... max period: 00007fffffffffff
Aug 13 07:15:58.902722 kernel: ... fixed-purpose events: 0
Aug 13 07:15:58.902729 kernel: ... event mask: 000000000000003f
Aug 13 07:15:58.902737 kernel: signal: max sigframe size: 1776
Aug 13 07:15:58.902744 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:15:58.902752 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:15:58.902760 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:15:58.902769 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:15:58.902777 kernel: .... node #0, CPUs: #1 #2 #3
Aug 13 07:15:58.902797 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 07:15:58.902804 kernel: smpboot: Max logical packages: 1
Aug 13 07:15:58.902812 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 13 07:15:58.902819 kernel: devtmpfs: initialized
Aug 13 07:15:58.902827 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:15:58.902834 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Aug 13 07:15:58.902842 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Aug 13 07:15:58.902850 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Aug 13 07:15:58.902860 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Aug 13 07:15:58.902868 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Aug 13 07:15:58.902876 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:15:58.902884 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 07:15:58.902891 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:15:58.902901 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:15:58.902909 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:15:58.902917 kernel: audit: type=2000 audit(1755069357.981:1): state=initialized audit_enabled=0 res=1
Aug 13 07:15:58.902926 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:15:58.902934 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:15:58.902941 kernel: cpuidle: using governor menu
Aug 13 07:15:58.902949 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:15:58.902956 kernel: dca service started, version 1.12.1
Aug 13 07:15:58.902969 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 07:15:58.902977 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 07:15:58.902985 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:15:58.902992 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:15:58.903003 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 07:15:58.903010 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 07:15:58.903018 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:15:58.903025 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:15:58.903033 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:15:58.903040 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:15:58.903048 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:15:58.903055 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 07:15:58.903063 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:15:58.903072 kernel: ACPI: Interpreter enabled
Aug 13 07:15:58.903080 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 07:15:58.903087 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:15:58.903095 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:15:58.903102 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 07:15:58.903110 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 07:15:58.903117 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:15:58.903344 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:15:58.903488 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 07:15:58.903617 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 07:15:58.903628 kernel: PCI host bridge to bus 0000:00
Aug 13 07:15:58.903769 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:15:58.903954 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:15:58.904086 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:15:58.904199 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 13 07:15:58.904319 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 07:15:58.904433 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Aug 13 07:15:58.904548 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:15:58.904708 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 07:15:58.904870 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 07:15:58.905008 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Aug 13 07:15:58.905139 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Aug 13 07:15:58.905264 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Aug 13 07:15:58.905389 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Aug 13 07:15:58.905514 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 07:15:58.905661 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 07:15:58.905804 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Aug 13 07:15:58.905934 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Aug 13 07:15:58.906077 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Aug 13 07:15:58.906252 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Aug 13 07:15:58.906381 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Aug 13 07:15:58.906507 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Aug 13 07:15:58.906633 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Aug 13 07:15:58.906775 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:15:58.906921 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Aug 13 07:15:58.907067 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Aug 13 07:15:58.907196 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Aug 13 07:15:58.907322 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Aug 13 07:15:58.907468 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 07:15:58.907596 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 07:15:58.907740 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 07:15:58.907894 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Aug 13 07:15:58.908061 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Aug 13 07:15:58.908227 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 07:15:58.908357 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Aug 13 07:15:58.908368 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:15:58.908376 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:15:58.908384 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:15:58.908391 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:15:58.908399 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 07:15:58.908411 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 07:15:58.908419 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 07:15:58.908426 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 07:15:58.908434 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 07:15:58.908441 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 07:15:58.908449 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 07:15:58.908456 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 07:15:58.908464 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 07:15:58.908472 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 07:15:58.908481 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 07:15:58.908489 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 07:15:58.908497 kernel: iommu: Default domain type: Translated
Aug 13 07:15:58.908504 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:15:58.908512 kernel: efivars: Registered efivars operations
Aug 13 07:15:58.908519 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:15:58.908527 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:15:58.908534 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Aug 13 07:15:58.908542 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Aug 13 07:15:58.908552 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Aug 13 07:15:58.908559 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Aug 13 07:15:58.908687 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 07:15:58.908829 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 07:15:58.908958 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 07:15:58.908976 kernel: vgaarb: loaded
Aug 13 07:15:58.908984 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 07:15:58.908992 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 07:15:58.909004 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:15:58.909012 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:15:58.909020 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:15:58.909028 kernel: pnp: PnP ACPI init
Aug 13 07:15:58.909182 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 07:15:58.909193 kernel: pnp: PnP ACPI: found 6 devices
Aug 13 07:15:58.909201 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:15:58.909209 kernel: NET: Registered PF_INET protocol family
Aug 13 07:15:58.909217 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 07:15:58.909228 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 07:15:58.909236 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:15:58.909244 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 07:15:58.909251 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 07:15:58.909259 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 07:15:58.909267 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 07:15:58.909274 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 07:15:58.909282 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:15:58.909292 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:15:58.909421 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Aug 13 07:15:58.909547 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Aug 13 07:15:58.909665 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:15:58.909795 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:15:58.909915 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:15:58.910040 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 13 07:15:58.910156 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 07:15:58.910287 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Aug 13 07:15:58.910299 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:15:58.910306 kernel: Initialise system trusted keyrings
Aug 13 07:15:58.910314 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 07:15:58.910322 kernel: Key type asymmetric registered
Aug 13 07:15:58.910329 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:15:58.910337 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:15:58.910344 kernel: io scheduler mq-deadline registered
Aug 13 07:15:58.910352 kernel: io scheduler kyber registered
Aug 13 07:15:58.910363 kernel: io scheduler bfq registered
Aug 13 07:15:58.910371 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:15:58.910379 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 07:15:58.910387 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 07:15:58.910394 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 07:15:58.910402 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:15:58.910410 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:15:58.910418 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:15:58.910425 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:15:58.910435 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:15:58.910443 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:15:58.910581 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 13 07:15:58.910702 kernel: rtc_cmos 00:04: registered as rtc0
Aug 13 07:15:58.910837 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T07:15:58 UTC (1755069358)
Aug 13 07:15:58.910957 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 07:15:58.910979 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 07:15:58.910988 kernel: efifb: probing for efifb
Aug 13 07:15:58.911001 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Aug 13 07:15:58.911009 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Aug 13 07:15:58.911017 kernel: efifb: scrolling: redraw
Aug 13 07:15:58.911025 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Aug 13 07:15:58.911033 kernel: Console: switching to colour frame buffer device 100x37
Aug 13 07:15:58.911040 kernel: fb0: EFI VGA frame buffer device
Aug 13 07:15:58.911068 kernel: pstore: Using crash dump compression: deflate
Aug 13 07:15:58.911078 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 07:15:58.911086 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:15:58.911096 kernel: Segment Routing with IPv6
Aug 13 07:15:58.911104 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:15:58.911114 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:15:58.911122 kernel: Key type dns_resolver registered
Aug 13 07:15:58.911129 kernel: IPI shorthand broadcast: enabled
Aug 13 07:15:58.911137 kernel: sched_clock: Marking stable (937003152, 110128169)->(1090606688, -43475367)
Aug 13 07:15:58.911145 kernel: registered taskstats version 1
Aug 13 07:15:58.911153 kernel: Loading compiled-in X.509 certificates
Aug 13 07:15:58.911161 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:15:58.911171 kernel: Key type .fscrypt registered
Aug 13 07:15:58.911179 kernel: Key type fscrypt-provisioning registered
Aug 13 07:15:58.911187 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 07:15:58.911195 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:15:58.911203 kernel: ima: No architecture policies found
Aug 13 07:15:58.911211 kernel: clk: Disabling unused clocks
Aug 13 07:15:58.911218 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:15:58.911226 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:15:58.911234 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:15:58.911244 kernel: Run /init as init process
Aug 13 07:15:58.911252 kernel: with arguments:
Aug 13 07:15:58.911260 kernel: /init
Aug 13 07:15:58.911267 kernel: with environment:
Aug 13 07:15:58.911275 kernel: HOME=/
Aug 13 07:15:58.911283 kernel: TERM=linux
Aug 13 07:15:58.911290 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:15:58.911300 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:15:58.911313 systemd[1]: Detected virtualization kvm.
Aug 13 07:15:58.911321 systemd[1]: Detected architecture x86-64.
Aug 13 07:15:58.911329 systemd[1]: Running in initrd.
Aug 13 07:15:58.911338 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:15:58.911346 systemd[1]: Hostname set to .
Aug 13 07:15:58.911360 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:15:58.911368 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:15:58.911376 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:15:58.911385 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:15:58.911394 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:15:58.911402 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:15:58.911411 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:15:58.911422 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:15:58.911432 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:15:58.911440 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:15:58.911449 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:15:58.911457 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:15:58.911465 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:15:58.911474 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:15:58.911484 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:15:58.911493 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:15:58.911501 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:15:58.911509 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:15:58.911518 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:15:58.911526 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:15:58.911535 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:15:58.911543 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:15:58.911551 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:15:58.911562 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:15:58.911571 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:15:58.911579 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:15:58.911587 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:15:58.911596 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:15:58.911604 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:15:58.911612 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:15:58.911620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:15:58.911631 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:15:58.911639 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:15:58.911648 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:15:58.911657 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:15:58.911684 systemd-journald[190]: Collecting audit messages is disabled.
Aug 13 07:15:58.911705 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:15:58.911714 systemd-journald[190]: Journal started
Aug 13 07:15:58.911747 systemd-journald[190]: Runtime Journal (/run/log/journal/1f2d4db0b5df47c7885ed2af342735cf) is 6.0M, max 48.3M, 42.2M free.
Aug 13 07:15:58.913639 systemd-modules-load[193]: Inserted module 'overlay'
Aug 13 07:15:58.919829 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:15:58.921802 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:15:58.923180 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:15:58.927868 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:15:58.929885 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:15:58.932885 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:15:58.953352 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:15:58.955815 kernel: Bridge firewalling registered
Aug 13 07:15:58.955833 systemd-modules-load[193]: Inserted module 'br_netfilter'
Aug 13 07:15:58.958366 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:15:58.968005 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:15:58.969555 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:15:58.971762 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:15:58.976127 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:15:58.982829 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:15:58.992947 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:15:59.011299 dracut-cmdline[226]: dracut-dracut-053
Aug 13 07:15:59.015656 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:15:59.035601 systemd-resolved[229]: Positive Trust Anchors:
Aug 13 07:15:59.035639 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:15:59.035684 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:15:59.039194 systemd-resolved[229]: Defaulting to hostname 'linux'.
Aug 13 07:15:59.040942 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:15:59.046250 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:15:59.123845 kernel: SCSI subsystem initialized
Aug 13 07:15:59.132810 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:15:59.143820 kernel: iscsi: registered transport (tcp)
Aug 13 07:15:59.164840 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:15:59.164878 kernel: QLogic iSCSI HBA Driver
Aug 13 07:15:59.225217 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:15:59.235090 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:15:59.261893 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:15:59.261925 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:15:59.262895 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:15:59.304822 kernel: raid6: avx2x4 gen() 30616 MB/s
Aug 13 07:15:59.321810 kernel: raid6: avx2x2 gen() 31335 MB/s
Aug 13 07:15:59.338861 kernel: raid6: avx2x1 gen() 25829 MB/s
Aug 13 07:15:59.338886 kernel: raid6: using algorithm avx2x2 gen() 31335 MB/s
Aug 13 07:15:59.356857 kernel: raid6: .... xor() 19899 MB/s, rmw enabled
Aug 13 07:15:59.356877 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:15:59.377812 kernel: xor: automatically using best checksumming function avx
Aug 13 07:15:59.539823 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:15:59.554685 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:15:59.563972 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:15:59.577726 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Aug 13 07:15:59.582660 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:15:59.594934 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:15:59.610700 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Aug 13 07:15:59.645138 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:15:59.657927 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:15:59.726637 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:15:59.735048 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:15:59.748250 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:15:59.751530 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:15:59.754342 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:15:59.756857 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:15:59.760812 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 13 07:15:59.765047 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 07:15:59.766003 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:15:59.770809 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:15:59.777881 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:15:59.777914 kernel: GPT:9289727 != 19775487
Aug 13 07:15:59.777926 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:15:59.777945 kernel: GPT:9289727 != 19775487
Aug 13 07:15:59.777954 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:15:59.777964 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:15:59.785316 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:15:59.786812 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:15:59.788406 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:15:59.799973 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:15:59.801121 kernel: libata version 3.00 loaded.
Aug 13 07:15:59.800107 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:15:59.805378 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:15:59.806655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:15:59.813895 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 07:15:59.814094 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 07:15:59.806828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:15:59.808067 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:15:59.819040 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 07:15:59.819246 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 07:15:59.818516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:15:59.825820 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467)
Aug 13 07:15:59.827810 kernel: scsi host0: ahci
Aug 13 07:15:59.828854 kernel: scsi host1: ahci
Aug 13 07:15:59.830847 kernel: scsi host2: ahci
Aug 13 07:15:59.832591 kernel: scsi host3: ahci
Aug 13 07:15:59.832624 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (462)
Aug 13 07:15:59.834801 kernel: scsi host4: ahci
Aug 13 07:15:59.842041 kernel: scsi host5: ahci
Aug 13 07:15:59.842222 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Aug 13 07:15:59.842234 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Aug 13 07:15:59.842251 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Aug 13 07:15:59.842262 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Aug 13 07:15:59.842272 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Aug 13 07:15:59.842292 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Aug 13 07:15:59.840795 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 07:15:59.853429 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 07:15:59.864863 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:15:59.869942 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 07:15:59.871169 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 07:15:59.887950 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:15:59.889092 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:15:59.889159 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:15:59.891396 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:15:59.893386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:15:59.898729 disk-uuid[560]: Primary Header is updated.
Aug 13 07:15:59.898729 disk-uuid[560]: Secondary Entries is updated.
Aug 13 07:15:59.898729 disk-uuid[560]: Secondary Header is updated.
Aug 13 07:15:59.902112 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:15:59.905819 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:15:59.914529 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:15:59.925020 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:15:59.954437 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:16:00.152819 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 07:16:00.152901 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 07:16:00.161031 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 13 07:16:00.161119 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 07:16:00.161130 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 07:16:00.175833 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 07:16:00.175851 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 13 07:16:00.177039 kernel: ata3.00: applying bridge limits
Aug 13 07:16:00.177051 kernel: ata3.00: configured for UDMA/100
Aug 13 07:16:00.177808 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 13 07:16:00.230818 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 13 07:16:00.231075 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 07:16:00.246813 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 13 07:16:00.909839 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:16:00.910385 disk-uuid[562]: The operation has completed successfully.
Aug 13 07:16:00.944498 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:16:00.944704 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:16:00.975966 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:16:00.982496 sh[599]: Success
Aug 13 07:16:00.997814 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 13 07:16:01.036443 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:16:01.055539 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:16:01.058599 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:16:01.071867 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:16:01.071938 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:16:01.071951 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:16:01.074145 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:16:01.074161 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:16:01.078866 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:16:01.081196 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:16:01.089999 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:16:01.091671 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:16:01.106141 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:16:01.106214 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:16:01.106226 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:16:01.109804 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:16:01.120088 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:16:01.121850 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:16:01.131207 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:16:01.139962 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:16:01.218669 ignition[697]: Ignition 2.19.0
Aug 13 07:16:01.218683 ignition[697]: Stage: fetch-offline
Aug 13 07:16:01.218722 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:16:01.218733 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:16:01.218852 ignition[697]: parsed url from cmdline: ""
Aug 13 07:16:01.218856 ignition[697]: no config URL provided
Aug 13 07:16:01.218862 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:16:01.218872 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:16:01.218909 ignition[697]: op(1): [started] loading QEMU firmware config module
Aug 13 07:16:01.218915 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 07:16:01.229242 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:16:01.235465 ignition[697]: op(1): [finished] loading QEMU firmware config module
Aug 13 07:16:01.238016 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:16:01.270905 systemd-networkd[787]: lo: Link UP
Aug 13 07:16:01.270918 systemd-networkd[787]: lo: Gained carrier
Aug 13 07:16:01.273238 systemd-networkd[787]: Enumeration completed
Aug 13 07:16:01.273469 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:16:01.273850 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:16:01.273856 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:16:01.274272 systemd[1]: Reached target network.target - Network.
Aug 13 07:16:01.275223 systemd-networkd[787]: eth0: Link UP
Aug 13 07:16:01.275228 systemd-networkd[787]: eth0: Gained carrier
Aug 13 07:16:01.275237 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:16:01.291436 ignition[697]: parsing config with SHA512: 0192c3706ffad0a9e4e40b5753d3fed496e8bf9c18b832f590bee9873dc50347c9d889002cbbb0c7c9013c5a7848039d97bba8781c084bd7cb19638a761df135
Aug 13 07:16:01.295903 unknown[697]: fetched base config from "system"
Aug 13 07:16:01.295921 unknown[697]: fetched user config from "qemu"
Aug 13 07:16:01.297863 ignition[697]: fetch-offline: fetch-offline passed
Aug 13 07:16:01.298701 ignition[697]: Ignition finished successfully
Aug 13 07:16:01.297877 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 07:16:01.302704 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:16:01.304148 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 07:16:01.313935 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:16:01.331693 ignition[791]: Ignition 2.19.0
Aug 13 07:16:01.331703 ignition[791]: Stage: kargs
Aug 13 07:16:01.331930 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:16:01.331944 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:16:01.332869 ignition[791]: kargs: kargs passed
Aug 13 07:16:01.332923 ignition[791]: Ignition finished successfully
Aug 13 07:16:01.336809 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:16:01.347955 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:16:01.365981 ignition[800]: Ignition 2.19.0
Aug 13 07:16:01.365993 ignition[800]: Stage: disks
Aug 13 07:16:01.366228 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:16:01.366245 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:16:01.369741 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:16:01.367375 ignition[800]: disks: disks passed
Aug 13 07:16:01.371494 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:16:01.367435 ignition[800]: Ignition finished successfully
Aug 13 07:16:01.373730 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:16:01.375155 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:16:01.377045 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:16:01.379264 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:16:01.391959 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:16:01.406414 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 07:16:01.413141 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:16:01.420940 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:16:01.582829 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:16:01.583383 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:16:01.584865 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:16:01.594901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:16:01.596985 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:16:01.597415 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 07:16:01.597466 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:16:01.605650 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818)
Aug 13 07:16:01.605680 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:16:01.597493 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:16:01.610870 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:16:01.610914 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:16:01.610928 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:16:01.607176 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:16:01.613750 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:16:01.617569 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:16:01.658545 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:16:01.663335 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:16:01.667678 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:16:01.673159 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:16:01.824300 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:16:01.836935 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:16:01.840141 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:16:01.849814 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:16:01.901601 ignition[931]: INFO : Ignition 2.19.0
Aug 13 07:16:01.901601 ignition[931]: INFO : Stage: mount
Aug 13 07:16:01.903524 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:16:01.903524 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:16:01.903524 ignition[931]: INFO : mount: mount passed
Aug 13 07:16:01.903524 ignition[931]: INFO : Ignition finished successfully
Aug 13 07:16:01.908118 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:16:01.963106 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:16:01.965322 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:16:02.071154 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:16:02.087967 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:16:02.095828 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Aug 13 07:16:02.098201 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:16:02.098222 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:16:02.098233 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:16:02.101808 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:16:02.103765 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:16:02.130764 ignition[961]: INFO : Ignition 2.19.0
Aug 13 07:16:02.130764 ignition[961]: INFO : Stage: files
Aug 13 07:16:02.132647 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:16:02.132647 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:16:02.132647 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:16:02.136191 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:16:02.136191 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:16:02.139322 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:16:02.139322 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:16:02.142089 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:16:02.142089 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 07:16:02.142089 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 07:16:02.142089 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:16:02.142089 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 07:16:02.139710 unknown[961]: wrote ssh authorized keys file for user: core
Aug 13 07:16:02.198565 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 07:16:02.278717 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:16:02.280986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:16:02.280986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:16:02.280986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:16:02.280986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:16:02.280986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:16:02.280986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:16:02.280986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:16:02.280986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:16:02.280986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:16:02.280986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:16:02.280986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:16:02.280986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:16:02.280986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:16:02.280986 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 07:16:02.719196 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 07:16:03.121991 systemd-networkd[787]: eth0: Gained IPv6LL
Aug 13 07:16:03.651350 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:16:03.651350 ignition[961]: INFO : files: op(c): [started] processing unit "containerd.service"
Aug 13 07:16:03.655064 ignition[961]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 07:16:03.657840 ignition[961]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 07:16:03.657840 ignition[961]: INFO : files: op(c): [finished] processing unit "containerd.service"
Aug 13 07:16:03.661493 ignition[961]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Aug 13 07:16:03.661493 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:16:03.664564 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:16:03.664564 ignition[961]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Aug 13 07:16:03.664564 ignition[961]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Aug 13 07:16:03.664564 ignition[961]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 07:16:03.670910 ignition[961]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 07:16:03.670910 ignition[961]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Aug 13 07:16:03.670910 ignition[961]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 07:16:03.694047 ignition[961]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 07:16:03.702266 ignition[961]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 07:16:03.704034 ignition[961]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 07:16:03.704034 ignition[961]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:16:03.706704 ignition[961]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:16:03.708160 ignition[961]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:16:03.709915 ignition[961]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:16:03.711519 ignition[961]: INFO : files: files passed
Aug 13 07:16:03.712238 ignition[961]: INFO : Ignition finished successfully
Aug 13 07:16:03.714929 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:16:03.729950 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:16:03.732115 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:16:03.734187 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:16:03.734298 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:16:03.743171 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 13 07:16:03.745981 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:16:03.747654 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:16:03.749163 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:16:03.748562 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:16:03.750993 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:16:03.765008 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:16:03.791354 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:16:03.791491 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:16:03.793806 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:16:03.865046 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:16:03.868102 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:16:03.879986 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:16:03.895068 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:16:03.906997 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:16:03.917683 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:16:03.919123 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:16:03.921611 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:16:03.923711 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:16:03.923907 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:16:03.926231 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:16:03.927841 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:16:03.929936 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:16:03.931994 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:16:03.934031 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:16:03.936458 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:16:03.938515 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:16:03.940828 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:16:03.942797 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:16:03.944905 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:16:03.946611 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:16:03.946856 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:16:03.948997 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:16:03.950375 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:16:03.952363 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:16:03.952493 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:16:03.954506 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:16:03.954662 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:16:03.956892 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:16:03.957048 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:16:03.958772 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:16:03.960469 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:16:03.963845 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:16:03.965995 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:16:03.967892 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:16:03.969623 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:16:03.969741 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:16:03.971551 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:16:03.971681 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:16:03.973955 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:16:03.974112 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:16:03.975887 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:16:03.976026 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:16:03.985991 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:16:03.987121 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:16:03.987251 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:16:03.991061 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:16:03.992140 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:16:03.992373 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:16:03.994584 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:16:03.994700 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:16:04.000512 ignition[1016]: INFO : Ignition 2.19.0
Aug 13 07:16:04.000512 ignition[1016]: INFO : Stage: umount
Aug 13 07:16:04.000512 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:16:04.000512 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:16:04.008379 ignition[1016]: INFO : umount: umount passed
Aug 13 07:16:04.008379 ignition[1016]: INFO : Ignition finished successfully
Aug 13 07:16:04.003071 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:16:04.003192 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:16:04.004609 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:16:04.004719 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:16:04.007515 systemd[1]: Stopped target network.target - Network.
Aug 13 07:16:04.008390 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:16:04.008456 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:16:04.010285 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:16:04.010343 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:16:04.012081 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:16:04.012134 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:16:04.014546 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:16:04.014623 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:16:04.016846 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:16:04.018723 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:16:04.021884 systemd-networkd[787]: eth0: DHCPv6 lease lost
Aug 13 07:16:04.022207 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:16:04.022361 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:16:04.027059 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:16:04.027553 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:16:04.027718 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:16:04.030502 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:16:04.030558 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:16:04.041910 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:16:04.043373 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:16:04.043432 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:16:04.045910 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:16:04.045965 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:16:04.047857 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:16:04.047907 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:16:04.049984 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:16:04.050041 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:16:04.051493 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:16:04.064178 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:16:04.064315 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:16:04.070757 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:16:04.071837 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:16:04.074555 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:16:04.075517 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:16:04.077560 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:16:04.078507 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:16:04.080534 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:16:04.081428 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:16:04.083506 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:16:04.084446 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:16:04.086455 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:16:04.087405 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:16:04.098931 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:16:04.101081 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:16:04.102094 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:16:04.104485 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:16:04.105468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:16:04.108923 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:16:04.110047 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:16:04.446894 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:16:04.447883 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:16:04.450250 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:16:04.452281 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:16:04.453219 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:16:04.469217 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:16:04.476199 systemd[1]: Switching root.
Aug 13 07:16:04.521258 systemd-journald[190]: Journal stopped
Aug 13 07:16:06.168016 systemd-journald[190]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:16:06.168099 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:16:06.168119 kernel: SELinux: policy capability open_perms=1
Aug 13 07:16:06.168141 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:16:06.168153 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:16:06.168171 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:16:06.168184 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:16:06.168195 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:16:06.168207 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:16:06.168218 kernel: audit: type=1403 audit(1755069365.295:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:16:06.168237 systemd[1]: Successfully loaded SELinux policy in 41.985ms.
Aug 13 07:16:06.168257 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.183ms.
Aug 13 07:16:06.168274 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:16:06.168287 systemd[1]: Detected virtualization kvm.
Aug 13 07:16:06.168305 systemd[1]: Detected architecture x86-64.
Aug 13 07:16:06.168317 systemd[1]: Detected first boot.
Aug 13 07:16:06.168329 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:16:06.168341 zram_generator::config[1081]: No configuration found.
Aug 13 07:16:06.168354 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:16:06.168365 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:16:06.168377 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 13 07:16:06.168390 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:16:06.168408 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:16:06.168426 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:16:06.168441 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:16:06.168454 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:16:06.168466 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:16:06.168483 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:16:06.168495 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:16:06.168508 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:16:06.168520 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:16:06.168538 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:16:06.168551 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:16:06.168563 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:16:06.168576 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:16:06.168588 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:16:06.168600 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:16:06.168612 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:16:06.168624 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:16:06.168636 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:16:06.168654 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:16:06.168666 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:16:06.168681 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:16:06.168693 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:16:06.168705 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:16:06.168721 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:16:06.168733 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:16:06.168753 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:16:06.168771 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:16:06.168880 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:16:06.168895 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:16:06.168908 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:16:06.168958 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:16:06.168971 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:16:06.168983 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:16:06.168996 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:16:06.169008 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:16:06.169027 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:16:06.169040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:16:06.169052 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:16:06.169065 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:16:06.169077 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:16:06.169089 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:16:06.169100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:16:06.169112 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:16:06.169130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:16:06.169143 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:16:06.169159 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Aug 13 07:16:06.169172 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Aug 13 07:16:06.169184 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:16:06.169196 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:16:06.169208 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:16:06.169220 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:16:06.169238 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:16:06.169251 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:16:06.169263 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:16:06.169275 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:16:06.169288 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:16:06.169300 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:16:06.169311 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:16:06.169323 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:16:06.169334 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:16:06.169352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:16:06.169364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:16:06.169378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:16:06.169390 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:16:06.169403 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:16:06.169421 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:16:06.169437 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:16:06.169472 systemd-journald[1158]: Collecting audit messages is disabled.
Aug 13 07:16:06.169501 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:16:06.169513 kernel: fuse: init (API version 7.39)
Aug 13 07:16:06.169525 systemd-journald[1158]: Journal started
Aug 13 07:16:06.169553 systemd-journald[1158]: Runtime Journal (/run/log/journal/1f2d4db0b5df47c7885ed2af342735cf) is 6.0M, max 48.3M, 42.2M free.
Aug 13 07:16:06.174162 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:16:06.179822 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:16:06.180295 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:16:06.180538 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:16:06.181368 systemd-tmpfiles[1163]: ACLs are not supported, ignoring.
Aug 13 07:16:06.181383 systemd-tmpfiles[1163]: ACLs are not supported, ignoring.
Aug 13 07:16:06.185042 kernel: loop: module loaded
Aug 13 07:16:06.182204 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:16:06.183886 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:16:06.185841 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:16:06.187552 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:16:06.187774 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:16:06.189339 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:16:06.206097 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:16:06.216925 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:16:06.218735 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:16:06.225929 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:16:06.229398 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:16:06.231908 kernel: ACPI: bus type drm_connector registered
Aug 13 07:16:06.232272 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:16:06.234975 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:16:06.236166 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:16:06.240962 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:16:06.247903 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:16:06.249582 systemd-journald[1158]: Time spent on flushing to /var/log/journal/1f2d4db0b5df47c7885ed2af342735cf is 13.387ms for 984 entries.
Aug 13 07:16:06.249582 systemd-journald[1158]: System Journal (/var/log/journal/1f2d4db0b5df47c7885ed2af342735cf) is 8.0M, max 195.6M, 187.6M free.
Aug 13 07:16:06.282978 systemd-journald[1158]: Received client request to flush runtime journal.
Aug 13 07:16:06.249544 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:16:06.250081 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:16:06.252150 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:16:06.267081 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:16:06.273263 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:16:06.276097 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:16:06.279284 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:16:06.291222 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:16:06.294551 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:16:06.296274 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:16:06.303939 udevadm[1233]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 13 07:16:06.309246 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:16:06.316954 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:16:06.359799 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Aug 13 07:16:06.359824 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Aug 13 07:16:06.366365 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:16:06.903222 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:16:06.916152 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:16:06.942943 systemd-udevd[1246]: Using default interface naming scheme 'v255'.
Aug 13 07:16:06.961323 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:16:06.972013 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:16:06.987016 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:16:07.001192 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Aug 13 07:16:07.013842 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1250)
Aug 13 07:16:07.039007 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:16:07.057947 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:16:07.080830 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 13 07:16:07.086808 kernel: ACPI: button: Power Button [PWRF]
Aug 13 07:16:07.106827 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Aug 13 07:16:07.107031 systemd-networkd[1251]: lo: Link UP
Aug 13 07:16:07.107037 systemd-networkd[1251]: lo: Gained carrier
Aug 13 07:16:07.108898 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 13 07:16:07.123899 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 13 07:16:07.108860 systemd-networkd[1251]: Enumeration completed
Aug 13 07:16:07.108978 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:16:07.111895 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:16:07.111900 systemd-networkd[1251]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:16:07.112607 systemd-networkd[1251]: eth0: Link UP
Aug 13 07:16:07.112612 systemd-networkd[1251]: eth0: Gained carrier
Aug 13 07:16:07.112624 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:16:07.123218 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 07:16:07.127427 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Aug 13 07:16:07.127834 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 07:16:07.128108 systemd-networkd[1251]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 07:16:07.144501 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:16:07.152032 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:16:07.210898 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:16:07.258024 kernel: kvm_amd: TSC scaling supported
Aug 13 07:16:07.258174 kernel: kvm_amd: Nested Virtualization enabled
Aug 13 07:16:07.258200 kernel: kvm_amd: Nested Paging enabled
Aug 13 07:16:07.258223 kernel: kvm_amd: LBR virtualization supported
Aug 13 07:16:07.259064 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Aug 13 07:16:07.259093 kernel: kvm_amd: Virtual GIF supported
Aug 13 07:16:07.281816 kernel: EDAC MC: Ver: 3.0.0
Aug 13 07:16:07.307087 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:16:07.321063 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:16:07.331810 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:16:07.362652 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:16:07.364403 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:16:07.377970 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:16:07.383831 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:16:07.420916 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 07:16:07.422491 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:16:07.423753 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 07:16:07.423794 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:16:07.424849 systemd[1]: Reached target machines.target - Containers. Aug 13 07:16:07.427048 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 07:16:07.449206 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 07:16:07.452801 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 07:16:07.454194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:16:07.455731 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 07:16:07.458806 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 07:16:07.462021 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 07:16:07.464336 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 07:16:07.482827 kernel: loop0: detected capacity change from 0 to 221472 Aug 13 07:16:07.486628 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 07:16:07.494896 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 07:16:07.496099 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Aug 13 07:16:07.507840 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 07:16:07.546821 kernel: loop1: detected capacity change from 0 to 140768 Aug 13 07:16:07.590860 kernel: loop2: detected capacity change from 0 to 142488 Aug 13 07:16:07.623816 kernel: loop3: detected capacity change from 0 to 221472 Aug 13 07:16:07.633834 kernel: loop4: detected capacity change from 0 to 140768 Aug 13 07:16:07.645812 kernel: loop5: detected capacity change from 0 to 142488 Aug 13 07:16:07.657487 (sd-merge)[1316]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 13 07:16:07.658338 (sd-merge)[1316]: Merged extensions into '/usr'. Aug 13 07:16:07.662961 systemd[1]: Reloading requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 07:16:07.662977 systemd[1]: Reloading... Aug 13 07:16:07.715849 zram_generator::config[1347]: No configuration found. Aug 13 07:16:07.736576 ldconfig[1301]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 07:16:07.850299 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:16:07.918118 systemd[1]: Reloading finished in 254 ms. Aug 13 07:16:07.941553 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 07:16:07.943142 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 07:16:07.955194 systemd[1]: Starting ensure-sysext.service... Aug 13 07:16:07.958078 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:16:07.963103 systemd[1]: Reloading requested from client PID 1388 ('systemctl') (unit ensure-sysext.service)... Aug 13 07:16:07.963127 systemd[1]: Reloading... 
Aug 13 07:16:07.984165 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 07:16:07.984565 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 07:16:07.985654 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 07:16:07.986025 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Aug 13 07:16:07.986118 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Aug 13 07:16:07.990173 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:16:07.990187 systemd-tmpfiles[1389]: Skipping /boot Aug 13 07:16:08.006180 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:16:08.006198 systemd-tmpfiles[1389]: Skipping /boot Aug 13 07:16:08.024833 zram_generator::config[1418]: No configuration found. Aug 13 07:16:08.167667 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:16:08.245326 systemd[1]: Reloading finished in 281 ms. Aug 13 07:16:08.265588 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:16:08.282801 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:16:08.286124 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 07:16:08.289373 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 07:16:08.296145 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:16:08.300751 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Aug 13 07:16:08.308280 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:16:08.308465 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:16:08.311136 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:16:08.317117 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:16:08.322274 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:16:08.325956 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:16:08.326069 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:16:08.329310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:16:08.329579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:16:08.331742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:16:08.331991 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:16:08.339509 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 07:16:08.342538 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:16:08.343015 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:16:08.347317 augenrules[1492]: No rules Aug 13 07:16:08.352142 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:16:08.374476 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 07:16:08.382772 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 07:16:08.383133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:16:08.391047 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:16:08.394023 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:16:08.399307 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:16:08.403553 systemd-resolved[1466]: Positive Trust Anchors: Aug 13 07:16:08.404069 systemd-resolved[1466]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:16:08.404149 systemd-resolved[1466]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:16:08.406028 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:16:08.407436 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:16:08.409149 systemd-resolved[1466]: Defaulting to hostname 'linux'. Aug 13 07:16:08.411805 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 07:16:08.413015 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:16:08.414567 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Aug 13 07:16:08.416766 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 07:16:08.418593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:16:08.418863 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:16:08.420672 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:16:08.420924 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:16:08.422452 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:16:08.422673 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:16:08.424361 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:16:08.424597 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:16:08.426405 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 07:16:08.431569 systemd[1]: Finished ensure-sysext.service. Aug 13 07:16:08.438272 systemd[1]: Reached target network.target - Network. Aug 13 07:16:08.439241 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:16:08.440441 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:16:08.440518 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:16:08.456014 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 07:16:08.457176 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:16:08.522649 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Aug 13 07:16:08.965885 systemd-resolved[1466]: Clock change detected. Flushing caches. Aug 13 07:16:08.965927 systemd-timesyncd[1524]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 07:16:08.965975 systemd-timesyncd[1524]: Initial clock synchronization to Wed 2025-08-13 07:16:08.965801 UTC. Aug 13 07:16:08.967181 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:16:08.968403 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 07:16:08.969689 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 07:16:08.970954 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 07:16:08.972238 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 07:16:08.972269 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:16:08.973178 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 07:16:08.974443 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 07:16:08.975666 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 07:16:08.976900 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:16:08.978787 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 07:16:08.981956 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 07:16:08.984496 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 07:16:08.990752 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 07:16:08.994387 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:16:08.995361 systemd[1]: Reached target basic.target - Basic System. 
Aug 13 07:16:08.996480 systemd[1]: System is tainted: cgroupsv1 Aug 13 07:16:08.996522 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:16:08.996546 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:16:08.997881 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 07:16:09.000151 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 07:16:09.002315 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 07:16:09.005193 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 07:16:09.007449 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 07:16:09.016557 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:16:09.021445 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 07:16:09.022942 jq[1530]: false Aug 13 07:16:09.023562 dbus-daemon[1529]: [system] SELinux support is enabled Aug 13 07:16:09.030565 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Aug 13 07:16:09.032995 extend-filesystems[1531]: Found loop3 Aug 13 07:16:09.032995 extend-filesystems[1531]: Found loop4 Aug 13 07:16:09.032995 extend-filesystems[1531]: Found loop5 Aug 13 07:16:09.032995 extend-filesystems[1531]: Found sr0 Aug 13 07:16:09.032995 extend-filesystems[1531]: Found vda Aug 13 07:16:09.032995 extend-filesystems[1531]: Found vda1 Aug 13 07:16:09.032995 extend-filesystems[1531]: Found vda2 Aug 13 07:16:09.032995 extend-filesystems[1531]: Found vda3 Aug 13 07:16:09.032995 extend-filesystems[1531]: Found usr Aug 13 07:16:09.032995 extend-filesystems[1531]: Found vda4 Aug 13 07:16:09.032995 extend-filesystems[1531]: Found vda6 Aug 13 07:16:09.032995 extend-filesystems[1531]: Found vda7 Aug 13 07:16:09.032995 extend-filesystems[1531]: Found vda9 Aug 13 07:16:09.032995 extend-filesystems[1531]: Checking size of /dev/vda9 Aug 13 07:16:09.034829 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:16:09.052484 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 07:16:09.053937 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 07:16:09.055380 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:16:09.060437 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 07:16:09.073244 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:16:09.077577 extend-filesystems[1531]: Resized partition /dev/vda9 Aug 13 07:16:09.078679 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:16:09.078959 jq[1553]: true Aug 13 07:16:09.079127 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:16:09.079602 systemd[1]: motdgen.service: Deactivated successfully. 
Aug 13 07:16:09.080013 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 07:16:09.083433 extend-filesystems[1557]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:16:09.085442 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:16:09.085783 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 07:16:09.090362 update_engine[1550]: I20250813 07:16:09.088544 1550 main.cc:92] Flatcar Update Engine starting Aug 13 07:16:09.098737 jq[1561]: true Aug 13 07:16:09.102787 update_engine[1550]: I20250813 07:16:09.102730 1550 update_check_scheduler.cc:74] Next update check in 7m55s Aug 13 07:16:09.108548 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1263) Aug 13 07:16:09.119684 (ntainerd)[1565]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:16:09.132487 tar[1560]: linux-amd64/helm Aug 13 07:16:09.132842 systemd-logind[1548]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 07:16:09.132868 systemd-logind[1548]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:16:09.134632 systemd-logind[1548]: New seat seat0. Aug 13 07:16:09.136915 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 07:16:09.137633 sshd_keygen[1559]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:16:09.144624 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 07:16:09.150202 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:16:09.230623 systemd-networkd[1251]: eth0: Gained IPv6LL Aug 13 07:16:09.292684 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Aug 13 07:16:09.293118 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:16:09.317974 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:16:09.318115 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:16:09.320488 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:16:09.325661 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:16:09.333523 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:16:09.335576 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:16:09.339479 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:16:09.346604 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 13 07:16:09.349608 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:16:09.352775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:16:09.357654 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:16:09.367124 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:16:09.367546 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:16:09.384291 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:16:09.395842 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 13 07:16:09.396481 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 13 07:16:09.399025 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Aug 13 07:16:09.416171 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:16:09.430011 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:16:09.430332 locksmithd[1598]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:16:09.446856 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:16:09.448302 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:16:09.501080 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:16:09.550995 tar[1560]: linux-amd64/LICENSE Aug 13 07:16:09.551127 tar[1560]: linux-amd64/README.md Aug 13 07:16:09.835387 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 07:16:10.085063 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 07:16:10.085063 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 07:16:10.085063 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 13 07:16:10.085451 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:16:10.098474 extend-filesystems[1531]: Resized filesystem in /dev/vda9 Aug 13 07:16:10.088978 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:16:10.089354 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:16:10.292587 bash[1620]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:16:10.295229 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:16:10.301908 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Aug 13 07:16:10.515438 containerd[1565]: time="2025-08-13T07:16:10.514727489Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:16:10.540980 containerd[1565]: time="2025-08-13T07:16:10.540895404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:10.543262 containerd[1565]: time="2025-08-13T07:16:10.543211366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:10.543262 containerd[1565]: time="2025-08-13T07:16:10.543240481Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:16:10.543262 containerd[1565]: time="2025-08-13T07:16:10.543260017Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:16:10.543559 containerd[1565]: time="2025-08-13T07:16:10.543532448Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:16:10.543600 containerd[1565]: time="2025-08-13T07:16:10.543559639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:10.543687 containerd[1565]: time="2025-08-13T07:16:10.543662442Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:10.543687 containerd[1565]: time="2025-08-13T07:16:10.543684493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 07:16:10.544095 containerd[1565]: time="2025-08-13T07:16:10.544060027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:10.544095 containerd[1565]: time="2025-08-13T07:16:10.544081698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:10.544163 containerd[1565]: time="2025-08-13T07:16:10.544099431Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:10.544163 containerd[1565]: time="2025-08-13T07:16:10.544113177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:10.544298 containerd[1565]: time="2025-08-13T07:16:10.544273447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:10.544635 containerd[1565]: time="2025-08-13T07:16:10.544600701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:10.544839 containerd[1565]: time="2025-08-13T07:16:10.544808560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:10.544839 containerd[1565]: time="2025-08-13T07:16:10.544828969Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Aug 13 07:16:10.545003 containerd[1565]: time="2025-08-13T07:16:10.544979671Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:16:10.545092 containerd[1565]: time="2025-08-13T07:16:10.545071633Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:16:10.847053 containerd[1565]: time="2025-08-13T07:16:10.846932074Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:16:10.847053 containerd[1565]: time="2025-08-13T07:16:10.847005401Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:16:10.847053 containerd[1565]: time="2025-08-13T07:16:10.847022443Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:16:10.847053 containerd[1565]: time="2025-08-13T07:16:10.847049664Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:16:10.847209 containerd[1565]: time="2025-08-13T07:16:10.847081544Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:16:10.847351 containerd[1565]: time="2025-08-13T07:16:10.847312667Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:16:10.866952 containerd[1565]: time="2025-08-13T07:16:10.866867846Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:16:10.867405 containerd[1565]: time="2025-08-13T07:16:10.867372773Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:16:10.867479 containerd[1565]: time="2025-08-13T07:16:10.867409722Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Aug 13 07:16:10.867479 containerd[1565]: time="2025-08-13T07:16:10.867435090Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:16:10.867538 containerd[1565]: time="2025-08-13T07:16:10.867514709Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:16:10.867838 containerd[1565]: time="2025-08-13T07:16:10.867542451Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:16:10.867838 containerd[1565]: time="2025-08-13T07:16:10.867561657Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:16:10.867838 containerd[1565]: time="2025-08-13T07:16:10.867579991Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:16:10.867838 containerd[1565]: time="2025-08-13T07:16:10.867598947Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:16:10.867838 containerd[1565]: time="2025-08-13T07:16:10.867622551Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:16:10.867838 containerd[1565]: time="2025-08-13T07:16:10.867643480Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:16:10.867838 containerd[1565]: time="2025-08-13T07:16:10.867666934Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:16:10.867838 containerd[1565]: time="2025-08-13T07:16:10.867701369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1
Aug 13 07:16:10.867838 containerd[1565]: time="2025-08-13T07:16:10.867731916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.867838 containerd[1565]: time="2025-08-13T07:16:10.867749389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.867838 containerd[1565]: time="2025-08-13T07:16:10.867771160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.867838 containerd[1565]: time="2025-08-13T07:16:10.867791899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.867838 containerd[1565]: time="2025-08-13T07:16:10.867828207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.868095 containerd[1565]: time="2025-08-13T07:16:10.867856880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.868095 containerd[1565]: time="2025-08-13T07:16:10.867880445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.868095 containerd[1565]: time="2025-08-13T07:16:10.867907926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.868095 containerd[1565]: time="2025-08-13T07:16:10.867929306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.868095 containerd[1565]: time="2025-08-13T07:16:10.867950987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.868095 containerd[1565]: time="2025-08-13T07:16:10.867971946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.868095 containerd[1565]: time="2025-08-13T07:16:10.867999007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.868095 containerd[1565]: time="2025-08-13T07:16:10.868040785Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 07:16:10.868095 containerd[1565]: time="2025-08-13T07:16:10.868074368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.868095 containerd[1565]: time="2025-08-13T07:16:10.868099185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.868282 containerd[1565]: time="2025-08-13T07:16:10.868122078Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 07:16:10.868282 containerd[1565]: time="2025-08-13T07:16:10.868203360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 07:16:10.868282 containerd[1565]: time="2025-08-13T07:16:10.868231092Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 07:16:10.868282 containerd[1565]: time="2025-08-13T07:16:10.868250929Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 07:16:10.868282 containerd[1565]: time="2025-08-13T07:16:10.868272279Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 07:16:10.868402 containerd[1565]: time="2025-08-13T07:16:10.868296424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.868402 containerd[1565]: time="2025-08-13T07:16:10.868316272Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 07:16:10.868402 containerd[1565]: time="2025-08-13T07:16:10.868334846Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 07:16:10.868466 containerd[1565]: time="2025-08-13T07:16:10.868401842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 07:16:10.869096 containerd[1565]: time="2025-08-13T07:16:10.869016124Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 07:16:10.869096 containerd[1565]: time="2025-08-13T07:16:10.869109188Z" level=info msg="Connect containerd service"
Aug 13 07:16:10.869442 containerd[1565]: time="2025-08-13T07:16:10.869164963Z" level=info msg="using legacy CRI server"
Aug 13 07:16:10.869442 containerd[1565]: time="2025-08-13T07:16:10.869192194Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 07:16:10.869500 containerd[1565]: time="2025-08-13T07:16:10.869427294Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 07:16:10.870584 containerd[1565]: time="2025-08-13T07:16:10.870554808Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 07:16:10.870756 containerd[1565]: time="2025-08-13T07:16:10.870710560Z" level=info msg="Start subscribing containerd event"
Aug 13 07:16:10.870781 containerd[1565]: time="2025-08-13T07:16:10.870772366Z" level=info msg="Start recovering state"
Aug 13 07:16:10.872008 containerd[1565]: time="2025-08-13T07:16:10.871972816Z" level=info msg="Start event monitor"
Aug 13 07:16:10.872036 containerd[1565]: time="2025-08-13T07:16:10.872020155Z" level=info msg="Start snapshots syncer"
Aug 13 07:16:10.872355 containerd[1565]: time="2025-08-13T07:16:10.872038790Z" level=info msg="Start cni network conf syncer for default"
Aug 13 07:16:10.872355 containerd[1565]: time="2025-08-13T07:16:10.872052416Z" level=info msg="Start streaming server"
Aug 13 07:16:10.872535 containerd[1565]: time="2025-08-13T07:16:10.872488683Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 07:16:10.872583 containerd[1565]: time="2025-08-13T07:16:10.872558995Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 07:16:10.872660 containerd[1565]: time="2025-08-13T07:16:10.872635238Z" level=info msg="containerd successfully booted in 0.359529s"
Aug 13 07:16:10.872886 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 07:16:11.685772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:16:11.687727 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 07:16:11.689053 systemd[1]: Startup finished in 7.656s (kernel) + 6.022s (userspace) = 13.678s.
Aug 13 07:16:11.693362 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:16:12.048568 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 07:16:12.081806 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:45580.service - OpenSSH per-connection server daemon (10.0.0.1:45580).
Aug 13 07:16:12.124735 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 45580 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:16:12.127173 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:16:12.136850 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 07:16:12.169796 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 07:16:12.172541 systemd-logind[1548]: New session 1 of user core.
Aug 13 07:16:12.188069 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 07:16:12.203655 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 07:16:12.207377 (systemd)[1681]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 07:16:12.443311 systemd[1681]: Queued start job for default target default.target.
Aug 13 07:16:12.443776 systemd[1681]: Created slice app.slice - User Application Slice.
Aug 13 07:16:12.443794 systemd[1681]: Reached target paths.target - Paths.
Aug 13 07:16:12.443807 systemd[1681]: Reached target timers.target - Timers.
Aug 13 07:16:12.458478 systemd[1681]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 07:16:12.466638 systemd[1681]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 07:16:12.466715 systemd[1681]: Reached target sockets.target - Sockets.
Aug 13 07:16:12.466729 systemd[1681]: Reached target basic.target - Basic System.
Aug 13 07:16:12.466770 systemd[1681]: Reached target default.target - Main User Target.
Aug 13 07:16:12.466805 systemd[1681]: Startup finished in 250ms.
Aug 13 07:16:12.467842 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 07:16:12.471238 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 07:16:12.528568 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:45582.service - OpenSSH per-connection server daemon (10.0.0.1:45582).
Aug 13 07:16:12.553520 kubelet[1665]: E0813 07:16:12.553446 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:16:12.559155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:16:12.559774 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:16:12.577790 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 45582 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:16:12.579782 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:16:12.585475 systemd-logind[1548]: New session 2 of user core.
Aug 13 07:16:12.603748 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 07:16:12.660895 sshd[1694]: pam_unix(sshd:session): session closed for user core
Aug 13 07:16:12.673699 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:45598.service - OpenSSH per-connection server daemon (10.0.0.1:45598).
Aug 13 07:16:12.674613 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:45582.service: Deactivated successfully.
Aug 13 07:16:12.678130 systemd-logind[1548]: Session 2 logged out. Waiting for processes to exit.
Aug 13 07:16:12.679607 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 07:16:12.681811 systemd-logind[1548]: Removed session 2.
Aug 13 07:16:12.710892 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 45598 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:16:12.712988 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:16:12.719250 systemd-logind[1548]: New session 3 of user core.
Aug 13 07:16:12.735052 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 07:16:12.790042 sshd[1701]: pam_unix(sshd:session): session closed for user core
Aug 13 07:16:12.799577 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:45612.service - OpenSSH per-connection server daemon (10.0.0.1:45612).
Aug 13 07:16:12.800051 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:45598.service: Deactivated successfully.
Aug 13 07:16:12.802558 systemd-logind[1548]: Session 3 logged out. Waiting for processes to exit.
Aug 13 07:16:12.803371 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 07:16:12.805038 systemd-logind[1548]: Removed session 3.
Aug 13 07:16:12.874451 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 45612 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:16:12.876728 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:16:12.882082 systemd-logind[1548]: New session 4 of user core.
Aug 13 07:16:12.891751 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 07:16:12.948165 sshd[1709]: pam_unix(sshd:session): session closed for user core
Aug 13 07:16:12.971668 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:45618.service - OpenSSH per-connection server daemon (10.0.0.1:45618).
Aug 13 07:16:12.972658 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:45612.service: Deactivated successfully.
Aug 13 07:16:12.975302 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 07:16:12.976124 systemd-logind[1548]: Session 4 logged out. Waiting for processes to exit.
Aug 13 07:16:12.977866 systemd-logind[1548]: Removed session 4.
Aug 13 07:16:13.003762 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 45618 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:16:13.005988 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:16:13.011944 systemd-logind[1548]: New session 5 of user core.
Aug 13 07:16:13.021809 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 07:16:13.083295 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 07:16:13.083730 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:16:13.107726 sudo[1724]: pam_unix(sudo:session): session closed for user root
Aug 13 07:16:13.109902 sshd[1718]: pam_unix(sshd:session): session closed for user core
Aug 13 07:16:13.119572 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:45626.service - OpenSSH per-connection server daemon (10.0.0.1:45626).
Aug 13 07:16:13.120034 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:45618.service: Deactivated successfully.
Aug 13 07:16:13.122246 systemd-logind[1548]: Session 5 logged out. Waiting for processes to exit.
Aug 13 07:16:13.123832 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 07:16:13.124617 systemd-logind[1548]: Removed session 5.
Aug 13 07:16:13.151264 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 45626 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:16:13.153230 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:16:13.157786 systemd-logind[1548]: New session 6 of user core.
Aug 13 07:16:13.164690 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 07:16:13.224788 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 07:16:13.225497 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:16:13.231012 sudo[1734]: pam_unix(sudo:session): session closed for user root
Aug 13 07:16:13.239862 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 07:16:13.240583 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:16:13.264568 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 13 07:16:13.266867 auditctl[1737]: No rules
Aug 13 07:16:13.268393 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 07:16:13.268757 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 13 07:16:13.271598 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:16:13.318972 augenrules[1756]: No rules
Aug 13 07:16:13.321629 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:16:13.324444 sudo[1733]: pam_unix(sudo:session): session closed for user root
Aug 13 07:16:13.327131 sshd[1726]: pam_unix(sshd:session): session closed for user core
Aug 13 07:16:13.343676 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:45636.service - OpenSSH per-connection server daemon (10.0.0.1:45636).
Aug 13 07:16:13.344454 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:45626.service: Deactivated successfully.
Aug 13 07:16:13.347963 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 07:16:13.348085 systemd-logind[1548]: Session 6 logged out. Waiting for processes to exit.
Aug 13 07:16:13.350441 systemd-logind[1548]: Removed session 6.
Aug 13 07:16:13.375693 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 45636 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:16:13.378383 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:16:13.384545 systemd-logind[1548]: New session 7 of user core.
Aug 13 07:16:13.394650 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 07:16:13.451493 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 07:16:13.452145 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:16:14.057613 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 07:16:14.057918 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 07:16:14.684106 dockerd[1788]: time="2025-08-13T07:16:14.683474571Z" level=info msg="Starting up"
Aug 13 07:16:15.615225 dockerd[1788]: time="2025-08-13T07:16:15.615128880Z" level=info msg="Loading containers: start."
Aug 13 07:16:15.735375 kernel: Initializing XFRM netlink socket
Aug 13 07:16:15.876306 systemd-networkd[1251]: docker0: Link UP
Aug 13 07:16:15.900173 dockerd[1788]: time="2025-08-13T07:16:15.900119466Z" level=info msg="Loading containers: done."
Aug 13 07:16:15.922164 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1826726227-merged.mount: Deactivated successfully.
Aug 13 07:16:15.925012 dockerd[1788]: time="2025-08-13T07:16:15.924950515Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 07:16:15.925143 dockerd[1788]: time="2025-08-13T07:16:15.925118790Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Aug 13 07:16:15.925304 dockerd[1788]: time="2025-08-13T07:16:15.925281325Z" level=info msg="Daemon has completed initialization"
Aug 13 07:16:15.965927 dockerd[1788]: time="2025-08-13T07:16:15.965813438Z" level=info msg="API listen on /run/docker.sock"
Aug 13 07:16:15.966112 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 07:16:16.754169 containerd[1565]: time="2025-08-13T07:16:16.754113960Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 07:16:17.471129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3807076664.mount: Deactivated successfully.
Aug 13 07:16:18.522410 containerd[1565]: time="2025-08-13T07:16:18.522354670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:18.523203 containerd[1565]: time="2025-08-13T07:16:18.523120145Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759"
Aug 13 07:16:18.526153 containerd[1565]: time="2025-08-13T07:16:18.526120259Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:18.528867 containerd[1565]: time="2025-08-13T07:16:18.528833616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:18.529885 containerd[1565]: time="2025-08-13T07:16:18.529835905Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.77566602s"
Aug 13 07:16:18.529885 containerd[1565]: time="2025-08-13T07:16:18.529882542Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 07:16:18.530474 containerd[1565]: time="2025-08-13T07:16:18.530452190Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 07:16:20.022217 containerd[1565]: time="2025-08-13T07:16:20.022155982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:20.022898 containerd[1565]: time="2025-08-13T07:16:20.022845494Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245"
Aug 13 07:16:20.023980 containerd[1565]: time="2025-08-13T07:16:20.023936319Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:20.026643 containerd[1565]: time="2025-08-13T07:16:20.026610903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:20.027759 containerd[1565]: time="2025-08-13T07:16:20.027727546Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.497178996s"
Aug 13 07:16:20.027821 containerd[1565]: time="2025-08-13T07:16:20.027764686Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 07:16:20.028305 containerd[1565]: time="2025-08-13T07:16:20.028253683Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 07:16:21.829295 containerd[1565]: time="2025-08-13T07:16:21.829219931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:21.835271 containerd[1565]: time="2025-08-13T07:16:21.835204059Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700"
Aug 13 07:16:21.838440 containerd[1565]: time="2025-08-13T07:16:21.838404399Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:21.842437 containerd[1565]: time="2025-08-13T07:16:21.842373099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:21.843752 containerd[1565]: time="2025-08-13T07:16:21.843705677Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.815415976s"
Aug 13 07:16:21.843752 containerd[1565]: time="2025-08-13T07:16:21.843745411Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 07:16:21.844291 containerd[1565]: time="2025-08-13T07:16:21.844244818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 07:16:22.809668 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:16:22.824638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:16:23.045315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:16:23.052121 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:16:23.565528 kubelet[2011]: E0813 07:16:23.565454 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:16:23.572565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:16:23.572961 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:16:23.922636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746057924.mount: Deactivated successfully.
Aug 13 07:16:25.053003 containerd[1565]: time="2025-08-13T07:16:25.052933944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:25.054174 containerd[1565]: time="2025-08-13T07:16:25.054128092Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612"
Aug 13 07:16:25.055863 containerd[1565]: time="2025-08-13T07:16:25.055813682Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:25.057957 containerd[1565]: time="2025-08-13T07:16:25.057911715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:25.058624 containerd[1565]: time="2025-08-13T07:16:25.058563317Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 3.214270028s"
Aug 13 07:16:25.058624 containerd[1565]: time="2025-08-13T07:16:25.058604895Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Aug 13 07:16:25.059177 containerd[1565]: time="2025-08-13T07:16:25.059145709Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 07:16:25.624480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608583305.mount: Deactivated successfully.
Aug 13 07:16:26.734737 containerd[1565]: time="2025-08-13T07:16:26.734656022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:26.735830 containerd[1565]: time="2025-08-13T07:16:26.735785830Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Aug 13 07:16:26.737057 containerd[1565]: time="2025-08-13T07:16:26.737006909Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:26.740379 containerd[1565]: time="2025-08-13T07:16:26.740313097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:26.741592 containerd[1565]: time="2025-08-13T07:16:26.741546559Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.682370123s"
Aug 13 07:16:26.741659 containerd[1565]: time="2025-08-13T07:16:26.741594599Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 07:16:26.742297 containerd[1565]: time="2025-08-13T07:16:26.742262171Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 07:16:27.307740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377493864.mount: Deactivated successfully.
Aug 13 07:16:27.315666 containerd[1565]: time="2025-08-13T07:16:27.315612558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:27.316462 containerd[1565]: time="2025-08-13T07:16:27.316417287Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Aug 13 07:16:27.317735 containerd[1565]: time="2025-08-13T07:16:27.317697577Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:27.320305 containerd[1565]: time="2025-08-13T07:16:27.320244581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:27.321323 containerd[1565]: time="2025-08-13T07:16:27.321268651Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 578.97426ms"
Aug 13 07:16:27.321323 containerd[1565]: time="2025-08-13T07:16:27.321305821Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 07:16:27.322016 containerd[1565]: time="2025-08-13T07:16:27.321968834Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 07:16:28.047928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158737011.mount: Deactivated successfully.
Aug 13 07:16:31.421827 containerd[1565]: time="2025-08-13T07:16:31.421734670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:31.424183 containerd[1565]: time="2025-08-13T07:16:31.424074958Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Aug 13 07:16:31.425316 containerd[1565]: time="2025-08-13T07:16:31.425281560Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:31.428457 containerd[1565]: time="2025-08-13T07:16:31.428413571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:31.429595 containerd[1565]: time="2025-08-13T07:16:31.429551294Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.107551332s"
Aug 13 07:16:31.429648 containerd[1565]: time="2025-08-13T07:16:31.429594124Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 07:16:33.823116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 07:16:33.837606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:16:33.850870 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 07:16:33.850973 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 07:16:33.851368 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:16:33.858618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:16:33.887937 systemd[1]: Reloading requested from client PID 2178 ('systemctl') (unit session-7.scope)...
Aug 13 07:16:33.887961 systemd[1]: Reloading...
Aug 13 07:16:33.969154 zram_generator::config[2218]: No configuration found.
Aug 13 07:16:34.436947 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:16:34.525873 systemd[1]: Reloading finished in 637 ms.
Aug 13 07:16:34.592581 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 07:16:34.592784 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 07:16:34.593401 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:16:34.599747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:16:34.813333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:16:34.824739 (kubelet)[2277]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:16:34.899736 kubelet[2277]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:16:34.899736 kubelet[2277]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:16:34.899736 kubelet[2277]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:16:34.900236 kubelet[2277]: I0813 07:16:34.899807 2277 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:16:35.304656 kubelet[2277]: I0813 07:16:35.304473 2277 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:16:35.304656 kubelet[2277]: I0813 07:16:35.304517 2277 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:16:35.304861 kubelet[2277]: I0813 07:16:35.304840 2277 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:16:35.327544 kubelet[2277]: E0813 07:16:35.327487 2277 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:35.330984 kubelet[2277]: I0813 
07:16:35.330920 2277 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:16:35.336686 kubelet[2277]: E0813 07:16:35.336651 2277 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:16:35.336686 kubelet[2277]: I0813 07:16:35.336686 2277 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:16:35.345123 kubelet[2277]: I0813 07:16:35.345094 2277 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 07:16:35.346009 kubelet[2277]: I0813 07:16:35.345980 2277 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:16:35.346190 kubelet[2277]: I0813 07:16:35.346148 2277 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:16:35.347077 kubelet[2277]: I0813 07:16:35.346183 2277 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 07:16:35.347077 kubelet[2277]: I0813 07:16:35.346758 2277 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:16:35.347077 kubelet[2277]: I0813 07:16:35.346770 2277 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:16:35.347077 kubelet[2277]: I0813 07:16:35.346923 2277 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:16:35.352810 kubelet[2277]: I0813 07:16:35.352770 2277 kubelet.go:408] "Attempting 
to sync node with API server" Aug 13 07:16:35.352810 kubelet[2277]: I0813 07:16:35.352811 2277 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:16:35.352894 kubelet[2277]: I0813 07:16:35.352866 2277 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:16:35.352921 kubelet[2277]: I0813 07:16:35.352899 2277 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:16:35.355594 kubelet[2277]: I0813 07:16:35.355422 2277 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:16:35.355594 kubelet[2277]: W0813 07:16:35.355429 2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Aug 13 07:16:35.355594 kubelet[2277]: W0813 07:16:35.355424 2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Aug 13 07:16:35.355594 kubelet[2277]: E0813 07:16:35.355507 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:35.355594 kubelet[2277]: E0813 07:16:35.355507 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: 
connection refused" logger="UnhandledError" Aug 13 07:16:35.355897 kubelet[2277]: I0813 07:16:35.355878 2277 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:16:35.356900 kubelet[2277]: W0813 07:16:35.356873 2277 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:16:35.359223 kubelet[2277]: I0813 07:16:35.359016 2277 server.go:1274] "Started kubelet" Aug 13 07:16:35.359990 kubelet[2277]: I0813 07:16:35.359523 2277 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:16:35.359990 kubelet[2277]: I0813 07:16:35.359791 2277 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:16:35.359990 kubelet[2277]: I0813 07:16:35.359856 2277 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:16:35.360539 kubelet[2277]: I0813 07:16:35.360445 2277 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:16:35.363355 kubelet[2277]: I0813 07:16:35.360819 2277 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:16:35.363355 kubelet[2277]: I0813 07:16:35.361573 2277 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:16:35.363355 kubelet[2277]: I0813 07:16:35.363214 2277 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:16:35.363355 kubelet[2277]: I0813 07:16:35.363317 2277 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:16:35.363502 kubelet[2277]: I0813 07:16:35.363407 2277 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:16:35.363756 kubelet[2277]: W0813 07:16:35.363721 2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Aug 13 07:16:35.363795 kubelet[2277]: E0813 07:16:35.363759 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:35.364914 kubelet[2277]: I0813 07:16:35.364891 2277 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:16:35.365103 kubelet[2277]: I0813 07:16:35.365081 2277 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:16:35.365635 kubelet[2277]: E0813 07:16:35.365599 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:16:35.365753 kubelet[2277]: E0813 07:16:35.365730 2277 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" Aug 13 07:16:35.366386 kubelet[2277]: E0813 07:16:35.366356 2277 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:16:35.366942 kubelet[2277]: I0813 07:16:35.366913 2277 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:16:35.367471 kubelet[2277]: E0813 07:16:35.366222 2277 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b4250fa882e0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:16:35.35898574 +0000 UTC m=+0.529304207,LastTimestamp:2025-08-13 07:16:35.35898574 +0000 UTC m=+0.529304207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:16:35.381740 kubelet[2277]: I0813 07:16:35.380925 2277 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:16:35.382583 kubelet[2277]: I0813 07:16:35.382546 2277 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:16:35.382625 kubelet[2277]: I0813 07:16:35.382597 2277 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:16:35.382657 kubelet[2277]: I0813 07:16:35.382638 2277 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:16:35.382722 kubelet[2277]: E0813 07:16:35.382698 2277 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:16:35.385729 kubelet[2277]: W0813 07:16:35.385672 2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Aug 13 07:16:35.385781 kubelet[2277]: E0813 07:16:35.385738 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:35.399304 kubelet[2277]: I0813 07:16:35.399281 2277 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:16:35.399422 kubelet[2277]: I0813 07:16:35.399393 2277 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:16:35.399422 kubelet[2277]: I0813 07:16:35.399421 2277 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:16:35.466770 kubelet[2277]: E0813 07:16:35.466734 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:16:35.482965 kubelet[2277]: E0813 07:16:35.482910 2277 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:16:35.566983 kubelet[2277]: E0813 07:16:35.566831 2277 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:16:35.566983 kubelet[2277]: E0813 07:16:35.566936 2277 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" Aug 13 07:16:35.667505 kubelet[2277]: E0813 07:16:35.667419 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:16:35.683634 kubelet[2277]: E0813 07:16:35.683590 2277 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:16:35.768302 kubelet[2277]: E0813 07:16:35.768238 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:16:35.869243 kubelet[2277]: E0813 07:16:35.869140 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:16:35.968093 kubelet[2277]: E0813 07:16:35.968023 2277 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms" Aug 13 07:16:35.970150 kubelet[2277]: E0813 07:16:35.970108 2277 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:16:36.031360 kubelet[2277]: I0813 07:16:36.031308 2277 policy_none.go:49] "None policy: Start" Aug 13 07:16:36.032523 kubelet[2277]: I0813 07:16:36.032493 2277 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:16:36.032610 kubelet[2277]: I0813 07:16:36.032542 2277 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:16:36.043387 
kubelet[2277]: I0813 07:16:36.043321 2277 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:16:36.043603 kubelet[2277]: I0813 07:16:36.043585 2277 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:16:36.043643 kubelet[2277]: I0813 07:16:36.043604 2277 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:16:36.044594 kubelet[2277]: I0813 07:16:36.044539 2277 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:16:36.045646 kubelet[2277]: E0813 07:16:36.045618 2277 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 07:16:36.145625 kubelet[2277]: I0813 07:16:36.145455 2277 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:16:36.145948 kubelet[2277]: E0813 07:16:36.145902 2277 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Aug 13 07:16:36.168566 kubelet[2277]: I0813 07:16:36.168516 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f306aea8f84e5b224399cc750986288-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1f306aea8f84e5b224399cc750986288\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:16:36.168566 kubelet[2277]: I0813 07:16:36.168562 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 
13 07:16:36.168715 kubelet[2277]: I0813 07:16:36.168599 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:16:36.168715 kubelet[2277]: I0813 07:16:36.168620 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:16:36.168715 kubelet[2277]: I0813 07:16:36.168637 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f306aea8f84e5b224399cc750986288-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1f306aea8f84e5b224399cc750986288\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:16:36.168715 kubelet[2277]: I0813 07:16:36.168654 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f306aea8f84e5b224399cc750986288-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1f306aea8f84e5b224399cc750986288\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:16:36.168715 kubelet[2277]: I0813 07:16:36.168679 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:16:36.168855 
kubelet[2277]: I0813 07:16:36.168697 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:16:36.168855 kubelet[2277]: I0813 07:16:36.168714 2277 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:16:36.201488 kubelet[2277]: W0813 07:16:36.201378 2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Aug 13 07:16:36.201488 kubelet[2277]: E0813 07:16:36.201476 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:36.348390 kubelet[2277]: I0813 07:16:36.348353 2277 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:16:36.348825 kubelet[2277]: E0813 07:16:36.348790 2277 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Aug 13 07:16:36.390944 kubelet[2277]: E0813 07:16:36.390881 2277 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:36.391610 containerd[1565]: time="2025-08-13T07:16:36.391562362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1f306aea8f84e5b224399cc750986288,Namespace:kube-system,Attempt:0,}" Aug 13 07:16:36.392812 kubelet[2277]: E0813 07:16:36.392772 2277 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:36.393153 containerd[1565]: time="2025-08-13T07:16:36.393129310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 13 07:16:36.394465 kubelet[2277]: E0813 07:16:36.394433 2277 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:36.395216 containerd[1565]: time="2025-08-13T07:16:36.395170847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 13 07:16:36.453032 kubelet[2277]: W0813 07:16:36.452850 2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Aug 13 07:16:36.453032 kubelet[2277]: E0813 07:16:36.452946 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: 
connect: connection refused" logger="UnhandledError" Aug 13 07:16:36.502361 kubelet[2277]: W0813 07:16:36.502231 2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Aug 13 07:16:36.502482 kubelet[2277]: E0813 07:16:36.502370 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:36.642720 kubelet[2277]: W0813 07:16:36.642661 2277 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Aug 13 07:16:36.642720 kubelet[2277]: E0813 07:16:36.642706 2277 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:36.750776 kubelet[2277]: I0813 07:16:36.750680 2277 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:16:36.751138 kubelet[2277]: E0813 07:16:36.751095 2277 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Aug 13 07:16:36.768897 kubelet[2277]: E0813 07:16:36.768834 2277 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="1.6s" Aug 13 07:16:37.236135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216065408.mount: Deactivated successfully. Aug 13 07:16:37.241102 containerd[1565]: time="2025-08-13T07:16:37.241043025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:16:37.242974 containerd[1565]: time="2025-08-13T07:16:37.242925865Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:16:37.244130 containerd[1565]: time="2025-08-13T07:16:37.244081211Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:16:37.245197 containerd[1565]: time="2025-08-13T07:16:37.245156967Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:16:37.246077 containerd[1565]: time="2025-08-13T07:16:37.245997643Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:16:37.246846 containerd[1565]: time="2025-08-13T07:16:37.246802662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:16:37.247576 containerd[1565]: time="2025-08-13T07:16:37.247527441Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:16:37.250737 containerd[1565]: 
time="2025-08-13T07:16:37.250684079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:16:37.253091 containerd[1565]: time="2025-08-13T07:16:37.253054613Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 859.866352ms" Aug 13 07:16:37.253935 containerd[1565]: time="2025-08-13T07:16:37.253898334Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 858.633121ms" Aug 13 07:16:37.254836 containerd[1565]: time="2025-08-13T07:16:37.254775859Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 863.130251ms" Aug 13 07:16:37.527503 kubelet[2277]: E0813 07:16:37.527428 2277 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:16:37.553405 
kubelet[2277]: I0813 07:16:37.553359 2277 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:16:37.554069 kubelet[2277]: E0813 07:16:37.553963 2277 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Aug 13 07:16:37.613511 containerd[1565]: time="2025-08-13T07:16:37.613067488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:16:37.613511 containerd[1565]: time="2025-08-13T07:16:37.613142759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:16:37.613511 containerd[1565]: time="2025-08-13T07:16:37.613157046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:37.613511 containerd[1565]: time="2025-08-13T07:16:37.613282972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:37.622229 containerd[1565]: time="2025-08-13T07:16:37.621900898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:16:37.622229 containerd[1565]: time="2025-08-13T07:16:37.621990235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:16:37.622229 containerd[1565]: time="2025-08-13T07:16:37.622010072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:37.622229 containerd[1565]: time="2025-08-13T07:16:37.622120008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:37.623686 containerd[1565]: time="2025-08-13T07:16:37.623268762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:16:37.623686 containerd[1565]: time="2025-08-13T07:16:37.623313095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:16:37.623686 containerd[1565]: time="2025-08-13T07:16:37.623389348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:37.623686 containerd[1565]: time="2025-08-13T07:16:37.623548236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:37.755090 containerd[1565]: time="2025-08-13T07:16:37.755044775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"36f6eaabac670024676f40f09ed0464175090e6ba7b606c3b91ae365c85a04dc\"" Aug 13 07:16:37.758325 kubelet[2277]: E0813 07:16:37.757998 2277 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:37.761388 containerd[1565]: time="2025-08-13T07:16:37.761297687Z" level=info msg="CreateContainer within sandbox \"36f6eaabac670024676f40f09ed0464175090e6ba7b606c3b91ae365c85a04dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:16:37.761854 containerd[1565]: time="2025-08-13T07:16:37.761825015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"543382577b22f77392f7ccc1620bbb7ddd31378b959d900c81b56a11b36345b4\"" Aug 13 07:16:37.762325 kubelet[2277]: E0813 07:16:37.762302 2277 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:37.763625 containerd[1565]: time="2025-08-13T07:16:37.763596506Z" level=info msg="CreateContainer within sandbox \"543382577b22f77392f7ccc1620bbb7ddd31378b959d900c81b56a11b36345b4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:16:37.766267 containerd[1565]: time="2025-08-13T07:16:37.766234812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1f306aea8f84e5b224399cc750986288,Namespace:kube-system,Attempt:0,} returns sandbox id \"acad90556b9e9d289f9decdb72944aaf71d323bec3c4f76fae4e0e23a8e6da76\"" Aug 13 07:16:37.767187 kubelet[2277]: E0813 07:16:37.767168 2277 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:37.768875 containerd[1565]: time="2025-08-13T07:16:37.768808597Z" level=info msg="CreateContainer within sandbox \"acad90556b9e9d289f9decdb72944aaf71d323bec3c4f76fae4e0e23a8e6da76\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:16:37.795942 containerd[1565]: time="2025-08-13T07:16:37.795810045Z" level=info msg="CreateContainer within sandbox \"36f6eaabac670024676f40f09ed0464175090e6ba7b606c3b91ae365c85a04dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d4358756fbf647abf3c2cd9f623aec8f3b0fa98e5f4c09f35d4364937c141806\"" Aug 13 07:16:37.796673 containerd[1565]: time="2025-08-13T07:16:37.796631756Z" level=info msg="StartContainer for \"d4358756fbf647abf3c2cd9f623aec8f3b0fa98e5f4c09f35d4364937c141806\"" Aug 13 07:16:37.802790 containerd[1565]: 
time="2025-08-13T07:16:37.802753201Z" level=info msg="CreateContainer within sandbox \"543382577b22f77392f7ccc1620bbb7ddd31378b959d900c81b56a11b36345b4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"087bf0f34e5d4fb80932fa40b93f5b6ab4e23da5cb3b902840c065180c18c0bc\"" Aug 13 07:16:37.803199 containerd[1565]: time="2025-08-13T07:16:37.803178819Z" level=info msg="StartContainer for \"087bf0f34e5d4fb80932fa40b93f5b6ab4e23da5cb3b902840c065180c18c0bc\"" Aug 13 07:16:37.804916 containerd[1565]: time="2025-08-13T07:16:37.804887472Z" level=info msg="CreateContainer within sandbox \"acad90556b9e9d289f9decdb72944aaf71d323bec3c4f76fae4e0e23a8e6da76\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"10e9ea2dd9c0a08990c1fbd1f717782e424038f3a8552cab29ce08de6ad3efd5\"" Aug 13 07:16:37.805534 containerd[1565]: time="2025-08-13T07:16:37.805351322Z" level=info msg="StartContainer for \"10e9ea2dd9c0a08990c1fbd1f717782e424038f3a8552cab29ce08de6ad3efd5\"" Aug 13 07:16:37.894702 containerd[1565]: time="2025-08-13T07:16:37.894538836Z" level=info msg="StartContainer for \"d4358756fbf647abf3c2cd9f623aec8f3b0fa98e5f4c09f35d4364937c141806\" returns successfully" Aug 13 07:16:37.903775 containerd[1565]: time="2025-08-13T07:16:37.902647658Z" level=info msg="StartContainer for \"087bf0f34e5d4fb80932fa40b93f5b6ab4e23da5cb3b902840c065180c18c0bc\" returns successfully" Aug 13 07:16:37.903775 containerd[1565]: time="2025-08-13T07:16:37.903086570Z" level=info msg="StartContainer for \"10e9ea2dd9c0a08990c1fbd1f717782e424038f3a8552cab29ce08de6ad3efd5\" returns successfully" Aug 13 07:16:38.395089 kubelet[2277]: E0813 07:16:38.394772 2277 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:38.395762 kubelet[2277]: E0813 07:16:38.395689 2277 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:38.397876 kubelet[2277]: E0813 07:16:38.397793 2277 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:39.156240 kubelet[2277]: I0813 07:16:39.156209 2277 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:16:39.399714 kubelet[2277]: E0813 07:16:39.399673 2277 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:39.399714 kubelet[2277]: E0813 07:16:39.399673 2277 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:39.516163 kubelet[2277]: E0813 07:16:39.516112 2277 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 07:16:39.660889 kubelet[2277]: E0813 07:16:39.660742 2277 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.185b4250fa882e0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:16:35.35898574 +0000 UTC m=+0.529304207,LastTimestamp:2025-08-13 07:16:35.35898574 +0000 UTC m=+0.529304207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:16:39.661243 kubelet[2277]: I0813 07:16:39.661207 
2277 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 07:16:39.661243 kubelet[2277]: E0813 07:16:39.661245 2277 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 07:16:40.357756 kubelet[2277]: I0813 07:16:40.357706 2277 apiserver.go:52] "Watching apiserver" Aug 13 07:16:40.364115 kubelet[2277]: I0813 07:16:40.364087 2277 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:16:42.077554 systemd[1]: Reloading requested from client PID 2557 ('systemctl') (unit session-7.scope)... Aug 13 07:16:42.077570 systemd[1]: Reloading... Aug 13 07:16:42.151374 zram_generator::config[2599]: No configuration found. Aug 13 07:16:42.271532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:16:42.355623 systemd[1]: Reloading finished in 277 ms. Aug 13 07:16:42.388941 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:16:42.407734 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:16:42.408233 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:16:42.419555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:16:42.583234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:16:42.588473 (kubelet)[2651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:16:42.631452 kubelet[2651]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 07:16:42.631452 kubelet[2651]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:16:42.631905 kubelet[2651]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:16:42.631905 kubelet[2651]: I0813 07:16:42.631814 2651 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:16:42.638209 kubelet[2651]: I0813 07:16:42.638162 2651 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:16:42.638209 kubelet[2651]: I0813 07:16:42.638187 2651 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:16:42.638405 kubelet[2651]: I0813 07:16:42.638391 2651 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:16:42.639631 kubelet[2651]: I0813 07:16:42.639604 2651 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 07:16:42.641763 kubelet[2651]: I0813 07:16:42.641702 2651 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:16:42.645663 kubelet[2651]: E0813 07:16:42.645623 2651 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:16:42.645663 kubelet[2651]: I0813 07:16:42.645651 2651 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Aug 13 07:16:42.650561 kubelet[2651]: I0813 07:16:42.650524 2651 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 07:16:42.651023 kubelet[2651]: I0813 07:16:42.650993 2651 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:16:42.651180 kubelet[2651]: I0813 07:16:42.651137 2651 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:16:42.651333 kubelet[2651]: I0813 07:16:42.651166 2651 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerR
eservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 07:16:42.651428 kubelet[2651]: I0813 07:16:42.651355 2651 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:16:42.651428 kubelet[2651]: I0813 07:16:42.651373 2651 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:16:42.651428 kubelet[2651]: I0813 07:16:42.651407 2651 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:16:42.651529 kubelet[2651]: I0813 07:16:42.651514 2651 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:16:42.651529 kubelet[2651]: I0813 07:16:42.651530 2651 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:16:42.651570 kubelet[2651]: I0813 07:16:42.651557 2651 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:16:42.651570 kubelet[2651]: I0813 07:16:42.651567 2651 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:16:42.652761 kubelet[2651]: I0813 07:16:42.652683 2651 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:16:42.654316 kubelet[2651]: I0813 07:16:42.654299 2651 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:16:42.654931 kubelet[2651]: I0813 07:16:42.654915 2651 server.go:1274] "Started kubelet" Aug 13 07:16:42.656137 kubelet[2651]: I0813 07:16:42.656106 2651 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:16:42.656278 kubelet[2651]: I0813 07:16:42.656181 2651 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:16:42.656488 kubelet[2651]: I0813 07:16:42.656465 2651 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:16:42.657076 kubelet[2651]: I0813 07:16:42.657051 2651 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:16:42.659692 kubelet[2651]: E0813 07:16:42.659204 2651 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:16:42.660129 kubelet[2651]: I0813 07:16:42.660087 2651 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:16:42.660290 kubelet[2651]: I0813 07:16:42.660259 2651 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:16:42.663663 kubelet[2651]: I0813 07:16:42.663634 2651 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:16:42.665353 kubelet[2651]: I0813 07:16:42.663881 2651 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:16:42.665353 kubelet[2651]: I0813 07:16:42.664162 2651 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:16:42.666878 kubelet[2651]: I0813 07:16:42.666855 2651 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:16:42.666993 kubelet[2651]: I0813 07:16:42.666970 2651 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:16:42.669608 kubelet[2651]: I0813 07:16:42.668890 2651 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:16:42.675870 kubelet[2651]: I0813 07:16:42.675705 2651 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:16:42.677756 kubelet[2651]: I0813 07:16:42.677731 2651 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:16:42.677756 kubelet[2651]: I0813 07:16:42.677763 2651 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:16:42.677876 kubelet[2651]: I0813 07:16:42.677783 2651 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:16:42.677876 kubelet[2651]: E0813 07:16:42.677843 2651 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:16:42.722261 kubelet[2651]: I0813 07:16:42.722222 2651 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:16:42.722261 kubelet[2651]: I0813 07:16:42.722245 2651 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:16:42.722261 kubelet[2651]: I0813 07:16:42.722270 2651 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:16:42.722472 kubelet[2651]: I0813 07:16:42.722447 2651 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:16:42.722472 kubelet[2651]: I0813 07:16:42.722459 2651 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:16:42.722527 kubelet[2651]: I0813 07:16:42.722477 2651 policy_none.go:49] "None policy: Start" Aug 13 07:16:42.723141 kubelet[2651]: I0813 07:16:42.723112 2651 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:16:42.723176 kubelet[2651]: I0813 07:16:42.723163 2651 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:16:42.723413 kubelet[2651]: I0813 07:16:42.723398 2651 state_mem.go:75] "Updated machine memory state" Aug 13 07:16:42.725115 kubelet[2651]: I0813 07:16:42.725086 2651 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:16:42.726107 kubelet[2651]: I0813 07:16:42.725518 2651 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:16:42.726107 kubelet[2651]: I0813 07:16:42.725541 2651 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:16:42.726107 kubelet[2651]: I0813 07:16:42.725926 2651 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:16:42.832106 kubelet[2651]: I0813 07:16:42.831974 2651 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:16:42.843929 kubelet[2651]: I0813 07:16:42.843706 2651 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 07:16:42.843929 kubelet[2651]: I0813 07:16:42.843799 2651 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 07:16:42.864638 kubelet[2651]: I0813 07:16:42.864578 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:16:42.864638 kubelet[2651]: I0813 07:16:42.864624 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f306aea8f84e5b224399cc750986288-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1f306aea8f84e5b224399cc750986288\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:16:42.864638 kubelet[2651]: I0813 07:16:42.864645 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:16:42.864872 kubelet[2651]: I0813 07:16:42.864672 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:16:42.864872 kubelet[2651]: I0813 07:16:42.864728 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:16:42.864872 kubelet[2651]: I0813 07:16:42.864771 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f306aea8f84e5b224399cc750986288-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1f306aea8f84e5b224399cc750986288\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:16:42.864872 kubelet[2651]: I0813 07:16:42.864790 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f306aea8f84e5b224399cc750986288-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1f306aea8f84e5b224399cc750986288\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:16:42.864872 kubelet[2651]: I0813 07:16:42.864808 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:16:42.864981 kubelet[2651]: I0813 07:16:42.864826 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:16:43.089479 kubelet[2651]: E0813 07:16:43.089424 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:43.089744 kubelet[2651]: E0813 07:16:43.089713 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:43.089875 kubelet[2651]: E0813 07:16:43.089847 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:43.652999 kubelet[2651]: I0813 07:16:43.652937 2651 apiserver.go:52] "Watching apiserver" Aug 13 07:16:43.664881 kubelet[2651]: I0813 07:16:43.664836 2651 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:16:43.690318 kubelet[2651]: E0813 07:16:43.689874 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:43.690318 kubelet[2651]: E0813 07:16:43.689891 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:43.690318 kubelet[2651]: E0813 07:16:43.690079 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:43.711008 kubelet[2651]: I0813 07:16:43.710923 2651 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.710896699 podStartE2EDuration="1.710896699s" podCreationTimestamp="2025-08-13 07:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:16:43.710822457 +0000 UTC m=+1.115684844" watchObservedRunningTime="2025-08-13 07:16:43.710896699 +0000 UTC m=+1.115759087" Aug 13 07:16:43.719357 kubelet[2651]: I0813 07:16:43.718752 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.718730691 podStartE2EDuration="1.718730691s" podCreationTimestamp="2025-08-13 07:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:16:43.718404698 +0000 UTC m=+1.123267085" watchObservedRunningTime="2025-08-13 07:16:43.718730691 +0000 UTC m=+1.123593078" Aug 13 07:16:43.744361 kubelet[2651]: I0813 07:16:43.744260 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.744237371 podStartE2EDuration="1.744237371s" podCreationTimestamp="2025-08-13 07:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:16:43.732361521 +0000 UTC m=+1.137223907" watchObservedRunningTime="2025-08-13 07:16:43.744237371 +0000 UTC m=+1.149099758" Aug 13 07:16:44.690861 kubelet[2651]: E0813 07:16:44.690811 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:45.882635 kubelet[2651]: E0813 07:16:45.882582 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:47.351534 kubelet[2651]: I0813 07:16:47.351490 2651 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:16:47.352180 containerd[1565]: time="2025-08-13T07:16:47.351945598Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:16:47.352568 kubelet[2651]: I0813 07:16:47.352258 2651 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:16:48.400132 kubelet[2651]: I0813 07:16:48.400073 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xtwp\" (UniqueName: \"kubernetes.io/projected/2bec1054-84fb-4f55-a8aa-6cc66548ffca-kube-api-access-9xtwp\") pod \"kube-proxy-x4lvb\" (UID: \"2bec1054-84fb-4f55-a8aa-6cc66548ffca\") " pod="kube-system/kube-proxy-x4lvb" Aug 13 07:16:48.400132 kubelet[2651]: I0813 07:16:48.400130 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2bec1054-84fb-4f55-a8aa-6cc66548ffca-kube-proxy\") pod \"kube-proxy-x4lvb\" (UID: \"2bec1054-84fb-4f55-a8aa-6cc66548ffca\") " pod="kube-system/kube-proxy-x4lvb" Aug 13 07:16:48.400689 kubelet[2651]: I0813 07:16:48.400161 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bec1054-84fb-4f55-a8aa-6cc66548ffca-xtables-lock\") pod \"kube-proxy-x4lvb\" (UID: \"2bec1054-84fb-4f55-a8aa-6cc66548ffca\") " pod="kube-system/kube-proxy-x4lvb" Aug 13 07:16:48.400689 kubelet[2651]: I0813 07:16:48.400184 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bec1054-84fb-4f55-a8aa-6cc66548ffca-lib-modules\") pod 
\"kube-proxy-x4lvb\" (UID: \"2bec1054-84fb-4f55-a8aa-6cc66548ffca\") " pod="kube-system/kube-proxy-x4lvb" Aug 13 07:16:48.501107 kubelet[2651]: I0813 07:16:48.501032 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tm87\" (UniqueName: \"kubernetes.io/projected/b0b42944-3dc6-4ddf-896e-e07933b4df9d-kube-api-access-8tm87\") pod \"tigera-operator-5bf8dfcb4-sgf46\" (UID: \"b0b42944-3dc6-4ddf-896e-e07933b4df9d\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-sgf46" Aug 13 07:16:48.501268 kubelet[2651]: I0813 07:16:48.501124 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b0b42944-3dc6-4ddf-896e-e07933b4df9d-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-sgf46\" (UID: \"b0b42944-3dc6-4ddf-896e-e07933b4df9d\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-sgf46" Aug 13 07:16:48.670705 kubelet[2651]: E0813 07:16:48.670525 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:48.671982 containerd[1565]: time="2025-08-13T07:16:48.671927675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x4lvb,Uid:2bec1054-84fb-4f55-a8aa-6cc66548ffca,Namespace:kube-system,Attempt:0,}" Aug 13 07:16:48.786513 containerd[1565]: time="2025-08-13T07:16:48.786411839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-sgf46,Uid:b0b42944-3dc6-4ddf-896e-e07933b4df9d,Namespace:tigera-operator,Attempt:0,}" Aug 13 07:16:48.831597 containerd[1565]: time="2025-08-13T07:16:48.831413760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:16:48.831758 containerd[1565]: time="2025-08-13T07:16:48.831623578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:16:48.831758 containerd[1565]: time="2025-08-13T07:16:48.831642033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:48.831911 containerd[1565]: time="2025-08-13T07:16:48.831790279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:48.832618 containerd[1565]: time="2025-08-13T07:16:48.832474672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:16:48.832751 containerd[1565]: time="2025-08-13T07:16:48.832556120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:16:48.832751 containerd[1565]: time="2025-08-13T07:16:48.832653439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:48.833107 containerd[1565]: time="2025-08-13T07:16:48.833012844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:48.879827 containerd[1565]: time="2025-08-13T07:16:48.879432389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x4lvb,Uid:2bec1054-84fb-4f55-a8aa-6cc66548ffca,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbfa3a696012dc4906f3106f80ead7bb2bacb298e1dd7c53d535b6879799a9a8\"" Aug 13 07:16:48.880965 kubelet[2651]: E0813 07:16:48.880417 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:48.883463 containerd[1565]: time="2025-08-13T07:16:48.883415253Z" level=info msg="CreateContainer within sandbox \"dbfa3a696012dc4906f3106f80ead7bb2bacb298e1dd7c53d535b6879799a9a8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:16:48.899373 containerd[1565]: time="2025-08-13T07:16:48.899302708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-sgf46,Uid:b0b42944-3dc6-4ddf-896e-e07933b4df9d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"90fbfeea07b6d9c25609a4c2236cc508a31f96f67bd57a3d08d6acb839cfe2a3\"" Aug 13 07:16:48.901279 containerd[1565]: time="2025-08-13T07:16:48.901234501Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 07:16:48.906314 containerd[1565]: time="2025-08-13T07:16:48.906187327Z" level=info msg="CreateContainer within sandbox \"dbfa3a696012dc4906f3106f80ead7bb2bacb298e1dd7c53d535b6879799a9a8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e52691f6be9eac867b66a4862b89e3e9e09d66823a4e552a38d3253d0ebc8611\"" Aug 13 07:16:48.907016 containerd[1565]: time="2025-08-13T07:16:48.906965895Z" level=info msg="StartContainer for \"e52691f6be9eac867b66a4862b89e3e9e09d66823a4e552a38d3253d0ebc8611\"" Aug 13 07:16:48.976144 containerd[1565]: time="2025-08-13T07:16:48.975963838Z" level=info msg="StartContainer for 
\"e52691f6be9eac867b66a4862b89e3e9e09d66823a4e552a38d3253d0ebc8611\" returns successfully" Aug 13 07:16:49.450381 kubelet[2651]: E0813 07:16:49.450310 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:49.699847 kubelet[2651]: E0813 07:16:49.699795 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:49.700740 kubelet[2651]: E0813 07:16:49.700633 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:49.734245 kubelet[2651]: I0813 07:16:49.734176 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x4lvb" podStartSLOduration=1.734158817 podStartE2EDuration="1.734158817s" podCreationTimestamp="2025-08-13 07:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:16:49.73415409 +0000 UTC m=+7.139016477" watchObservedRunningTime="2025-08-13 07:16:49.734158817 +0000 UTC m=+7.139021204" Aug 13 07:16:50.699532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3394372232.mount: Deactivated successfully. 
Aug 13 07:16:51.082716 containerd[1565]: time="2025-08-13T07:16:51.082657582Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:51.083536 containerd[1565]: time="2025-08-13T07:16:51.083481662Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 07:16:51.084782 containerd[1565]: time="2025-08-13T07:16:51.084754458Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:51.086839 containerd[1565]: time="2025-08-13T07:16:51.086804196Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:51.087552 containerd[1565]: time="2025-08-13T07:16:51.087503058Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.186218139s" Aug 13 07:16:51.087600 containerd[1565]: time="2025-08-13T07:16:51.087550956Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 07:16:51.089810 containerd[1565]: time="2025-08-13T07:16:51.089779858Z" level=info msg="CreateContainer within sandbox \"90fbfeea07b6d9c25609a4c2236cc508a31f96f67bd57a3d08d6acb839cfe2a3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 07:16:51.101896 containerd[1565]: time="2025-08-13T07:16:51.101848483Z" level=info msg="CreateContainer within sandbox 
\"90fbfeea07b6d9c25609a4c2236cc508a31f96f67bd57a3d08d6acb839cfe2a3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"13a5d30702b4012134e7b8dff9929147e087348417dfbcbeee7e4ae0017a6251\"" Aug 13 07:16:51.103100 containerd[1565]: time="2025-08-13T07:16:51.102255901Z" level=info msg="StartContainer for \"13a5d30702b4012134e7b8dff9929147e087348417dfbcbeee7e4ae0017a6251\"" Aug 13 07:16:51.156669 containerd[1565]: time="2025-08-13T07:16:51.156624997Z" level=info msg="StartContainer for \"13a5d30702b4012134e7b8dff9929147e087348417dfbcbeee7e4ae0017a6251\" returns successfully" Aug 13 07:16:52.427268 kubelet[2651]: E0813 07:16:52.427219 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:52.454850 kubelet[2651]: I0813 07:16:52.454774 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-sgf46" podStartSLOduration=2.266919545 podStartE2EDuration="4.454745596s" podCreationTimestamp="2025-08-13 07:16:48 +0000 UTC" firstStartedPulling="2025-08-13 07:16:48.900561171 +0000 UTC m=+6.305423548" lastFinishedPulling="2025-08-13 07:16:51.088387212 +0000 UTC m=+8.493249599" observedRunningTime="2025-08-13 07:16:51.719713724 +0000 UTC m=+9.124576111" watchObservedRunningTime="2025-08-13 07:16:52.454745596 +0000 UTC m=+9.859607983" Aug 13 07:16:52.710236 kubelet[2651]: E0813 07:16:52.709287 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:54.049483 update_engine[1550]: I20250813 07:16:54.049392 1550 update_attempter.cc:509] Updating boot flags... 
Aug 13 07:16:54.077372 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3008) Aug 13 07:16:54.127451 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3008) Aug 13 07:16:54.166451 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3008) Aug 13 07:16:55.889598 kubelet[2651]: E0813 07:16:55.889212 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:56.719372 kubelet[2651]: E0813 07:16:56.717644 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:56.846843 sudo[1769]: pam_unix(sudo:session): session closed for user root Aug 13 07:16:56.849915 sshd[1762]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:56.855153 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:45636.service: Deactivated successfully. Aug 13 07:16:56.863929 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:16:56.865317 systemd-logind[1548]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:16:56.866827 systemd-logind[1548]: Removed session 7. 
Aug 13 07:16:59.178425 kubelet[2651]: I0813 07:16:59.178364 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/70cf688c-6a90-4569-a54b-f8186a876ec2-typha-certs\") pod \"calico-typha-646fcb9b65-ctkn8\" (UID: \"70cf688c-6a90-4569-a54b-f8186a876ec2\") " pod="calico-system/calico-typha-646fcb9b65-ctkn8" Aug 13 07:16:59.178425 kubelet[2651]: I0813 07:16:59.178421 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w92ds\" (UniqueName: \"kubernetes.io/projected/70cf688c-6a90-4569-a54b-f8186a876ec2-kube-api-access-w92ds\") pod \"calico-typha-646fcb9b65-ctkn8\" (UID: \"70cf688c-6a90-4569-a54b-f8186a876ec2\") " pod="calico-system/calico-typha-646fcb9b65-ctkn8" Aug 13 07:16:59.178425 kubelet[2651]: I0813 07:16:59.178450 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70cf688c-6a90-4569-a54b-f8186a876ec2-tigera-ca-bundle\") pod \"calico-typha-646fcb9b65-ctkn8\" (UID: \"70cf688c-6a90-4569-a54b-f8186a876ec2\") " pod="calico-system/calico-typha-646fcb9b65-ctkn8" Aug 13 07:16:59.428838 kubelet[2651]: E0813 07:16:59.428553 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:59.435051 containerd[1565]: time="2025-08-13T07:16:59.434985247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-646fcb9b65-ctkn8,Uid:70cf688c-6a90-4569-a54b-f8186a876ec2,Namespace:calico-system,Attempt:0,}" Aug 13 07:16:59.462931 containerd[1565]: time="2025-08-13T07:16:59.461951738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:16:59.462931 containerd[1565]: time="2025-08-13T07:16:59.462019487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:16:59.462931 containerd[1565]: time="2025-08-13T07:16:59.462033860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:59.462931 containerd[1565]: time="2025-08-13T07:16:59.462145228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:59.481282 kubelet[2651]: I0813 07:16:59.480985 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f-policysync\") pod \"calico-node-46t4b\" (UID: \"00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f\") " pod="calico-system/calico-node-46t4b" Aug 13 07:16:59.481282 kubelet[2651]: I0813 07:16:59.481026 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f-xtables-lock\") pod \"calico-node-46t4b\" (UID: \"00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f\") " pod="calico-system/calico-node-46t4b" Aug 13 07:16:59.481282 kubelet[2651]: I0813 07:16:59.481056 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwcdr\" (UniqueName: \"kubernetes.io/projected/00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f-kube-api-access-rwcdr\") pod \"calico-node-46t4b\" (UID: \"00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f\") " pod="calico-system/calico-node-46t4b" Aug 13 07:16:59.481282 kubelet[2651]: I0813 07:16:59.481075 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f-var-run-calico\") pod \"calico-node-46t4b\" (UID: \"00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f\") " pod="calico-system/calico-node-46t4b" Aug 13 07:16:59.481282 kubelet[2651]: I0813 07:16:59.481091 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f-cni-net-dir\") pod \"calico-node-46t4b\" (UID: \"00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f\") " pod="calico-system/calico-node-46t4b" Aug 13 07:16:59.481651 kubelet[2651]: I0813 07:16:59.481105 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f-cni-log-dir\") pod \"calico-node-46t4b\" (UID: \"00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f\") " pod="calico-system/calico-node-46t4b" Aug 13 07:16:59.481651 kubelet[2651]: I0813 07:16:59.481118 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f-tigera-ca-bundle\") pod \"calico-node-46t4b\" (UID: \"00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f\") " pod="calico-system/calico-node-46t4b" Aug 13 07:16:59.481651 kubelet[2651]: I0813 07:16:59.481131 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f-var-lib-calico\") pod \"calico-node-46t4b\" (UID: \"00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f\") " pod="calico-system/calico-node-46t4b" Aug 13 07:16:59.481651 kubelet[2651]: I0813 07:16:59.481147 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f-cni-bin-dir\") pod \"calico-node-46t4b\" (UID: \"00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f\") " pod="calico-system/calico-node-46t4b" Aug 13 07:16:59.481651 kubelet[2651]: I0813 07:16:59.481160 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f-flexvol-driver-host\") pod \"calico-node-46t4b\" (UID: \"00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f\") " pod="calico-system/calico-node-46t4b" Aug 13 07:16:59.481909 kubelet[2651]: I0813 07:16:59.481173 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f-lib-modules\") pod \"calico-node-46t4b\" (UID: \"00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f\") " pod="calico-system/calico-node-46t4b" Aug 13 07:16:59.481909 kubelet[2651]: I0813 07:16:59.481185 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f-node-certs\") pod \"calico-node-46t4b\" (UID: \"00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f\") " pod="calico-system/calico-node-46t4b" Aug 13 07:16:59.530145 containerd[1565]: time="2025-08-13T07:16:59.530087194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-646fcb9b65-ctkn8,Uid:70cf688c-6a90-4569-a54b-f8186a876ec2,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5b215a6749519b79b367b37625ab1fa7d9ccffbddeeb03ce4e7b432a276bdad\"" Aug 13 07:16:59.535074 kubelet[2651]: E0813 07:16:59.534825 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:59.535929 containerd[1565]: time="2025-08-13T07:16:59.535893068Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:16:59.583928 kubelet[2651]: E0813 07:16:59.583865 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.583928 kubelet[2651]: W0813 07:16:59.583902 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.583928 kubelet[2651]: E0813 07:16:59.583933 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.586695 kubelet[2651]: E0813 07:16:59.586670 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.586695 kubelet[2651]: W0813 07:16:59.586686 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.586695 kubelet[2651]: E0813 07:16:59.586699 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.591738 kubelet[2651]: E0813 07:16:59.591707 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.591809 kubelet[2651]: W0813 07:16:59.591736 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.591809 kubelet[2651]: E0813 07:16:59.591765 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.684951 kubelet[2651]: E0813 07:16:59.684521 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2k55" podUID="f09470c1-c77d-44b2-8331-61723edd172c" Aug 13 07:16:59.694929 containerd[1565]: time="2025-08-13T07:16:59.694877904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-46t4b,Uid:00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f,Namespace:calico-system,Attempt:0,}" Aug 13 07:16:59.741436 containerd[1565]: time="2025-08-13T07:16:59.740866854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:16:59.741436 containerd[1565]: time="2025-08-13T07:16:59.740979404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:16:59.741436 containerd[1565]: time="2025-08-13T07:16:59.741001269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:59.741436 containerd[1565]: time="2025-08-13T07:16:59.741135034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:16:59.773076 kubelet[2651]: E0813 07:16:59.773021 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.773076 kubelet[2651]: W0813 07:16:59.773046 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.773076 kubelet[2651]: E0813 07:16:59.773070 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.773381 kubelet[2651]: E0813 07:16:59.773354 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.773381 kubelet[2651]: W0813 07:16:59.773363 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.773381 kubelet[2651]: E0813 07:16:59.773374 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.773671 kubelet[2651]: E0813 07:16:59.773640 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.773671 kubelet[2651]: W0813 07:16:59.773653 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.773671 kubelet[2651]: E0813 07:16:59.773664 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.773915 kubelet[2651]: E0813 07:16:59.773886 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.773915 kubelet[2651]: W0813 07:16:59.773899 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.773915 kubelet[2651]: E0813 07:16:59.773908 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.774160 kubelet[2651]: E0813 07:16:59.774141 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.774160 kubelet[2651]: W0813 07:16:59.774153 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.774160 kubelet[2651]: E0813 07:16:59.774163 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.774453 kubelet[2651]: E0813 07:16:59.774413 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.774453 kubelet[2651]: W0813 07:16:59.774425 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.774453 kubelet[2651]: E0813 07:16:59.774434 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.774697 kubelet[2651]: E0813 07:16:59.774662 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.774778 kubelet[2651]: W0813 07:16:59.774712 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.774778 kubelet[2651]: E0813 07:16:59.774724 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.775010 kubelet[2651]: E0813 07:16:59.774990 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.775010 kubelet[2651]: W0813 07:16:59.775002 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.775091 kubelet[2651]: E0813 07:16:59.775013 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.775304 kubelet[2651]: E0813 07:16:59.775288 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.775304 kubelet[2651]: W0813 07:16:59.775299 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.775304 kubelet[2651]: E0813 07:16:59.775309 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.775585 kubelet[2651]: E0813 07:16:59.775564 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.775585 kubelet[2651]: W0813 07:16:59.775578 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.775663 kubelet[2651]: E0813 07:16:59.775589 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.775801 kubelet[2651]: E0813 07:16:59.775783 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.775801 kubelet[2651]: W0813 07:16:59.775795 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.775801 kubelet[2651]: E0813 07:16:59.775803 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.776039 kubelet[2651]: E0813 07:16:59.776020 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.776039 kubelet[2651]: W0813 07:16:59.776031 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.776117 kubelet[2651]: E0813 07:16:59.776041 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.776280 kubelet[2651]: E0813 07:16:59.776262 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.776280 kubelet[2651]: W0813 07:16:59.776273 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.776280 kubelet[2651]: E0813 07:16:59.776281 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.776542 kubelet[2651]: E0813 07:16:59.776522 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.776542 kubelet[2651]: W0813 07:16:59.776538 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.776626 kubelet[2651]: E0813 07:16:59.776550 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.776855 kubelet[2651]: E0813 07:16:59.776821 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.776855 kubelet[2651]: W0813 07:16:59.776834 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.776855 kubelet[2651]: E0813 07:16:59.776844 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.777093 kubelet[2651]: E0813 07:16:59.777073 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.777093 kubelet[2651]: W0813 07:16:59.777085 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.777093 kubelet[2651]: E0813 07:16:59.777098 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.777365 kubelet[2651]: E0813 07:16:59.777320 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.777365 kubelet[2651]: W0813 07:16:59.777334 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.777365 kubelet[2651]: E0813 07:16:59.777361 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.777994 kubelet[2651]: E0813 07:16:59.777971 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.777994 kubelet[2651]: W0813 07:16:59.777987 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.777994 kubelet[2651]: E0813 07:16:59.777999 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.778230 kubelet[2651]: E0813 07:16:59.778210 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.778230 kubelet[2651]: W0813 07:16:59.778222 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.778230 kubelet[2651]: E0813 07:16:59.778230 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.778760 kubelet[2651]: E0813 07:16:59.778444 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.778760 kubelet[2651]: W0813 07:16:59.778456 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.778760 kubelet[2651]: E0813 07:16:59.778466 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.784077 kubelet[2651]: E0813 07:16:59.784053 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.784077 kubelet[2651]: W0813 07:16:59.784073 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.784199 kubelet[2651]: E0813 07:16:59.784094 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.784199 kubelet[2651]: I0813 07:16:59.784123 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f09470c1-c77d-44b2-8331-61723edd172c-registration-dir\") pod \"csi-node-driver-s2k55\" (UID: \"f09470c1-c77d-44b2-8331-61723edd172c\") " pod="calico-system/csi-node-driver-s2k55" Aug 13 07:16:59.784395 kubelet[2651]: E0813 07:16:59.784377 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.784395 kubelet[2651]: W0813 07:16:59.784390 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.784479 kubelet[2651]: E0813 07:16:59.784431 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.784645 kubelet[2651]: I0813 07:16:59.784449 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f09470c1-c77d-44b2-8331-61723edd172c-kubelet-dir\") pod \"csi-node-driver-s2k55\" (UID: \"f09470c1-c77d-44b2-8331-61723edd172c\") " pod="calico-system/csi-node-driver-s2k55" Aug 13 07:16:59.784771 kubelet[2651]: E0813 07:16:59.784735 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.784771 kubelet[2651]: W0813 07:16:59.784748 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.784771 kubelet[2651]: E0813 07:16:59.784759 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.785545 kubelet[2651]: E0813 07:16:59.785290 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.785545 kubelet[2651]: W0813 07:16:59.785301 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.785545 kubelet[2651]: E0813 07:16:59.785364 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.785684 kubelet[2651]: E0813 07:16:59.785614 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.785684 kubelet[2651]: W0813 07:16:59.785623 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.785684 kubelet[2651]: E0813 07:16:59.785637 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.785684 kubelet[2651]: I0813 07:16:59.785655 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f09470c1-c77d-44b2-8331-61723edd172c-socket-dir\") pod \"csi-node-driver-s2k55\" (UID: \"f09470c1-c77d-44b2-8331-61723edd172c\") " pod="calico-system/csi-node-driver-s2k55" Aug 13 07:16:59.786932 kubelet[2651]: E0813 07:16:59.786042 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.786932 kubelet[2651]: W0813 07:16:59.786055 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.786932 kubelet[2651]: E0813 07:16:59.786097 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.786932 kubelet[2651]: I0813 07:16:59.786156 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f09470c1-c77d-44b2-8331-61723edd172c-varrun\") pod \"csi-node-driver-s2k55\" (UID: \"f09470c1-c77d-44b2-8331-61723edd172c\") " pod="calico-system/csi-node-driver-s2k55" Aug 13 07:16:59.786932 kubelet[2651]: E0813 07:16:59.786402 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.786932 kubelet[2651]: W0813 07:16:59.786411 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.786932 kubelet[2651]: E0813 07:16:59.786462 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.786932 kubelet[2651]: E0813 07:16:59.786639 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.786932 kubelet[2651]: W0813 07:16:59.786651 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.787431 containerd[1565]: time="2025-08-13T07:16:59.786248179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-46t4b,Uid:00109e6f-ccdb-40d4-ab0f-bcba4f1bd22f,Namespace:calico-system,Attempt:0,} returns sandbox id \"d1f0cebbbaa87805d9cc2fed534a6a47d488f620f0669c92610d42a9468d1e59\"" Aug 13 07:16:59.787486 kubelet[2651]: E0813 07:16:59.786665 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.787486 kubelet[2651]: E0813 07:16:59.787024 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.787486 kubelet[2651]: W0813 07:16:59.787033 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.787486 kubelet[2651]: E0813 07:16:59.787049 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.787486 kubelet[2651]: I0813 07:16:59.787069 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqb4s\" (UniqueName: \"kubernetes.io/projected/f09470c1-c77d-44b2-8331-61723edd172c-kube-api-access-qqb4s\") pod \"csi-node-driver-s2k55\" (UID: \"f09470c1-c77d-44b2-8331-61723edd172c\") " pod="calico-system/csi-node-driver-s2k55" Aug 13 07:16:59.787486 kubelet[2651]: E0813 07:16:59.787321 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.787486 kubelet[2651]: W0813 07:16:59.787352 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.787486 kubelet[2651]: E0813 07:16:59.787375 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.787757 kubelet[2651]: E0813 07:16:59.787606 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.787757 kubelet[2651]: W0813 07:16:59.787614 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.787757 kubelet[2651]: E0813 07:16:59.787622 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.788004 kubelet[2651]: E0813 07:16:59.787967 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.788004 kubelet[2651]: W0813 07:16:59.787983 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.788004 kubelet[2651]: E0813 07:16:59.787994 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.788651 kubelet[2651]: E0813 07:16:59.788636 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.788729 kubelet[2651]: W0813 07:16:59.788716 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.788807 kubelet[2651]: E0813 07:16:59.788794 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.789531 kubelet[2651]: E0813 07:16:59.789517 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.789607 kubelet[2651]: W0813 07:16:59.789595 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.789665 kubelet[2651]: E0813 07:16:59.789654 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.789973 kubelet[2651]: E0813 07:16:59.789959 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.790048 kubelet[2651]: W0813 07:16:59.790037 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.790103 kubelet[2651]: E0813 07:16:59.790093 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.891750 kubelet[2651]: E0813 07:16:59.891703 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.891750 kubelet[2651]: W0813 07:16:59.891737 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.891973 kubelet[2651]: E0813 07:16:59.891769 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.892229 kubelet[2651]: E0813 07:16:59.892181 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.892229 kubelet[2651]: W0813 07:16:59.892213 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.892484 kubelet[2651]: E0813 07:16:59.892253 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.892633 kubelet[2651]: E0813 07:16:59.892612 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.892633 kubelet[2651]: W0813 07:16:59.892629 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.892705 kubelet[2651]: E0813 07:16:59.892646 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.892929 kubelet[2651]: E0813 07:16:59.892898 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.892929 kubelet[2651]: W0813 07:16:59.892925 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.892999 kubelet[2651]: E0813 07:16:59.892943 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.893259 kubelet[2651]: E0813 07:16:59.893238 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.893259 kubelet[2651]: W0813 07:16:59.893254 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.893365 kubelet[2651]: E0813 07:16:59.893271 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.893706 kubelet[2651]: E0813 07:16:59.893673 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.893769 kubelet[2651]: W0813 07:16:59.893704 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.893769 kubelet[2651]: E0813 07:16:59.893740 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.894035 kubelet[2651]: E0813 07:16:59.894014 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.894091 kubelet[2651]: W0813 07:16:59.894036 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.894091 kubelet[2651]: E0813 07:16:59.894069 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.894376 kubelet[2651]: E0813 07:16:59.894355 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.894376 kubelet[2651]: W0813 07:16:59.894371 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.894452 kubelet[2651]: E0813 07:16:59.894406 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.894636 kubelet[2651]: E0813 07:16:59.894616 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.894636 kubelet[2651]: W0813 07:16:59.894631 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.894712 kubelet[2651]: E0813 07:16:59.894666 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.894882 kubelet[2651]: E0813 07:16:59.894862 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.894882 kubelet[2651]: W0813 07:16:59.894877 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.894955 kubelet[2651]: E0813 07:16:59.894926 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.895151 kubelet[2651]: E0813 07:16:59.895132 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.895151 kubelet[2651]: W0813 07:16:59.895147 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.895222 kubelet[2651]: E0813 07:16:59.895180 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.895412 kubelet[2651]: E0813 07:16:59.895393 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.895412 kubelet[2651]: W0813 07:16:59.895408 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.895474 kubelet[2651]: E0813 07:16:59.895434 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.895650 kubelet[2651]: E0813 07:16:59.895631 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.895650 kubelet[2651]: W0813 07:16:59.895646 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.895719 kubelet[2651]: E0813 07:16:59.895679 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.895886 kubelet[2651]: E0813 07:16:59.895867 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.895886 kubelet[2651]: W0813 07:16:59.895882 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.895968 kubelet[2651]: E0813 07:16:59.895900 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.896156 kubelet[2651]: E0813 07:16:59.896137 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.896156 kubelet[2651]: W0813 07:16:59.896151 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.896215 kubelet[2651]: E0813 07:16:59.896171 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.896469 kubelet[2651]: E0813 07:16:59.896451 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.896469 kubelet[2651]: W0813 07:16:59.896465 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.896539 kubelet[2651]: E0813 07:16:59.896501 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.896722 kubelet[2651]: E0813 07:16:59.896702 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.896722 kubelet[2651]: W0813 07:16:59.896717 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.896812 kubelet[2651]: E0813 07:16:59.896749 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.897066 kubelet[2651]: E0813 07:16:59.897038 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.897066 kubelet[2651]: W0813 07:16:59.897061 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.897135 kubelet[2651]: E0813 07:16:59.897098 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.897373 kubelet[2651]: E0813 07:16:59.897356 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.897373 kubelet[2651]: W0813 07:16:59.897368 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.897502 kubelet[2651]: E0813 07:16:59.897407 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.897636 kubelet[2651]: E0813 07:16:59.897619 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.897636 kubelet[2651]: W0813 07:16:59.897630 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.897700 kubelet[2651]: E0813 07:16:59.897662 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.897937 kubelet[2651]: E0813 07:16:59.897898 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.897937 kubelet[2651]: W0813 07:16:59.897918 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.897937 kubelet[2651]: E0813 07:16:59.897935 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.898291 kubelet[2651]: E0813 07:16:59.898234 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.898291 kubelet[2651]: W0813 07:16:59.898266 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.898431 kubelet[2651]: E0813 07:16:59.898302 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.898715 kubelet[2651]: E0813 07:16:59.898688 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.898715 kubelet[2651]: W0813 07:16:59.898703 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.898841 kubelet[2651]: E0813 07:16:59.898813 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.899078 kubelet[2651]: E0813 07:16:59.899057 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.899078 kubelet[2651]: W0813 07:16:59.899074 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.899179 kubelet[2651]: E0813 07:16:59.899089 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:16:59.899452 kubelet[2651]: E0813 07:16:59.899430 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.899452 kubelet[2651]: W0813 07:16:59.899446 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.899571 kubelet[2651]: E0813 07:16:59.899459 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:16:59.905276 kubelet[2651]: E0813 07:16:59.905249 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:16:59.905276 kubelet[2651]: W0813 07:16:59.905267 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:16:59.905276 kubelet[2651]: E0813 07:16:59.905282 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:01.192010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1339834256.mount: Deactivated successfully. 
Aug 13 07:17:01.678125 kubelet[2651]: E0813 07:17:01.678054 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2k55" podUID="f09470c1-c77d-44b2-8331-61723edd172c" Aug 13 07:17:01.747095 containerd[1565]: time="2025-08-13T07:17:01.747040109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:01.747955 containerd[1565]: time="2025-08-13T07:17:01.747913328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 07:17:01.749114 containerd[1565]: time="2025-08-13T07:17:01.749081429Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:01.751415 containerd[1565]: time="2025-08-13T07:17:01.751360747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:01.751911 containerd[1565]: time="2025-08-13T07:17:01.751886517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.215823386s" Aug 13 07:17:01.752166 containerd[1565]: time="2025-08-13T07:17:01.751914873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 07:17:01.753057 containerd[1565]: time="2025-08-13T07:17:01.753015694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 07:17:01.766466 containerd[1565]: time="2025-08-13T07:17:01.766395399Z" level=info msg="CreateContainer within sandbox \"b5b215a6749519b79b367b37625ab1fa7d9ccffbddeeb03ce4e7b432a276bdad\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 07:17:01.780287 containerd[1565]: time="2025-08-13T07:17:01.780217839Z" level=info msg="CreateContainer within sandbox \"b5b215a6749519b79b367b37625ab1fa7d9ccffbddeeb03ce4e7b432a276bdad\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dfadac1c514952c220d627d11da453cbcea3a70fc20d226f1957b6bc1c546eab\"" Aug 13 07:17:01.784277 containerd[1565]: time="2025-08-13T07:17:01.784239136Z" level=info msg="StartContainer for \"dfadac1c514952c220d627d11da453cbcea3a70fc20d226f1957b6bc1c546eab\"" Aug 13 07:17:01.865060 containerd[1565]: time="2025-08-13T07:17:01.865010103Z" level=info msg="StartContainer for \"dfadac1c514952c220d627d11da453cbcea3a70fc20d226f1957b6bc1c546eab\" returns successfully" Aug 13 07:17:02.735683 kubelet[2651]: E0813 07:17:02.735600 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:02.746408 kubelet[2651]: I0813 07:17:02.746316 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-646fcb9b65-ctkn8" podStartSLOduration=1.528937037 podStartE2EDuration="3.746294774s" podCreationTimestamp="2025-08-13 07:16:59 +0000 UTC" firstStartedPulling="2025-08-13 07:16:59.535489211 +0000 UTC m=+16.940351598" lastFinishedPulling="2025-08-13 07:17:01.752846948 +0000 UTC m=+19.157709335" observedRunningTime="2025-08-13 07:17:02.745525553 +0000 UTC m=+20.150387950" 
watchObservedRunningTime="2025-08-13 07:17:02.746294774 +0000 UTC m=+20.151157161" Aug 13 07:17:02.799573 kubelet[2651]: E0813 07:17:02.799536 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:02.799573 kubelet[2651]: W0813 07:17:02.799557 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:02.799833 kubelet[2651]: E0813 07:17:02.799591 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:02.799989 kubelet[2651]: E0813 07:17:02.799967 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:02.799989 kubelet[2651]: W0813 07:17:02.799983 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:02.800061 kubelet[2651]: E0813 07:17:02.799996 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:02.800378 kubelet[2651]: E0813 07:17:02.800350 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:02.800378 kubelet[2651]: W0813 07:17:02.800375 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:02.800457 kubelet[2651]: E0813 07:17:02.800402 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:02.800649 kubelet[2651]: E0813 07:17:02.800635 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:02.800649 kubelet[2651]: W0813 07:17:02.800645 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:02.800720 kubelet[2651]: E0813 07:17:02.800653 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:02.819953 kubelet[2651]: E0813 07:17:02.818237 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:02.819953 kubelet[2651]: W0813 07:17:02.818257 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:02.819953 kubelet[2651]: E0813 07:17:02.818270 2651 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:03.413485 containerd[1565]: time="2025-08-13T07:17:03.413417452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:03.414444 containerd[1565]: time="2025-08-13T07:17:03.414409054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 07:17:03.415741 containerd[1565]: time="2025-08-13T07:17:03.415677926Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:03.418138 containerd[1565]: time="2025-08-13T07:17:03.418091183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:03.418919 containerd[1565]: time="2025-08-13T07:17:03.418877171Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.665824986s" Aug 13 07:17:03.419007 containerd[1565]: time="2025-08-13T07:17:03.418923669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:17:03.421508 containerd[1565]: time="2025-08-13T07:17:03.421453470Z" level=info msg="CreateContainer within sandbox \"d1f0cebbbaa87805d9cc2fed534a6a47d488f620f0669c92610d42a9468d1e59\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:17:03.436892 containerd[1565]: time="2025-08-13T07:17:03.436837719Z" level=info msg="CreateContainer within sandbox \"d1f0cebbbaa87805d9cc2fed534a6a47d488f620f0669c92610d42a9468d1e59\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ee140daa3c7ae89aeb85831dc6a9d39ef98e17ade382154958f83e547316682a\"" Aug 13 07:17:03.437497 containerd[1565]: time="2025-08-13T07:17:03.437454866Z" level=info msg="StartContainer for \"ee140daa3c7ae89aeb85831dc6a9d39ef98e17ade382154958f83e547316682a\"" Aug 13 07:17:03.509194 containerd[1565]: time="2025-08-13T07:17:03.509149181Z" level=info msg="StartContainer for \"ee140daa3c7ae89aeb85831dc6a9d39ef98e17ade382154958f83e547316682a\" returns successfully" Aug 13 07:17:03.566977 containerd[1565]: time="2025-08-13T07:17:03.565360041Z" level=info msg="shim disconnected" id=ee140daa3c7ae89aeb85831dc6a9d39ef98e17ade382154958f83e547316682a namespace=k8s.io Aug 13 07:17:03.566977 containerd[1565]: time="2025-08-13T07:17:03.566972025Z" level=warning msg="cleaning up after shim disconnected" id=ee140daa3c7ae89aeb85831dc6a9d39ef98e17ade382154958f83e547316682a namespace=k8s.io Aug 13 07:17:03.566977 containerd[1565]: time="2025-08-13T07:17:03.566983464Z" level=info msg="cleaning up 
dead shim" namespace=k8s.io Aug 13 07:17:03.678790 kubelet[2651]: E0813 07:17:03.678576 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2k55" podUID="f09470c1-c77d-44b2-8331-61723edd172c" Aug 13 07:17:03.748411 kubelet[2651]: I0813 07:17:03.748312 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:17:03.759017 containerd[1565]: time="2025-08-13T07:17:03.758401836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:17:03.763032 kubelet[2651]: E0813 07:17:03.761396 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:03.768998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee140daa3c7ae89aeb85831dc6a9d39ef98e17ade382154958f83e547316682a-rootfs.mount: Deactivated successfully. 
Aug 13 07:17:05.685841 kubelet[2651]: E0813 07:17:05.685763 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2k55" podUID="f09470c1-c77d-44b2-8331-61723edd172c" Aug 13 07:17:06.461505 containerd[1565]: time="2025-08-13T07:17:06.461434780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:06.462364 containerd[1565]: time="2025-08-13T07:17:06.462283655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:17:06.463588 containerd[1565]: time="2025-08-13T07:17:06.463546704Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:06.465923 containerd[1565]: time="2025-08-13T07:17:06.465896232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:06.466649 containerd[1565]: time="2025-08-13T07:17:06.466608554Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.708150503s" Aug 13 07:17:06.466649 containerd[1565]: time="2025-08-13T07:17:06.466638765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference 
\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:17:06.468668 containerd[1565]: time="2025-08-13T07:17:06.468638488Z" level=info msg="CreateContainer within sandbox \"d1f0cebbbaa87805d9cc2fed534a6a47d488f620f0669c92610d42a9468d1e59\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:17:06.489265 containerd[1565]: time="2025-08-13T07:17:06.489207007Z" level=info msg="CreateContainer within sandbox \"d1f0cebbbaa87805d9cc2fed534a6a47d488f620f0669c92610d42a9468d1e59\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3e15373f8bee96175a2c6d62d9868a22a0fefcf680182ade341f454d96cc497f\"" Aug 13 07:17:06.489984 containerd[1565]: time="2025-08-13T07:17:06.489785632Z" level=info msg="StartContainer for \"3e15373f8bee96175a2c6d62d9868a22a0fefcf680182ade341f454d96cc497f\"" Aug 13 07:17:06.557313 containerd[1565]: time="2025-08-13T07:17:06.557256390Z" level=info msg="StartContainer for \"3e15373f8bee96175a2c6d62d9868a22a0fefcf680182ade341f454d96cc497f\" returns successfully" Aug 13 07:17:07.678858 kubelet[2651]: E0813 07:17:07.678790 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2k55" podUID="f09470c1-c77d-44b2-8331-61723edd172c" Aug 13 07:17:08.352658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e15373f8bee96175a2c6d62d9868a22a0fefcf680182ade341f454d96cc497f-rootfs.mount: Deactivated successfully. 
Aug 13 07:17:08.354003 containerd[1565]: time="2025-08-13T07:17:08.353913805Z" level=info msg="shim disconnected" id=3e15373f8bee96175a2c6d62d9868a22a0fefcf680182ade341f454d96cc497f namespace=k8s.io Aug 13 07:17:08.354460 containerd[1565]: time="2025-08-13T07:17:08.354003410Z" level=warning msg="cleaning up after shim disconnected" id=3e15373f8bee96175a2c6d62d9868a22a0fefcf680182ade341f454d96cc497f namespace=k8s.io Aug 13 07:17:08.354460 containerd[1565]: time="2025-08-13T07:17:08.354012525Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:17:08.404157 kubelet[2651]: I0813 07:17:08.404093 2651 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 07:17:08.453402 kubelet[2651]: I0813 07:17:08.452580 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c3703c35-893a-4eb4-b160-0a5c2f7c54ca-calico-apiserver-certs\") pod \"calico-apiserver-74b999fc99-ng8mj\" (UID: \"c3703c35-893a-4eb4-b160-0a5c2f7c54ca\") " pod="calico-apiserver/calico-apiserver-74b999fc99-ng8mj" Aug 13 07:17:08.453402 kubelet[2651]: I0813 07:17:08.452627 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjjdn\" (UniqueName: \"kubernetes.io/projected/e400ac0b-ae46-4ac2-83f3-c47cd5c10714-kube-api-access-tjjdn\") pod \"calico-kube-controllers-5896fd98dd-kf2hf\" (UID: \"e400ac0b-ae46-4ac2-83f3-c47cd5c10714\") " pod="calico-system/calico-kube-controllers-5896fd98dd-kf2hf" Aug 13 07:17:08.453402 kubelet[2651]: I0813 07:17:08.452647 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d55d2c19-4154-4c8b-a129-b8b3f108e610-config-volume\") pod \"coredns-7c65d6cfc9-jdqsn\" (UID: \"d55d2c19-4154-4c8b-a129-b8b3f108e610\") " pod="kube-system/coredns-7c65d6cfc9-jdqsn" Aug 13 
07:17:08.453402 kubelet[2651]: I0813 07:17:08.452673 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz2s9\" (UniqueName: \"kubernetes.io/projected/d55d2c19-4154-4c8b-a129-b8b3f108e610-kube-api-access-jz2s9\") pod \"coredns-7c65d6cfc9-jdqsn\" (UID: \"d55d2c19-4154-4c8b-a129-b8b3f108e610\") " pod="kube-system/coredns-7c65d6cfc9-jdqsn" Aug 13 07:17:08.453402 kubelet[2651]: I0813 07:17:08.452687 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqpgk\" (UniqueName: \"kubernetes.io/projected/dbdac039-2576-4669-ab05-2a44aa4184c7-kube-api-access-vqpgk\") pod \"calico-apiserver-74b999fc99-cfksv\" (UID: \"dbdac039-2576-4669-ab05-2a44aa4184c7\") " pod="calico-apiserver/calico-apiserver-74b999fc99-cfksv" Aug 13 07:17:08.453661 kubelet[2651]: I0813 07:17:08.452703 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/00def675-358e-43ab-abab-2b7a68814926-whisker-backend-key-pair\") pod \"whisker-6ccf7ff454-9mgkm\" (UID: \"00def675-358e-43ab-abab-2b7a68814926\") " pod="calico-system/whisker-6ccf7ff454-9mgkm" Aug 13 07:17:08.453661 kubelet[2651]: I0813 07:17:08.452728 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgl5\" (UniqueName: \"kubernetes.io/projected/00def675-358e-43ab-abab-2b7a68814926-kube-api-access-dlgl5\") pod \"whisker-6ccf7ff454-9mgkm\" (UID: \"00def675-358e-43ab-abab-2b7a68814926\") " pod="calico-system/whisker-6ccf7ff454-9mgkm" Aug 13 07:17:08.453661 kubelet[2651]: I0813 07:17:08.452748 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a91f2a95-61d2-44d1-8e65-0711a3ca46ef-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-b9j6l\" (UID: 
\"a91f2a95-61d2-44d1-8e65-0711a3ca46ef\") " pod="calico-system/goldmane-58fd7646b9-b9j6l" Aug 13 07:17:08.453661 kubelet[2651]: I0813 07:17:08.452765 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9ghr\" (UniqueName: \"kubernetes.io/projected/c3703c35-893a-4eb4-b160-0a5c2f7c54ca-kube-api-access-b9ghr\") pod \"calico-apiserver-74b999fc99-ng8mj\" (UID: \"c3703c35-893a-4eb4-b160-0a5c2f7c54ca\") " pod="calico-apiserver/calico-apiserver-74b999fc99-ng8mj" Aug 13 07:17:08.453661 kubelet[2651]: I0813 07:17:08.452782 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7klw\" (UniqueName: \"kubernetes.io/projected/a91f2a95-61d2-44d1-8e65-0711a3ca46ef-kube-api-access-l7klw\") pod \"goldmane-58fd7646b9-b9j6l\" (UID: \"a91f2a95-61d2-44d1-8e65-0711a3ca46ef\") " pod="calico-system/goldmane-58fd7646b9-b9j6l" Aug 13 07:17:08.453801 kubelet[2651]: I0813 07:17:08.452799 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndcws\" (UniqueName: \"kubernetes.io/projected/0a0b3cbe-9aa7-400d-968e-cb12067ca892-kube-api-access-ndcws\") pod \"coredns-7c65d6cfc9-7467l\" (UID: \"0a0b3cbe-9aa7-400d-968e-cb12067ca892\") " pod="kube-system/coredns-7c65d6cfc9-7467l" Aug 13 07:17:08.453801 kubelet[2651]: I0813 07:17:08.452826 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e400ac0b-ae46-4ac2-83f3-c47cd5c10714-tigera-ca-bundle\") pod \"calico-kube-controllers-5896fd98dd-kf2hf\" (UID: \"e400ac0b-ae46-4ac2-83f3-c47cd5c10714\") " pod="calico-system/calico-kube-controllers-5896fd98dd-kf2hf" Aug 13 07:17:08.453801 kubelet[2651]: I0813 07:17:08.452844 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/0a0b3cbe-9aa7-400d-968e-cb12067ca892-config-volume\") pod \"coredns-7c65d6cfc9-7467l\" (UID: \"0a0b3cbe-9aa7-400d-968e-cb12067ca892\") " pod="kube-system/coredns-7c65d6cfc9-7467l" Aug 13 07:17:08.453801 kubelet[2651]: I0813 07:17:08.452858 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00def675-358e-43ab-abab-2b7a68814926-whisker-ca-bundle\") pod \"whisker-6ccf7ff454-9mgkm\" (UID: \"00def675-358e-43ab-abab-2b7a68814926\") " pod="calico-system/whisker-6ccf7ff454-9mgkm" Aug 13 07:17:08.453801 kubelet[2651]: I0813 07:17:08.452882 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91f2a95-61d2-44d1-8e65-0711a3ca46ef-config\") pod \"goldmane-58fd7646b9-b9j6l\" (UID: \"a91f2a95-61d2-44d1-8e65-0711a3ca46ef\") " pod="calico-system/goldmane-58fd7646b9-b9j6l" Aug 13 07:17:08.454051 kubelet[2651]: I0813 07:17:08.452900 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a91f2a95-61d2-44d1-8e65-0711a3ca46ef-goldmane-key-pair\") pod \"goldmane-58fd7646b9-b9j6l\" (UID: \"a91f2a95-61d2-44d1-8e65-0711a3ca46ef\") " pod="calico-system/goldmane-58fd7646b9-b9j6l" Aug 13 07:17:08.454051 kubelet[2651]: I0813 07:17:08.452928 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dbdac039-2576-4669-ab05-2a44aa4184c7-calico-apiserver-certs\") pod \"calico-apiserver-74b999fc99-cfksv\" (UID: \"dbdac039-2576-4669-ab05-2a44aa4184c7\") " pod="calico-apiserver/calico-apiserver-74b999fc99-cfksv" Aug 13 07:17:08.739081 kubelet[2651]: E0813 07:17:08.738923 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:08.739908 containerd[1565]: time="2025-08-13T07:17:08.739841080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7467l,Uid:0a0b3cbe-9aa7-400d-968e-cb12067ca892,Namespace:kube-system,Attempt:0,}" Aug 13 07:17:08.741259 containerd[1565]: time="2025-08-13T07:17:08.741235292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-b9j6l,Uid:a91f2a95-61d2-44d1-8e65-0711a3ca46ef,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:08.747629 containerd[1565]: time="2025-08-13T07:17:08.747594176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5896fd98dd-kf2hf,Uid:e400ac0b-ae46-4ac2-83f3-c47cd5c10714,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:08.751105 kubelet[2651]: E0813 07:17:08.751076 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:08.751624 containerd[1565]: time="2025-08-13T07:17:08.751469682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jdqsn,Uid:d55d2c19-4154-4c8b-a129-b8b3f108e610,Namespace:kube-system,Attempt:0,}" Aug 13 07:17:08.753447 containerd[1565]: time="2025-08-13T07:17:08.753417136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b999fc99-ng8mj,Uid:c3703c35-893a-4eb4-b160-0a5c2f7c54ca,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:17:08.754775 containerd[1565]: time="2025-08-13T07:17:08.754728805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ccf7ff454-9mgkm,Uid:00def675-358e-43ab-abab-2b7a68814926,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:08.756411 containerd[1565]: time="2025-08-13T07:17:08.756377967Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-74b999fc99-cfksv,Uid:dbdac039-2576-4669-ab05-2a44aa4184c7,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:17:08.770756 containerd[1565]: time="2025-08-13T07:17:08.770654679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:17:08.996068 containerd[1565]: time="2025-08-13T07:17:08.995821003Z" level=error msg="Failed to destroy network for sandbox \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:08.996672 containerd[1565]: time="2025-08-13T07:17:08.996576644Z" level=error msg="encountered an error cleaning up failed sandbox \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:08.996672 containerd[1565]: time="2025-08-13T07:17:08.996627202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7467l,Uid:0a0b3cbe-9aa7-400d-968e-cb12067ca892,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.011626 kubelet[2651]: E0813 07:17:09.011560 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.011784 kubelet[2651]: E0813 07:17:09.011657 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7467l" Aug 13 07:17:09.011784 kubelet[2651]: E0813 07:17:09.011684 2651 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7467l" Aug 13 07:17:09.011784 kubelet[2651]: E0813 07:17:09.011729 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-7467l_kube-system(0a0b3cbe-9aa7-400d-968e-cb12067ca892)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-7467l_kube-system(0a0b3cbe-9aa7-400d-968e-cb12067ca892)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7467l" podUID="0a0b3cbe-9aa7-400d-968e-cb12067ca892" Aug 13 07:17:09.013307 containerd[1565]: time="2025-08-13T07:17:09.013007638Z" level=error msg="Failed to destroy 
network for sandbox \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.013711 containerd[1565]: time="2025-08-13T07:17:09.013679291Z" level=error msg="encountered an error cleaning up failed sandbox \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.013758 containerd[1565]: time="2025-08-13T07:17:09.013739556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-b9j6l,Uid:a91f2a95-61d2-44d1-8e65-0711a3ca46ef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.013987 kubelet[2651]: E0813 07:17:09.013932 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.013987 kubelet[2651]: E0813 07:17:09.013973 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-b9j6l" Aug 13 07:17:09.013987 kubelet[2651]: E0813 07:17:09.013989 2651 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-b9j6l" Aug 13 07:17:09.014195 kubelet[2651]: E0813 07:17:09.014018 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-b9j6l_calico-system(a91f2a95-61d2-44d1-8e65-0711a3ca46ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-b9j6l_calico-system(a91f2a95-61d2-44d1-8e65-0711a3ca46ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-b9j6l" podUID="a91f2a95-61d2-44d1-8e65-0711a3ca46ef" Aug 13 07:17:09.014277 containerd[1565]: time="2025-08-13T07:17:09.014251803Z" level=error msg="Failed to destroy network for sandbox \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.014844 containerd[1565]: time="2025-08-13T07:17:09.014697184Z" level=error 
msg="encountered an error cleaning up failed sandbox \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.014844 containerd[1565]: time="2025-08-13T07:17:09.014734920Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5896fd98dd-kf2hf,Uid:e400ac0b-ae46-4ac2-83f3-c47cd5c10714,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.014974 kubelet[2651]: E0813 07:17:09.014877 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.014974 kubelet[2651]: E0813 07:17:09.014907 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5896fd98dd-kf2hf" Aug 13 07:17:09.014974 kubelet[2651]: E0813 07:17:09.014921 2651 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5896fd98dd-kf2hf" Aug 13 07:17:09.015195 kubelet[2651]: E0813 07:17:09.014948 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5896fd98dd-kf2hf_calico-system(e400ac0b-ae46-4ac2-83f3-c47cd5c10714)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5896fd98dd-kf2hf_calico-system(e400ac0b-ae46-4ac2-83f3-c47cd5c10714)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5896fd98dd-kf2hf" podUID="e400ac0b-ae46-4ac2-83f3-c47cd5c10714" Aug 13 07:17:09.018951 containerd[1565]: time="2025-08-13T07:17:09.018897870Z" level=error msg="Failed to destroy network for sandbox \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.019892 containerd[1565]: time="2025-08-13T07:17:09.019867821Z" level=error msg="encountered an error cleaning up failed sandbox \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.020432 containerd[1565]: time="2025-08-13T07:17:09.020408497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ccf7ff454-9mgkm,Uid:00def675-358e-43ab-abab-2b7a68814926,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.020917 kubelet[2651]: E0813 07:17:09.020815 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.021005 containerd[1565]: time="2025-08-13T07:17:09.020834074Z" level=error msg="Failed to destroy network for sandbox \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.021111 kubelet[2651]: E0813 07:17:09.021058 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6ccf7ff454-9mgkm" Aug 13 07:17:09.021111 kubelet[2651]: E0813 07:17:09.021092 
2651 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6ccf7ff454-9mgkm" Aug 13 07:17:09.021364 kubelet[2651]: E0813 07:17:09.021145 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6ccf7ff454-9mgkm_calico-system(00def675-358e-43ab-abab-2b7a68814926)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6ccf7ff454-9mgkm_calico-system(00def675-358e-43ab-abab-2b7a68814926)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6ccf7ff454-9mgkm" podUID="00def675-358e-43ab-abab-2b7a68814926" Aug 13 07:17:09.021511 containerd[1565]: time="2025-08-13T07:17:09.021190482Z" level=error msg="encountered an error cleaning up failed sandbox \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.021511 containerd[1565]: time="2025-08-13T07:17:09.021224561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b999fc99-ng8mj,Uid:c3703c35-893a-4eb4-b160-0a5c2f7c54ca,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.022201 kubelet[2651]: E0813 07:17:09.021448 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.022201 kubelet[2651]: E0813 07:17:09.021486 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b999fc99-ng8mj" Aug 13 07:17:09.022201 kubelet[2651]: E0813 07:17:09.021503 2651 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b999fc99-ng8mj" Aug 13 07:17:09.022315 kubelet[2651]: E0813 07:17:09.021532 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b999fc99-ng8mj_calico-apiserver(c3703c35-893a-4eb4-b160-0a5c2f7c54ca)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-74b999fc99-ng8mj_calico-apiserver(c3703c35-893a-4eb4-b160-0a5c2f7c54ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b999fc99-ng8mj" podUID="c3703c35-893a-4eb4-b160-0a5c2f7c54ca" Aug 13 07:17:09.029175 containerd[1565]: time="2025-08-13T07:17:09.029124082Z" level=error msg="Failed to destroy network for sandbox \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.029731 containerd[1565]: time="2025-08-13T07:17:09.029698477Z" level=error msg="encountered an error cleaning up failed sandbox \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.029855 containerd[1565]: time="2025-08-13T07:17:09.029752281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jdqsn,Uid:d55d2c19-4154-4c8b-a129-b8b3f108e610,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.030061 kubelet[2651]: E0813 07:17:09.030014 2651 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.030139 kubelet[2651]: E0813 07:17:09.030075 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-jdqsn" Aug 13 07:17:09.030139 kubelet[2651]: E0813 07:17:09.030097 2651 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-jdqsn" Aug 13 07:17:09.030201 kubelet[2651]: E0813 07:17:09.030144 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-jdqsn_kube-system(d55d2c19-4154-4c8b-a129-b8b3f108e610)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-jdqsn_kube-system(d55d2c19-4154-4c8b-a129-b8b3f108e610)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-jdqsn" podUID="d55d2c19-4154-4c8b-a129-b8b3f108e610" Aug 13 07:17:09.031480 containerd[1565]: time="2025-08-13T07:17:09.031407387Z" level=error msg="Failed to destroy network for sandbox \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.031902 containerd[1565]: time="2025-08-13T07:17:09.031868616Z" level=error msg="encountered an error cleaning up failed sandbox \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.031976 containerd[1565]: time="2025-08-13T07:17:09.031919725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b999fc99-cfksv,Uid:dbdac039-2576-4669-ab05-2a44aa4184c7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.032156 kubelet[2651]: E0813 07:17:09.032114 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 
07:17:09.032196 kubelet[2651]: E0813 07:17:09.032185 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b999fc99-cfksv" Aug 13 07:17:09.032245 kubelet[2651]: E0813 07:17:09.032207 2651 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b999fc99-cfksv" Aug 13 07:17:09.032307 kubelet[2651]: E0813 07:17:09.032279 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b999fc99-cfksv_calico-apiserver(dbdac039-2576-4669-ab05-2a44aa4184c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b999fc99-cfksv_calico-apiserver(dbdac039-2576-4669-ab05-2a44aa4184c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b999fc99-cfksv" podUID="dbdac039-2576-4669-ab05-2a44aa4184c7" Aug 13 07:17:09.356758 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c-shm.mount: Deactivated successfully. Aug 13 07:17:09.681711 containerd[1565]: time="2025-08-13T07:17:09.681545413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2k55,Uid:f09470c1-c77d-44b2-8331-61723edd172c,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:09.747195 containerd[1565]: time="2025-08-13T07:17:09.747113422Z" level=error msg="Failed to destroy network for sandbox \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.748816 containerd[1565]: time="2025-08-13T07:17:09.748753583Z" level=error msg="encountered an error cleaning up failed sandbox \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.748816 containerd[1565]: time="2025-08-13T07:17:09.748830026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2k55,Uid:f09470c1-c77d-44b2-8331-61723edd172c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.749139 kubelet[2651]: E0813 07:17:09.749077 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.749747 kubelet[2651]: E0813 07:17:09.749145 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s2k55" Aug 13 07:17:09.749747 kubelet[2651]: E0813 07:17:09.749174 2651 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s2k55" Aug 13 07:17:09.749747 kubelet[2651]: E0813 07:17:09.749217 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s2k55_calico-system(f09470c1-c77d-44b2-8331-61723edd172c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s2k55_calico-system(f09470c1-c77d-44b2-8331-61723edd172c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s2k55" 
podUID="f09470c1-c77d-44b2-8331-61723edd172c" Aug 13 07:17:09.750174 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8-shm.mount: Deactivated successfully. Aug 13 07:17:09.772283 kubelet[2651]: I0813 07:17:09.772240 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Aug 13 07:17:09.773322 kubelet[2651]: I0813 07:17:09.773287 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Aug 13 07:17:09.777831 kubelet[2651]: I0813 07:17:09.776542 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Aug 13 07:17:09.778073 kubelet[2651]: I0813 07:17:09.778030 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Aug 13 07:17:09.802651 containerd[1565]: time="2025-08-13T07:17:09.802394833Z" level=info msg="StopPodSandbox for \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\"" Aug 13 07:17:09.802825 containerd[1565]: time="2025-08-13T07:17:09.802739470Z" level=info msg="StopPodSandbox for \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\"" Aug 13 07:17:09.803569 containerd[1565]: time="2025-08-13T07:17:09.803498916Z" level=info msg="StopPodSandbox for \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\"" Aug 13 07:17:09.804510 containerd[1565]: time="2025-08-13T07:17:09.804453198Z" level=info msg="StopPodSandbox for \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\"" Aug 13 07:17:09.805663 containerd[1565]: time="2025-08-13T07:17:09.805574471Z" level=info msg="Ensure that sandbox 
a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52 in task-service has been cleanup successfully" Aug 13 07:17:09.805663 containerd[1565]: time="2025-08-13T07:17:09.805595347Z" level=info msg="Ensure that sandbox 09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a in task-service has been cleanup successfully" Aug 13 07:17:09.805910 containerd[1565]: time="2025-08-13T07:17:09.805585590Z" level=info msg="Ensure that sandbox 29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc in task-service has been cleanup successfully" Aug 13 07:17:09.806211 containerd[1565]: time="2025-08-13T07:17:09.805587713Z" level=info msg="Ensure that sandbox b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c in task-service has been cleanup successfully" Aug 13 07:17:09.807449 kubelet[2651]: I0813 07:17:09.807421 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Aug 13 07:17:09.812369 containerd[1565]: time="2025-08-13T07:17:09.812266772Z" level=info msg="StopPodSandbox for \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\"" Aug 13 07:17:09.814122 kubelet[2651]: I0813 07:17:09.813718 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Aug 13 07:17:09.817070 containerd[1565]: time="2025-08-13T07:17:09.817018223Z" level=info msg="StopPodSandbox for \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\"" Aug 13 07:17:09.817943 containerd[1565]: time="2025-08-13T07:17:09.817227685Z" level=info msg="Ensure that sandbox 5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8 in task-service has been cleanup successfully" Aug 13 07:17:09.819703 containerd[1565]: time="2025-08-13T07:17:09.819644561Z" level=info msg="Ensure that sandbox bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766 
in task-service has been cleanup successfully" Aug 13 07:17:09.821599 kubelet[2651]: I0813 07:17:09.821334 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Aug 13 07:17:09.822282 containerd[1565]: time="2025-08-13T07:17:09.822236240Z" level=info msg="StopPodSandbox for \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\"" Aug 13 07:17:09.822450 containerd[1565]: time="2025-08-13T07:17:09.822424746Z" level=info msg="Ensure that sandbox 5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3 in task-service has been cleanup successfully" Aug 13 07:17:09.824401 kubelet[2651]: I0813 07:17:09.823827 2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Aug 13 07:17:09.824798 containerd[1565]: time="2025-08-13T07:17:09.824489553Z" level=info msg="StopPodSandbox for \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\"" Aug 13 07:17:09.824798 containerd[1565]: time="2025-08-13T07:17:09.824632160Z" level=info msg="Ensure that sandbox 900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f in task-service has been cleanup successfully" Aug 13 07:17:09.860075 containerd[1565]: time="2025-08-13T07:17:09.860033167Z" level=error msg="StopPodSandbox for \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\" failed" error="failed to destroy network for sandbox \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.864190 containerd[1565]: time="2025-08-13T07:17:09.864017087Z" level=error msg="StopPodSandbox for \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\" failed" error="failed to 
destroy network for sandbox \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.866131 kubelet[2651]: E0813 07:17:09.866086 2651 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Aug 13 07:17:09.866234 kubelet[2651]: E0813 07:17:09.866148 2651 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8"} Aug 13 07:17:09.866234 kubelet[2651]: E0813 07:17:09.866209 2651 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f09470c1-c77d-44b2-8331-61723edd172c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:09.866318 kubelet[2651]: E0813 07:17:09.866232 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f09470c1-c77d-44b2-8331-61723edd172c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s2k55" podUID="f09470c1-c77d-44b2-8331-61723edd172c" Aug 13 07:17:09.866318 kubelet[2651]: E0813 07:17:09.866258 2651 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Aug 13 07:17:09.866318 kubelet[2651]: E0813 07:17:09.866272 2651 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c"} Aug 13 07:17:09.866318 kubelet[2651]: E0813 07:17:09.866288 2651 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a0b3cbe-9aa7-400d-968e-cb12067ca892\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:09.866567 kubelet[2651]: E0813 07:17:09.866303 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a0b3cbe-9aa7-400d-968e-cb12067ca892\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7467l" podUID="0a0b3cbe-9aa7-400d-968e-cb12067ca892" Aug 13 07:17:09.875191 containerd[1565]: time="2025-08-13T07:17:09.875135475Z" level=error msg="StopPodSandbox for \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\" failed" error="failed to destroy network for sandbox \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.875712 kubelet[2651]: E0813 07:17:09.875439 2651 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Aug 13 07:17:09.875712 kubelet[2651]: E0813 07:17:09.875517 2651 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a"} Aug 13 07:17:09.875712 kubelet[2651]: E0813 07:17:09.875621 2651 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a91f2a95-61d2-44d1-8e65-0711a3ca46ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Aug 13 07:17:09.875712 kubelet[2651]: E0813 07:17:09.875649 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a91f2a95-61d2-44d1-8e65-0711a3ca46ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-b9j6l" podUID="a91f2a95-61d2-44d1-8e65-0711a3ca46ef" Aug 13 07:17:09.876629 containerd[1565]: time="2025-08-13T07:17:09.876564961Z" level=error msg="StopPodSandbox for \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\" failed" error="failed to destroy network for sandbox \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.877091 kubelet[2651]: E0813 07:17:09.877046 2651 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Aug 13 07:17:09.877229 kubelet[2651]: E0813 07:17:09.877115 2651 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc"} Aug 13 07:17:09.877229 kubelet[2651]: E0813 07:17:09.877176 2651 kuberuntime_manager.go:1079] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c3703c35-893a-4eb4-b160-0a5c2f7c54ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:09.877229 kubelet[2651]: E0813 07:17:09.877200 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3703c35-893a-4eb4-b160-0a5c2f7c54ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b999fc99-ng8mj" podUID="c3703c35-893a-4eb4-b160-0a5c2f7c54ca" Aug 13 07:17:09.896920 containerd[1565]: time="2025-08-13T07:17:09.896849787Z" level=error msg="StopPodSandbox for \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\" failed" error="failed to destroy network for sandbox \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.897405 kubelet[2651]: E0813 07:17:09.897357 2651 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Aug 13 07:17:09.897527 kubelet[2651]: E0813 07:17:09.897414 2651 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52"} Aug 13 07:17:09.897527 kubelet[2651]: E0813 07:17:09.897455 2651 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e400ac0b-ae46-4ac2-83f3-c47cd5c10714\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:09.897527 kubelet[2651]: E0813 07:17:09.897485 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e400ac0b-ae46-4ac2-83f3-c47cd5c10714\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5896fd98dd-kf2hf" podUID="e400ac0b-ae46-4ac2-83f3-c47cd5c10714" Aug 13 07:17:09.897755 containerd[1565]: time="2025-08-13T07:17:09.897511332Z" level=error msg="StopPodSandbox for \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\" failed" error="failed to destroy network for sandbox \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.897977 kubelet[2651]: E0813 07:17:09.897945 2651 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Aug 13 07:17:09.898025 kubelet[2651]: E0813 07:17:09.897978 2651 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766"} Aug 13 07:17:09.898025 kubelet[2651]: E0813 07:17:09.897997 2651 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00def675-358e-43ab-abab-2b7a68814926\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:09.898025 kubelet[2651]: E0813 07:17:09.898015 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00def675-358e-43ab-abab-2b7a68814926\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-6ccf7ff454-9mgkm" podUID="00def675-358e-43ab-abab-2b7a68814926" Aug 13 07:17:09.899744 containerd[1565]: time="2025-08-13T07:17:09.899690557Z" level=error msg="StopPodSandbox for \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\" failed" error="failed to destroy network for sandbox \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.899931 kubelet[2651]: E0813 07:17:09.899890 2651 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Aug 13 07:17:09.899980 kubelet[2651]: E0813 07:17:09.899950 2651 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f"} Aug 13 07:17:09.900016 kubelet[2651]: E0813 07:17:09.899994 2651 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d55d2c19-4154-4c8b-a129-b8b3f108e610\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:09.900090 kubelet[2651]: E0813 07:17:09.900021 2651 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"d55d2c19-4154-4c8b-a129-b8b3f108e610\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-jdqsn" podUID="d55d2c19-4154-4c8b-a129-b8b3f108e610" Aug 13 07:17:09.905867 containerd[1565]: time="2025-08-13T07:17:09.905832585Z" level=error msg="StopPodSandbox for \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\" failed" error="failed to destroy network for sandbox \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:09.906138 kubelet[2651]: E0813 07:17:09.906082 2651 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Aug 13 07:17:09.906186 kubelet[2651]: E0813 07:17:09.906150 2651 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3"} Aug 13 07:17:09.906218 kubelet[2651]: E0813 07:17:09.906183 2651 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbdac039-2576-4669-ab05-2a44aa4184c7\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:09.906218 kubelet[2651]: E0813 07:17:09.906206 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dbdac039-2576-4669-ab05-2a44aa4184c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b999fc99-cfksv" podUID="dbdac039-2576-4669-ab05-2a44aa4184c7" Aug 13 07:17:12.887313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount615773272.mount: Deactivated successfully. 
Aug 13 07:17:13.941045 kubelet[2651]: I0813 07:17:13.940986 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:17:13.941610 kubelet[2651]: E0813 07:17:13.941408 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:13.950876 containerd[1565]: time="2025-08-13T07:17:13.950818341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:13.952047 containerd[1565]: time="2025-08-13T07:17:13.951995097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:17:13.954740 containerd[1565]: time="2025-08-13T07:17:13.954691065Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:13.959181 containerd[1565]: time="2025-08-13T07:17:13.958413623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:13.959181 containerd[1565]: time="2025-08-13T07:17:13.959094084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 5.188398883s" Aug 13 07:17:13.959181 containerd[1565]: time="2025-08-13T07:17:13.959142439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference 
\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:17:13.971475 containerd[1565]: time="2025-08-13T07:17:13.971426775Z" level=info msg="CreateContainer within sandbox \"d1f0cebbbaa87805d9cc2fed534a6a47d488f620f0669c92610d42a9468d1e59\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:17:14.013616 containerd[1565]: time="2025-08-13T07:17:14.013109776Z" level=info msg="CreateContainer within sandbox \"d1f0cebbbaa87805d9cc2fed534a6a47d488f620f0669c92610d42a9468d1e59\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b213931673e656a7074f245b7c1f8664137801f0ff9ed5cc21bf704a8dadc803\"" Aug 13 07:17:14.016970 containerd[1565]: time="2025-08-13T07:17:14.016830968Z" level=info msg="StartContainer for \"b213931673e656a7074f245b7c1f8664137801f0ff9ed5cc21bf704a8dadc803\"" Aug 13 07:17:14.117144 containerd[1565]: time="2025-08-13T07:17:14.117080700Z" level=info msg="StartContainer for \"b213931673e656a7074f245b7c1f8664137801f0ff9ed5cc21bf704a8dadc803\" returns successfully" Aug 13 07:17:14.212327 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:17:14.212539 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 07:17:14.311667 containerd[1565]: time="2025-08-13T07:17:14.311559790Z" level=info msg="StopPodSandbox for \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\"" Aug 13 07:17:14.471414 containerd[1565]: 2025-08-13 07:17:14.379 [INFO][3956] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Aug 13 07:17:14.471414 containerd[1565]: 2025-08-13 07:17:14.379 [INFO][3956] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" iface="eth0" netns="/var/run/netns/cni-2332d9a7-bfbe-a9b4-cc4d-b090e293b94f" Aug 13 07:17:14.471414 containerd[1565]: 2025-08-13 07:17:14.379 [INFO][3956] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" iface="eth0" netns="/var/run/netns/cni-2332d9a7-bfbe-a9b4-cc4d-b090e293b94f" Aug 13 07:17:14.471414 containerd[1565]: 2025-08-13 07:17:14.380 [INFO][3956] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" iface="eth0" netns="/var/run/netns/cni-2332d9a7-bfbe-a9b4-cc4d-b090e293b94f" Aug 13 07:17:14.471414 containerd[1565]: 2025-08-13 07:17:14.380 [INFO][3956] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Aug 13 07:17:14.471414 containerd[1565]: 2025-08-13 07:17:14.380 [INFO][3956] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Aug 13 07:17:14.471414 containerd[1565]: 2025-08-13 07:17:14.448 [INFO][3968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" HandleID="k8s-pod-network.bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Workload="localhost-k8s-whisker--6ccf7ff454--9mgkm-eth0" Aug 13 07:17:14.471414 containerd[1565]: 2025-08-13 07:17:14.449 [INFO][3968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:14.471414 containerd[1565]: 2025-08-13 07:17:14.449 [INFO][3968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:14.471414 containerd[1565]: 2025-08-13 07:17:14.456 [WARNING][3968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" HandleID="k8s-pod-network.bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Workload="localhost-k8s-whisker--6ccf7ff454--9mgkm-eth0" Aug 13 07:17:14.471414 containerd[1565]: 2025-08-13 07:17:14.457 [INFO][3968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" HandleID="k8s-pod-network.bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Workload="localhost-k8s-whisker--6ccf7ff454--9mgkm-eth0" Aug 13 07:17:14.471414 containerd[1565]: 2025-08-13 07:17:14.462 [INFO][3968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:14.471414 containerd[1565]: 2025-08-13 07:17:14.467 [INFO][3956] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Aug 13 07:17:14.472044 containerd[1565]: time="2025-08-13T07:17:14.471590137Z" level=info msg="TearDown network for sandbox \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\" successfully" Aug 13 07:17:14.472044 containerd[1565]: time="2025-08-13T07:17:14.471634827Z" level=info msg="StopPodSandbox for \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\" returns successfully" Aug 13 07:17:14.496809 kubelet[2651]: I0813 07:17:14.496732 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlgl5\" (UniqueName: \"kubernetes.io/projected/00def675-358e-43ab-abab-2b7a68814926-kube-api-access-dlgl5\") pod \"00def675-358e-43ab-abab-2b7a68814926\" (UID: \"00def675-358e-43ab-abab-2b7a68814926\") " Aug 13 07:17:14.496809 kubelet[2651]: I0813 07:17:14.496784 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00def675-358e-43ab-abab-2b7a68814926-whisker-ca-bundle\") pod 
\"00def675-358e-43ab-abab-2b7a68814926\" (UID: \"00def675-358e-43ab-abab-2b7a68814926\") " Aug 13 07:17:14.496809 kubelet[2651]: I0813 07:17:14.496807 2651 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/00def675-358e-43ab-abab-2b7a68814926-whisker-backend-key-pair\") pod \"00def675-358e-43ab-abab-2b7a68814926\" (UID: \"00def675-358e-43ab-abab-2b7a68814926\") " Aug 13 07:17:14.497586 kubelet[2651]: I0813 07:17:14.497538 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00def675-358e-43ab-abab-2b7a68814926-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "00def675-358e-43ab-abab-2b7a68814926" (UID: "00def675-358e-43ab-abab-2b7a68814926"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 07:17:14.503810 kubelet[2651]: I0813 07:17:14.503646 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00def675-358e-43ab-abab-2b7a68814926-kube-api-access-dlgl5" (OuterVolumeSpecName: "kube-api-access-dlgl5") pod "00def675-358e-43ab-abab-2b7a68814926" (UID: "00def675-358e-43ab-abab-2b7a68814926"). InnerVolumeSpecName "kube-api-access-dlgl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 07:17:14.503810 kubelet[2651]: I0813 07:17:14.503751 2651 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00def675-358e-43ab-abab-2b7a68814926-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "00def675-358e-43ab-abab-2b7a68814926" (UID: "00def675-358e-43ab-abab-2b7a68814926"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 07:17:14.597323 kubelet[2651]: I0813 07:17:14.597127 2651 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00def675-358e-43ab-abab-2b7a68814926-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 13 07:17:14.597323 kubelet[2651]: I0813 07:17:14.597175 2651 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/00def675-358e-43ab-abab-2b7a68814926-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 13 07:17:14.597323 kubelet[2651]: I0813 07:17:14.597184 2651 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlgl5\" (UniqueName: \"kubernetes.io/projected/00def675-358e-43ab-abab-2b7a68814926-kube-api-access-dlgl5\") on node \"localhost\" DevicePath \"\"" Aug 13 07:17:14.835827 kubelet[2651]: E0813 07:17:14.835778 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:14.970149 systemd[1]: run-netns-cni\x2d2332d9a7\x2dbfbe\x2da9b4\x2dcc4d\x2db090e293b94f.mount: Deactivated successfully. Aug 13 07:17:14.970385 systemd[1]: var-lib-kubelet-pods-00def675\x2d358e\x2d43ab\x2dabab\x2d2b7a68814926-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddlgl5.mount: Deactivated successfully. Aug 13 07:17:14.970549 systemd[1]: var-lib-kubelet-pods-00def675\x2d358e\x2d43ab\x2dabab\x2d2b7a68814926-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 13 07:17:15.111431 kubelet[2651]: I0813 07:17:15.111363 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-46t4b" podStartSLOduration=1.9392255980000002 podStartE2EDuration="16.11132102s" podCreationTimestamp="2025-08-13 07:16:59 +0000 UTC" firstStartedPulling="2025-08-13 07:16:59.788106437 +0000 UTC m=+17.192968824" lastFinishedPulling="2025-08-13 07:17:13.960201859 +0000 UTC m=+31.365064246" observedRunningTime="2025-08-13 07:17:15.110974173 +0000 UTC m=+32.515836580" watchObservedRunningTime="2025-08-13 07:17:15.11132102 +0000 UTC m=+32.516183407" Aug 13 07:17:15.303062 kubelet[2651]: I0813 07:17:15.302988 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdm4m\" (UniqueName: \"kubernetes.io/projected/5cc8af9e-d302-4106-886a-fe00c5d2ed2c-kube-api-access-xdm4m\") pod \"whisker-59464c7c6b-c7vsd\" (UID: \"5cc8af9e-d302-4106-886a-fe00c5d2ed2c\") " pod="calico-system/whisker-59464c7c6b-c7vsd" Aug 13 07:17:15.303062 kubelet[2651]: I0813 07:17:15.303053 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cc8af9e-d302-4106-886a-fe00c5d2ed2c-whisker-ca-bundle\") pod \"whisker-59464c7c6b-c7vsd\" (UID: \"5cc8af9e-d302-4106-886a-fe00c5d2ed2c\") " pod="calico-system/whisker-59464c7c6b-c7vsd" Aug 13 07:17:15.303062 kubelet[2651]: I0813 07:17:15.303075 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5cc8af9e-d302-4106-886a-fe00c5d2ed2c-whisker-backend-key-pair\") pod \"whisker-59464c7c6b-c7vsd\" (UID: \"5cc8af9e-d302-4106-886a-fe00c5d2ed2c\") " pod="calico-system/whisker-59464c7c6b-c7vsd" Aug 13 07:17:15.576301 containerd[1565]: time="2025-08-13T07:17:15.576153051Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-59464c7c6b-c7vsd,Uid:5cc8af9e-d302-4106-886a-fe00c5d2ed2c,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:15.789377 kernel: bpftool[4127]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:17:16.177929 systemd-networkd[1251]: vxlan.calico: Link UP Aug 13 07:17:16.177939 systemd-networkd[1251]: vxlan.calico: Gained carrier Aug 13 07:17:16.395557 systemd-networkd[1251]: calie99f76aae3b: Link UP Aug 13 07:17:16.396266 systemd-networkd[1251]: calie99f76aae3b: Gained carrier Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.328 [INFO][4209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--59464c7c6b--c7vsd-eth0 whisker-59464c7c6b- calico-system 5cc8af9e-d302-4106-886a-fe00c5d2ed2c 937 0 2025-08-13 07:17:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59464c7c6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-59464c7c6b-c7vsd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie99f76aae3b [] [] }} ContainerID="2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" Namespace="calico-system" Pod="whisker-59464c7c6b-c7vsd" WorkloadEndpoint="localhost-k8s-whisker--59464c7c6b--c7vsd-" Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.328 [INFO][4209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" Namespace="calico-system" Pod="whisker-59464c7c6b-c7vsd" WorkloadEndpoint="localhost-k8s-whisker--59464c7c6b--c7vsd-eth0" Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.355 [INFO][4224] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" 
HandleID="k8s-pod-network.2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" Workload="localhost-k8s-whisker--59464c7c6b--c7vsd-eth0" Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.355 [INFO][4224] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" HandleID="k8s-pod-network.2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" Workload="localhost-k8s-whisker--59464c7c6b--c7vsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004edf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-59464c7c6b-c7vsd", "timestamp":"2025-08-13 07:17:16.355433453 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.355 [INFO][4224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.355 [INFO][4224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.355 [INFO][4224] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.362 [INFO][4224] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" host="localhost" Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.368 [INFO][4224] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.374 [INFO][4224] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.376 [INFO][4224] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.378 [INFO][4224] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.378 [INFO][4224] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" host="localhost" Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.379 [INFO][4224] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.384 [INFO][4224] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" host="localhost" Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.389 [INFO][4224] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" host="localhost" Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.389 [INFO][4224] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" host="localhost" Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.389 [INFO][4224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:16.414654 containerd[1565]: 2025-08-13 07:17:16.389 [INFO][4224] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" HandleID="k8s-pod-network.2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" Workload="localhost-k8s-whisker--59464c7c6b--c7vsd-eth0" Aug 13 07:17:16.415486 containerd[1565]: 2025-08-13 07:17:16.392 [INFO][4209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" Namespace="calico-system" Pod="whisker-59464c7c6b-c7vsd" WorkloadEndpoint="localhost-k8s-whisker--59464c7c6b--c7vsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59464c7c6b--c7vsd-eth0", GenerateName:"whisker-59464c7c6b-", Namespace:"calico-system", SelfLink:"", UID:"5cc8af9e-d302-4106-886a-fe00c5d2ed2c", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59464c7c6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-59464c7c6b-c7vsd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie99f76aae3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:16.415486 containerd[1565]: 2025-08-13 07:17:16.392 [INFO][4209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" Namespace="calico-system" Pod="whisker-59464c7c6b-c7vsd" WorkloadEndpoint="localhost-k8s-whisker--59464c7c6b--c7vsd-eth0" Aug 13 07:17:16.415486 containerd[1565]: 2025-08-13 07:17:16.392 [INFO][4209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie99f76aae3b ContainerID="2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" Namespace="calico-system" Pod="whisker-59464c7c6b-c7vsd" WorkloadEndpoint="localhost-k8s-whisker--59464c7c6b--c7vsd-eth0" Aug 13 07:17:16.415486 containerd[1565]: 2025-08-13 07:17:16.396 [INFO][4209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" Namespace="calico-system" Pod="whisker-59464c7c6b-c7vsd" WorkloadEndpoint="localhost-k8s-whisker--59464c7c6b--c7vsd-eth0" Aug 13 07:17:16.415486 containerd[1565]: 2025-08-13 07:17:16.397 [INFO][4209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" Namespace="calico-system" Pod="whisker-59464c7c6b-c7vsd" 
WorkloadEndpoint="localhost-k8s-whisker--59464c7c6b--c7vsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59464c7c6b--c7vsd-eth0", GenerateName:"whisker-59464c7c6b-", Namespace:"calico-system", SelfLink:"", UID:"5cc8af9e-d302-4106-886a-fe00c5d2ed2c", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59464c7c6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed", Pod:"whisker-59464c7c6b-c7vsd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie99f76aae3b", MAC:"c6:04:e1:ec:3c:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:16.415486 containerd[1565]: 2025-08-13 07:17:16.409 [INFO][4209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed" Namespace="calico-system" Pod="whisker-59464c7c6b-c7vsd" WorkloadEndpoint="localhost-k8s-whisker--59464c7c6b--c7vsd-eth0" Aug 13 07:17:16.442504 containerd[1565]: time="2025-08-13T07:17:16.442126375Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:16.442504 containerd[1565]: time="2025-08-13T07:17:16.442235209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:16.443943 containerd[1565]: time="2025-08-13T07:17:16.442404081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:16.444387 containerd[1565]: time="2025-08-13T07:17:16.444102690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:16.490193 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:17:16.528133 containerd[1565]: time="2025-08-13T07:17:16.528077864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59464c7c6b-c7vsd,Uid:5cc8af9e-d302-4106-886a-fe00c5d2ed2c,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed\"" Aug 13 07:17:16.529900 containerd[1565]: time="2025-08-13T07:17:16.529873317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:17:16.681887 kubelet[2651]: I0813 07:17:16.681814 2651 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00def675-358e-43ab-abab-2b7a68814926" path="/var/lib/kubelet/pods/00def675-358e-43ab-abab-2b7a68814926/volumes" Aug 13 07:17:17.390562 systemd-networkd[1251]: vxlan.calico: Gained IPv6LL Aug 13 07:17:17.902639 systemd-networkd[1251]: calie99f76aae3b: Gained IPv6LL Aug 13 07:17:18.147040 containerd[1565]: time="2025-08-13T07:17:18.146981151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:18.148835 
containerd[1565]: time="2025-08-13T07:17:18.148801317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:17:18.150214 containerd[1565]: time="2025-08-13T07:17:18.150183469Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:18.152451 containerd[1565]: time="2025-08-13T07:17:18.152405496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:18.153310 containerd[1565]: time="2025-08-13T07:17:18.153216131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.623310546s" Aug 13 07:17:18.153310 containerd[1565]: time="2025-08-13T07:17:18.153262946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:17:18.156571 containerd[1565]: time="2025-08-13T07:17:18.156509783Z" level=info msg="CreateContainer within sandbox \"2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:17:18.170739 containerd[1565]: time="2025-08-13T07:17:18.170706926Z" level=info msg="CreateContainer within sandbox \"2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c0322e796378803796edf966b03d6e9f08b4343dc829154c5947a74deb104501\""
Aug 13 07:17:18.171150 containerd[1565]: time="2025-08-13T07:17:18.171115569Z" level=info msg="StartContainer for \"c0322e796378803796edf966b03d6e9f08b4343dc829154c5947a74deb104501\"" Aug 13 07:17:18.265115 containerd[1565]: time="2025-08-13T07:17:18.265053156Z" level=info msg="StartContainer for \"c0322e796378803796edf966b03d6e9f08b4343dc829154c5947a74deb104501\" returns successfully" Aug 13 07:17:18.267194 containerd[1565]: time="2025-08-13T07:17:18.267147937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:17:18.442665 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:42174.service - OpenSSH per-connection server daemon (10.0.0.1:42174). Aug 13 07:17:18.480076 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 42174 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:18.482009 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:18.486702 systemd-logind[1548]: New session 8 of user core. Aug 13 07:17:18.494650 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:17:18.630686 sshd[4367]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:18.634621 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:42174.service: Deactivated successfully. Aug 13 07:17:18.637161 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:17:18.637680 systemd-logind[1548]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:17:18.638871 systemd-logind[1548]: Removed session 8. Aug 13 07:17:20.110185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359546251.mount: Deactivated successfully.
Aug 13 07:17:20.165512 containerd[1565]: time="2025-08-13T07:17:20.165454146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:20.166134 containerd[1565]: time="2025-08-13T07:17:20.166063185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:17:20.167209 containerd[1565]: time="2025-08-13T07:17:20.167171204Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:20.169419 containerd[1565]: time="2025-08-13T07:17:20.169372327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:20.170006 containerd[1565]: time="2025-08-13T07:17:20.169970496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.902786022s" Aug 13 07:17:20.170059 containerd[1565]: time="2025-08-13T07:17:20.170010969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:17:20.172623 containerd[1565]: time="2025-08-13T07:17:20.172591765Z" level=info msg="CreateContainer within sandbox \"2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:17:20.371705 
containerd[1565]: time="2025-08-13T07:17:20.371530300Z" level=info msg="CreateContainer within sandbox \"2c214e94208971830013651ef85b8949ef4163700f3a522d0a71c610c15699ed\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"cee1e808ea2c56e93a2fcefb502f61be9bca212ae4c5bef0e8d0d6edb7a54eba\"" Aug 13 07:17:20.372458 containerd[1565]: time="2025-08-13T07:17:20.372119884Z" level=info msg="StartContainer for \"cee1e808ea2c56e93a2fcefb502f61be9bca212ae4c5bef0e8d0d6edb7a54eba\"" Aug 13 07:17:20.437765 containerd[1565]: time="2025-08-13T07:17:20.437718129Z" level=info msg="StartContainer for \"cee1e808ea2c56e93a2fcefb502f61be9bca212ae4c5bef0e8d0d6edb7a54eba\" returns successfully" Aug 13 07:17:20.679349 containerd[1565]: time="2025-08-13T07:17:20.679157328Z" level=info msg="StopPodSandbox for \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\"" Aug 13 07:17:20.679474 containerd[1565]: time="2025-08-13T07:17:20.679430911Z" level=info msg="StopPodSandbox for \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\"" Aug 13 07:17:20.680391 containerd[1565]: time="2025-08-13T07:17:20.679567128Z" level=info msg="StopPodSandbox for \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\"" Aug 13 07:17:20.680391 containerd[1565]: time="2025-08-13T07:17:20.679995610Z" level=info msg="StopPodSandbox for \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\"" Aug 13 07:17:20.796211 containerd[1565]: 2025-08-13 07:17:20.742 [INFO][4472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Aug 13 07:17:20.796211 containerd[1565]: 2025-08-13 07:17:20.742 [INFO][4472] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" iface="eth0" netns="/var/run/netns/cni-10d996b2-84e2-b8a9-351f-75a417533c5f" Aug 13 07:17:20.796211 containerd[1565]: 2025-08-13 07:17:20.743 [INFO][4472] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" iface="eth0" netns="/var/run/netns/cni-10d996b2-84e2-b8a9-351f-75a417533c5f" Aug 13 07:17:20.796211 containerd[1565]: 2025-08-13 07:17:20.743 [INFO][4472] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" iface="eth0" netns="/var/run/netns/cni-10d996b2-84e2-b8a9-351f-75a417533c5f" Aug 13 07:17:20.796211 containerd[1565]: 2025-08-13 07:17:20.743 [INFO][4472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Aug 13 07:17:20.796211 containerd[1565]: 2025-08-13 07:17:20.743 [INFO][4472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Aug 13 07:17:20.796211 containerd[1565]: 2025-08-13 07:17:20.782 [INFO][4506] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" HandleID="k8s-pod-network.09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Workload="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:20.796211 containerd[1565]: 2025-08-13 07:17:20.782 [INFO][4506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:20.796211 containerd[1565]: 2025-08-13 07:17:20.782 [INFO][4506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:20.796211 containerd[1565]: 2025-08-13 07:17:20.788 [WARNING][4506] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" HandleID="k8s-pod-network.09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Workload="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:20.796211 containerd[1565]: 2025-08-13 07:17:20.788 [INFO][4506] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" HandleID="k8s-pod-network.09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Workload="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:20.796211 containerd[1565]: 2025-08-13 07:17:20.790 [INFO][4506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:20.796211 containerd[1565]: 2025-08-13 07:17:20.793 [INFO][4472] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Aug 13 07:17:20.800375 containerd[1565]: time="2025-08-13T07:17:20.799799923Z" level=info msg="TearDown network for sandbox \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\" successfully" Aug 13 07:17:20.800509 containerd[1565]: time="2025-08-13T07:17:20.800462329Z" level=info msg="StopPodSandbox for \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\" returns successfully" Aug 13 07:17:20.801127 systemd[1]: run-netns-cni\x2d10d996b2\x2d84e2\x2db8a9\x2d351f\x2d75a417533c5f.mount: Deactivated successfully. 
Aug 13 07:17:20.802252 containerd[1565]: time="2025-08-13T07:17:20.801217582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-b9j6l,Uid:a91f2a95-61d2-44d1-8e65-0711a3ca46ef,Namespace:calico-system,Attempt:1,}" Aug 13 07:17:20.807935 containerd[1565]: 2025-08-13 07:17:20.752 [INFO][4471] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Aug 13 07:17:20.807935 containerd[1565]: 2025-08-13 07:17:20.754 [INFO][4471] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" iface="eth0" netns="/var/run/netns/cni-421f060e-d610-311f-cb6c-af2fdc8083df" Aug 13 07:17:20.807935 containerd[1565]: 2025-08-13 07:17:20.754 [INFO][4471] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" iface="eth0" netns="/var/run/netns/cni-421f060e-d610-311f-cb6c-af2fdc8083df" Aug 13 07:17:20.807935 containerd[1565]: 2025-08-13 07:17:20.758 [INFO][4471] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" iface="eth0" netns="/var/run/netns/cni-421f060e-d610-311f-cb6c-af2fdc8083df" Aug 13 07:17:20.807935 containerd[1565]: 2025-08-13 07:17:20.758 [INFO][4471] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Aug 13 07:17:20.807935 containerd[1565]: 2025-08-13 07:17:20.758 [INFO][4471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Aug 13 07:17:20.807935 containerd[1565]: 2025-08-13 07:17:20.786 [INFO][4525] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" HandleID="k8s-pod-network.5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Workload="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:20.807935 containerd[1565]: 2025-08-13 07:17:20.786 [INFO][4525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:20.807935 containerd[1565]: 2025-08-13 07:17:20.790 [INFO][4525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:20.807935 containerd[1565]: 2025-08-13 07:17:20.798 [WARNING][4525] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" HandleID="k8s-pod-network.5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Workload="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:20.807935 containerd[1565]: 2025-08-13 07:17:20.798 [INFO][4525] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" HandleID="k8s-pod-network.5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Workload="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:20.807935 containerd[1565]: 2025-08-13 07:17:20.802 [INFO][4525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:20.807935 containerd[1565]: 2025-08-13 07:17:20.804 [INFO][4471] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Aug 13 07:17:20.808393 containerd[1565]: time="2025-08-13T07:17:20.808064027Z" level=info msg="TearDown network for sandbox \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\" successfully" Aug 13 07:17:20.808393 containerd[1565]: time="2025-08-13T07:17:20.808085455Z" level=info msg="StopPodSandbox for \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\" returns successfully" Aug 13 07:17:20.809284 containerd[1565]: time="2025-08-13T07:17:20.809233407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2k55,Uid:f09470c1-c77d-44b2-8331-61723edd172c,Namespace:calico-system,Attempt:1,}" Aug 13 07:17:20.811857 systemd[1]: run-netns-cni\x2d421f060e\x2dd610\x2d311f\x2dcb6c\x2daf2fdc8083df.mount: Deactivated successfully. 
Aug 13 07:17:20.817746 containerd[1565]: 2025-08-13 07:17:20.749 [INFO][4481] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Aug 13 07:17:20.817746 containerd[1565]: 2025-08-13 07:17:20.749 [INFO][4481] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" iface="eth0" netns="/var/run/netns/cni-a024eea1-f885-5015-c456-25cb0463e901" Aug 13 07:17:20.817746 containerd[1565]: 2025-08-13 07:17:20.750 [INFO][4481] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" iface="eth0" netns="/var/run/netns/cni-a024eea1-f885-5015-c456-25cb0463e901" Aug 13 07:17:20.817746 containerd[1565]: 2025-08-13 07:17:20.750 [INFO][4481] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" iface="eth0" netns="/var/run/netns/cni-a024eea1-f885-5015-c456-25cb0463e901" Aug 13 07:17:20.817746 containerd[1565]: 2025-08-13 07:17:20.750 [INFO][4481] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Aug 13 07:17:20.817746 containerd[1565]: 2025-08-13 07:17:20.750 [INFO][4481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Aug 13 07:17:20.817746 containerd[1565]: 2025-08-13 07:17:20.799 [INFO][4508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" HandleID="k8s-pod-network.29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Workload="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:20.817746 containerd[1565]: 2025-08-13 07:17:20.799 [INFO][4508] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:20.817746 containerd[1565]: 2025-08-13 07:17:20.802 [INFO][4508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:20.817746 containerd[1565]: 2025-08-13 07:17:20.808 [WARNING][4508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" HandleID="k8s-pod-network.29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Workload="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:20.817746 containerd[1565]: 2025-08-13 07:17:20.808 [INFO][4508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" HandleID="k8s-pod-network.29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Workload="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:20.817746 containerd[1565]: 2025-08-13 07:17:20.809 [INFO][4508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:20.817746 containerd[1565]: 2025-08-13 07:17:20.814 [INFO][4481] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Aug 13 07:17:20.818930 containerd[1565]: time="2025-08-13T07:17:20.818801803Z" level=info msg="TearDown network for sandbox \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\" successfully" Aug 13 07:17:20.818930 containerd[1565]: time="2025-08-13T07:17:20.818829503Z" level=info msg="StopPodSandbox for \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\" returns successfully" Aug 13 07:17:20.819964 containerd[1565]: time="2025-08-13T07:17:20.819595145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b999fc99-ng8mj,Uid:c3703c35-893a-4eb4-b160-0a5c2f7c54ca,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:17:20.827269 systemd[1]: run-netns-cni\x2da024eea1\x2df885\x2d5015\x2dc456\x2d25cb0463e901.mount: Deactivated successfully. Aug 13 07:17:20.832909 containerd[1565]: 2025-08-13 07:17:20.750 [INFO][4489] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Aug 13 07:17:20.832909 containerd[1565]: 2025-08-13 07:17:20.750 [INFO][4489] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" iface="eth0" netns="/var/run/netns/cni-37f1264f-50c7-6d7d-f490-dd95a54fc24b" Aug 13 07:17:20.832909 containerd[1565]: 2025-08-13 07:17:20.755 [INFO][4489] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" iface="eth0" netns="/var/run/netns/cni-37f1264f-50c7-6d7d-f490-dd95a54fc24b" Aug 13 07:17:20.832909 containerd[1565]: 2025-08-13 07:17:20.755 [INFO][4489] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" iface="eth0" netns="/var/run/netns/cni-37f1264f-50c7-6d7d-f490-dd95a54fc24b" Aug 13 07:17:20.832909 containerd[1565]: 2025-08-13 07:17:20.755 [INFO][4489] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Aug 13 07:17:20.832909 containerd[1565]: 2025-08-13 07:17:20.755 [INFO][4489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Aug 13 07:17:20.832909 containerd[1565]: 2025-08-13 07:17:20.801 [INFO][4519] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" HandleID="k8s-pod-network.a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Workload="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:20.832909 containerd[1565]: 2025-08-13 07:17:20.801 [INFO][4519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:20.832909 containerd[1565]: 2025-08-13 07:17:20.809 [INFO][4519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:20.832909 containerd[1565]: 2025-08-13 07:17:20.816 [WARNING][4519] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" HandleID="k8s-pod-network.a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Workload="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:20.832909 containerd[1565]: 2025-08-13 07:17:20.816 [INFO][4519] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" HandleID="k8s-pod-network.a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Workload="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:20.832909 containerd[1565]: 2025-08-13 07:17:20.819 [INFO][4519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:20.832909 containerd[1565]: 2025-08-13 07:17:20.828 [INFO][4489] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Aug 13 07:17:20.833376 containerd[1565]: time="2025-08-13T07:17:20.832953250Z" level=info msg="TearDown network for sandbox \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\" successfully" Aug 13 07:17:20.833376 containerd[1565]: time="2025-08-13T07:17:20.832985669Z" level=info msg="StopPodSandbox for \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\" returns successfully" Aug 13 07:17:20.835452 containerd[1565]: time="2025-08-13T07:17:20.835396719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5896fd98dd-kf2hf,Uid:e400ac0b-ae46-4ac2-83f3-c47cd5c10714,Namespace:calico-system,Attempt:1,}" Aug 13 07:17:20.868205 kubelet[2651]: I0813 07:17:20.866623 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-59464c7c6b-c7vsd" podStartSLOduration=2.225219838 podStartE2EDuration="5.866602041s" podCreationTimestamp="2025-08-13 07:17:15 +0000 UTC" firstStartedPulling="2025-08-13 07:17:16.529613523 
+0000 UTC m=+33.934475910" lastFinishedPulling="2025-08-13 07:17:20.170995726 +0000 UTC m=+37.575858113" observedRunningTime="2025-08-13 07:17:20.866260815 +0000 UTC m=+38.271123202" watchObservedRunningTime="2025-08-13 07:17:20.866602041 +0000 UTC m=+38.271464428" Aug 13 07:17:20.992281 systemd-networkd[1251]: calif2263044042: Link UP Aug 13 07:17:20.992530 systemd-networkd[1251]: calif2263044042: Gained carrier Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.891 [INFO][4537] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0 goldmane-58fd7646b9- calico-system a91f2a95-61d2-44d1-8e65-0711a3ca46ef 1004 0 2025-08-13 07:16:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-b9j6l eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif2263044042 [] [] }} ContainerID="15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" Namespace="calico-system" Pod="goldmane-58fd7646b9-b9j6l" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b9j6l-" Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.892 [INFO][4537] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" Namespace="calico-system" Pod="goldmane-58fd7646b9-b9j6l" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.946 [INFO][4593] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" HandleID="k8s-pod-network.15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" 
Workload="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.947 [INFO][4593] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" HandleID="k8s-pod-network.15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" Workload="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-b9j6l", "timestamp":"2025-08-13 07:17:20.946160328 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.947 [INFO][4593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.947 [INFO][4593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.947 [INFO][4593] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.959 [INFO][4593] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" host="localhost" Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.966 [INFO][4593] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.970 [INFO][4593] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.972 [INFO][4593] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.974 [INFO][4593] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.974 [INFO][4593] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" host="localhost" Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.975 [INFO][4593] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227 Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.979 [INFO][4593] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" host="localhost" Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.986 [INFO][4593] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" host="localhost" Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.986 [INFO][4593] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" host="localhost" Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.986 [INFO][4593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:21.012030 containerd[1565]: 2025-08-13 07:17:20.986 [INFO][4593] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" HandleID="k8s-pod-network.15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" Workload="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:21.012700 containerd[1565]: 2025-08-13 07:17:20.990 [INFO][4537] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" Namespace="calico-system" Pod="goldmane-58fd7646b9-b9j6l" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a91f2a95-61d2-44d1-8e65-0711a3ca46ef", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-b9j6l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif2263044042", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:21.012700 containerd[1565]: 2025-08-13 07:17:20.990 [INFO][4537] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" Namespace="calico-system" Pod="goldmane-58fd7646b9-b9j6l" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:21.012700 containerd[1565]: 2025-08-13 07:17:20.990 [INFO][4537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2263044042 ContainerID="15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" Namespace="calico-system" Pod="goldmane-58fd7646b9-b9j6l" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:21.012700 containerd[1565]: 2025-08-13 07:17:20.996 [INFO][4537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" Namespace="calico-system" Pod="goldmane-58fd7646b9-b9j6l" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:21.012700 containerd[1565]: 2025-08-13 07:17:20.996 [INFO][4537] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" Namespace="calico-system" Pod="goldmane-58fd7646b9-b9j6l" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a91f2a95-61d2-44d1-8e65-0711a3ca46ef", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227", Pod:"goldmane-58fd7646b9-b9j6l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif2263044042", MAC:"82:89:1a:a6:c8:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:21.012700 containerd[1565]: 2025-08-13 07:17:21.007 [INFO][4537] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227" Namespace="calico-system" Pod="goldmane-58fd7646b9-b9j6l" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:21.034501 containerd[1565]: time="2025-08-13T07:17:21.033730714Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:21.034501 containerd[1565]: time="2025-08-13T07:17:21.034430390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:21.034501 containerd[1565]: time="2025-08-13T07:17:21.034443033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:21.034710 containerd[1565]: time="2025-08-13T07:17:21.034584148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:21.061813 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:17:21.097745 systemd-networkd[1251]: cali13d0d044388: Link UP Aug 13 07:17:21.098697 systemd-networkd[1251]: cali13d0d044388: Gained carrier Aug 13 07:17:21.100957 containerd[1565]: time="2025-08-13T07:17:21.100900741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-b9j6l,Uid:a91f2a95-61d2-44d1-8e65-0711a3ca46ef,Namespace:calico-system,Attempt:1,} returns sandbox id \"15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227\"" Aug 13 07:17:21.103704 containerd[1565]: time="2025-08-13T07:17:21.103624788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:20.922 [INFO][4549] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--s2k55-eth0 csi-node-driver- calico-system f09470c1-c77d-44b2-8331-61723edd172c 1007 0 2025-08-13 07:16:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-s2k55 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali13d0d044388 [] [] }} ContainerID="c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" Namespace="calico-system" Pod="csi-node-driver-s2k55" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2k55-" Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:20.923 [INFO][4549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" Namespace="calico-system" Pod="csi-node-driver-s2k55" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:20.961 [INFO][4610] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" HandleID="k8s-pod-network.c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" Workload="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:20.962 [INFO][4610] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" HandleID="k8s-pod-network.c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" Workload="localhost-k8s-csi--node--driver--s2k55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000588570), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-s2k55", "timestamp":"2025-08-13 07:17:20.961212871 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 
07:17:21.116202 containerd[1565]: 2025-08-13 07:17:20.962 [INFO][4610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:20.986 [INFO][4610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:20.986 [INFO][4610] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:21.059 [INFO][4610] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" host="localhost" Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:21.067 [INFO][4610] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:21.071 [INFO][4610] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:21.072 [INFO][4610] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:21.075 [INFO][4610] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:21.075 [INFO][4610] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" host="localhost" Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:21.076 [INFO][4610] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4 Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:21.080 [INFO][4610] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" host="localhost" Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:21.085 [INFO][4610] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" host="localhost" Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:21.085 [INFO][4610] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" host="localhost" Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:21.085 [INFO][4610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:21.116202 containerd[1565]: 2025-08-13 07:17:21.086 [INFO][4610] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" HandleID="k8s-pod-network.c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" Workload="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:21.116827 containerd[1565]: 2025-08-13 07:17:21.095 [INFO][4549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" Namespace="calico-system" Pod="csi-node-driver-s2k55" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2k55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s2k55-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f09470c1-c77d-44b2-8331-61723edd172c", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-s2k55", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali13d0d044388", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:21.116827 containerd[1565]: 2025-08-13 07:17:21.095 [INFO][4549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" Namespace="calico-system" Pod="csi-node-driver-s2k55" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:21.116827 containerd[1565]: 2025-08-13 07:17:21.095 [INFO][4549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13d0d044388 ContainerID="c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" Namespace="calico-system" Pod="csi-node-driver-s2k55" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:21.116827 containerd[1565]: 2025-08-13 07:17:21.098 [INFO][4549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" Namespace="calico-system" Pod="csi-node-driver-s2k55" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:21.116827 containerd[1565]: 2025-08-13 07:17:21.099 [INFO][4549] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" Namespace="calico-system" Pod="csi-node-driver-s2k55" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2k55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s2k55-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f09470c1-c77d-44b2-8331-61723edd172c", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4", Pod:"csi-node-driver-s2k55", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali13d0d044388", MAC:"4a:20:a2:05:b2:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 
07:17:21.116827 containerd[1565]: 2025-08-13 07:17:21.111 [INFO][4549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4" Namespace="calico-system" Pod="csi-node-driver-s2k55" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:21.136869 containerd[1565]: time="2025-08-13T07:17:21.136705920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:21.136869 containerd[1565]: time="2025-08-13T07:17:21.136866810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:21.137099 containerd[1565]: time="2025-08-13T07:17:21.136899710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:21.137099 containerd[1565]: time="2025-08-13T07:17:21.137040605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:21.177583 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:17:21.196275 containerd[1565]: time="2025-08-13T07:17:21.196215458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2k55,Uid:f09470c1-c77d-44b2-8331-61723edd172c,Namespace:calico-system,Attempt:1,} returns sandbox id \"c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4\"" Aug 13 07:17:21.204300 systemd-networkd[1251]: cali476deee431c: Link UP Aug 13 07:17:21.205614 systemd-networkd[1251]: cali476deee431c: Gained carrier Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:20.918 [INFO][4560] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0 calico-apiserver-74b999fc99- calico-apiserver c3703c35-893a-4eb4-b160-0a5c2f7c54ca 1005 0 2025-08-13 07:16:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74b999fc99 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74b999fc99-ng8mj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali476deee431c [] [] }} ContainerID="56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-ng8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-" Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:20.918 [INFO][4560] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-ng8mj" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:20.967 [INFO][4604] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" HandleID="k8s-pod-network.56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" Workload="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:20.967 [INFO][4604] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" HandleID="k8s-pod-network.56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" Workload="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-74b999fc99-ng8mj", "timestamp":"2025-08-13 07:17:20.967671666 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:20.968 [INFO][4604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:21.085 [INFO][4604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:21.086 [INFO][4604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:21.159 [INFO][4604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" host="localhost" Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:21.168 [INFO][4604] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:21.174 [INFO][4604] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:21.177 [INFO][4604] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:21.179 [INFO][4604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:21.179 [INFO][4604] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" host="localhost" Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:21.180 [INFO][4604] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:21.184 [INFO][4604] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" host="localhost" Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:21.193 [INFO][4604] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" host="localhost" Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:21.193 [INFO][4604] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" host="localhost" Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:21.193 [INFO][4604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:21.221193 containerd[1565]: 2025-08-13 07:17:21.193 [INFO][4604] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" HandleID="k8s-pod-network.56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" Workload="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:21.221899 containerd[1565]: 2025-08-13 07:17:21.200 [INFO][4560] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-ng8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0", GenerateName:"calico-apiserver-74b999fc99-", Namespace:"calico-apiserver", SelfLink:"", UID:"c3703c35-893a-4eb4-b160-0a5c2f7c54ca", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b999fc99", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74b999fc99-ng8mj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali476deee431c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:21.221899 containerd[1565]: 2025-08-13 07:17:21.200 [INFO][4560] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-ng8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:21.221899 containerd[1565]: 2025-08-13 07:17:21.200 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali476deee431c ContainerID="56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-ng8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:21.221899 containerd[1565]: 2025-08-13 07:17:21.206 [INFO][4560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-ng8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:21.221899 containerd[1565]: 2025-08-13 07:17:21.206 [INFO][4560] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-ng8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0", GenerateName:"calico-apiserver-74b999fc99-", Namespace:"calico-apiserver", SelfLink:"", UID:"c3703c35-893a-4eb4-b160-0a5c2f7c54ca", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b999fc99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c", Pod:"calico-apiserver-74b999fc99-ng8mj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali476deee431c", MAC:"3a:a5:ee:95:14:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:21.221899 containerd[1565]: 2025-08-13 07:17:21.217 [INFO][4560] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-ng8mj" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:21.250567 containerd[1565]: time="2025-08-13T07:17:21.248019628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:21.250567 containerd[1565]: time="2025-08-13T07:17:21.248091748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:21.250567 containerd[1565]: time="2025-08-13T07:17:21.248108038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:21.250567 containerd[1565]: time="2025-08-13T07:17:21.248253311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:21.283364 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:17:21.295229 systemd-networkd[1251]: calie3df916f426: Link UP Aug 13 07:17:21.295907 systemd-networkd[1251]: calie3df916f426: Gained carrier Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:20.925 [INFO][4574] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0 calico-kube-controllers-5896fd98dd- calico-system e400ac0b-ae46-4ac2-83f3-c47cd5c10714 1006 0 2025-08-13 07:16:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5896fd98dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost 
calico-kube-controllers-5896fd98dd-kf2hf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie3df916f426 [] [] }} ContainerID="8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" Namespace="calico-system" Pod="calico-kube-controllers-5896fd98dd-kf2hf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-" Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:20.925 [INFO][4574] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" Namespace="calico-system" Pod="calico-kube-controllers-5896fd98dd-kf2hf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:20.974 [INFO][4612] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" HandleID="k8s-pod-network.8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" Workload="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:20.974 [INFO][4612] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" HandleID="k8s-pod-network.8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" Workload="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5896fd98dd-kf2hf", "timestamp":"2025-08-13 07:17:20.974728801 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:20.975 [INFO][4612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:21.193 [INFO][4612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:21.193 [INFO][4612] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:21.260 [INFO][4612] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" host="localhost" Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:21.268 [INFO][4612] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:21.273 [INFO][4612] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:21.275 [INFO][4612] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:21.277 [INFO][4612] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:21.277 [INFO][4612] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" host="localhost" Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:21.278 [INFO][4612] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350 Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:21.281 [INFO][4612] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" host="localhost" Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:21.289 [INFO][4612] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" host="localhost" Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:21.289 [INFO][4612] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" host="localhost" Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:21.289 [INFO][4612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:21.312631 containerd[1565]: 2025-08-13 07:17:21.289 [INFO][4612] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" HandleID="k8s-pod-network.8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" Workload="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:21.313184 containerd[1565]: 2025-08-13 07:17:21.292 [INFO][4574] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" Namespace="calico-system" Pod="calico-kube-controllers-5896fd98dd-kf2hf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0", GenerateName:"calico-kube-controllers-5896fd98dd-", Namespace:"calico-system", SelfLink:"", UID:"e400ac0b-ae46-4ac2-83f3-c47cd5c10714", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 59, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5896fd98dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5896fd98dd-kf2hf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3df916f426", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:21.313184 containerd[1565]: 2025-08-13 07:17:21.293 [INFO][4574] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" Namespace="calico-system" Pod="calico-kube-controllers-5896fd98dd-kf2hf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:21.313184 containerd[1565]: 2025-08-13 07:17:21.293 [INFO][4574] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3df916f426 ContainerID="8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" Namespace="calico-system" Pod="calico-kube-controllers-5896fd98dd-kf2hf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:21.313184 containerd[1565]: 2025-08-13 07:17:21.296 [INFO][4574] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" Namespace="calico-system" Pod="calico-kube-controllers-5896fd98dd-kf2hf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:21.313184 containerd[1565]: 2025-08-13 07:17:21.297 [INFO][4574] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" Namespace="calico-system" Pod="calico-kube-controllers-5896fd98dd-kf2hf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0", GenerateName:"calico-kube-controllers-5896fd98dd-", Namespace:"calico-system", SelfLink:"", UID:"e400ac0b-ae46-4ac2-83f3-c47cd5c10714", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5896fd98dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350", Pod:"calico-kube-controllers-5896fd98dd-kf2hf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3df916f426", MAC:"da:b3:83:8c:94:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:21.313184 containerd[1565]: 2025-08-13 07:17:21.306 [INFO][4574] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350" Namespace="calico-system" Pod="calico-kube-controllers-5896fd98dd-kf2hf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:21.323039 containerd[1565]: time="2025-08-13T07:17:21.323005732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b999fc99-ng8mj,Uid:c3703c35-893a-4eb4-b160-0a5c2f7c54ca,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c\"" Aug 13 07:17:21.334623 containerd[1565]: time="2025-08-13T07:17:21.334516845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:21.334623 containerd[1565]: time="2025-08-13T07:17:21.334590279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:21.334837 containerd[1565]: time="2025-08-13T07:17:21.334604945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:21.334837 containerd[1565]: time="2025-08-13T07:17:21.334700477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:21.368561 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:17:21.395509 containerd[1565]: time="2025-08-13T07:17:21.395382838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5896fd98dd-kf2hf,Uid:e400ac0b-ae46-4ac2-83f3-c47cd5c10714,Namespace:calico-system,Attempt:1,} returns sandbox id \"8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350\"" Aug 13 07:17:21.678742 containerd[1565]: time="2025-08-13T07:17:21.678594669Z" level=info msg="StopPodSandbox for \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\"" Aug 13 07:17:21.759070 systemd[1]: run-netns-cni\x2d37f1264f\x2d50c7\x2d6d7d\x2df490\x2ddd95a54fc24b.mount: Deactivated successfully. Aug 13 07:17:21.763134 containerd[1565]: 2025-08-13 07:17:21.723 [INFO][4851] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Aug 13 07:17:21.763134 containerd[1565]: 2025-08-13 07:17:21.723 [INFO][4851] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" iface="eth0" netns="/var/run/netns/cni-cde44c1f-0b05-3115-338f-2b3893437c6a" Aug 13 07:17:21.763134 containerd[1565]: 2025-08-13 07:17:21.723 [INFO][4851] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" iface="eth0" netns="/var/run/netns/cni-cde44c1f-0b05-3115-338f-2b3893437c6a" Aug 13 07:17:21.763134 containerd[1565]: 2025-08-13 07:17:21.724 [INFO][4851] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" iface="eth0" netns="/var/run/netns/cni-cde44c1f-0b05-3115-338f-2b3893437c6a" Aug 13 07:17:21.763134 containerd[1565]: 2025-08-13 07:17:21.724 [INFO][4851] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Aug 13 07:17:21.763134 containerd[1565]: 2025-08-13 07:17:21.724 [INFO][4851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Aug 13 07:17:21.763134 containerd[1565]: 2025-08-13 07:17:21.745 [INFO][4859] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" HandleID="k8s-pod-network.b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Workload="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:21.763134 containerd[1565]: 2025-08-13 07:17:21.745 [INFO][4859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:21.763134 containerd[1565]: 2025-08-13 07:17:21.745 [INFO][4859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:21.763134 containerd[1565]: 2025-08-13 07:17:21.753 [WARNING][4859] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" HandleID="k8s-pod-network.b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Workload="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:21.763134 containerd[1565]: 2025-08-13 07:17:21.753 [INFO][4859] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" HandleID="k8s-pod-network.b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Workload="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:21.763134 containerd[1565]: 2025-08-13 07:17:21.755 [INFO][4859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:21.763134 containerd[1565]: 2025-08-13 07:17:21.759 [INFO][4851] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Aug 13 07:17:21.763670 containerd[1565]: time="2025-08-13T07:17:21.763364771Z" level=info msg="TearDown network for sandbox \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\" successfully" Aug 13 07:17:21.763670 containerd[1565]: time="2025-08-13T07:17:21.763399233Z" level=info msg="StopPodSandbox for \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\" returns successfully" Aug 13 07:17:21.763814 kubelet[2651]: E0813 07:17:21.763776 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:21.764889 containerd[1565]: time="2025-08-13T07:17:21.764848185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7467l,Uid:0a0b3cbe-9aa7-400d-968e-cb12067ca892,Namespace:kube-system,Attempt:1,}" Aug 13 07:17:21.766266 systemd[1]: run-netns-cni\x2dcde44c1f\x2d0b05\x2d3115\x2d338f\x2d2b3893437c6a.mount: Deactivated successfully. 
Aug 13 07:17:21.870895 systemd-networkd[1251]: cali6f627df81f5: Link UP Aug 13 07:17:21.871317 systemd-networkd[1251]: cali6f627df81f5: Gained carrier Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.810 [INFO][4868] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--7467l-eth0 coredns-7c65d6cfc9- kube-system 0a0b3cbe-9aa7-400d-968e-cb12067ca892 1037 0 2025-08-13 07:16:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-7467l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6f627df81f5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7467l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7467l-" Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.810 [INFO][4868] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7467l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.836 [INFO][4881] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" HandleID="k8s-pod-network.4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" Workload="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.836 [INFO][4881] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" 
HandleID="k8s-pod-network.4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" Workload="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005155e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-7467l", "timestamp":"2025-08-13 07:17:21.836616489 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.836 [INFO][4881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.837 [INFO][4881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.837 [INFO][4881] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.843 [INFO][4881] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" host="localhost" Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.847 [INFO][4881] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.851 [INFO][4881] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.852 [INFO][4881] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.854 [INFO][4881] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.854 
[INFO][4881] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" host="localhost" Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.855 [INFO][4881] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0 Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.859 [INFO][4881] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" host="localhost" Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.865 [INFO][4881] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" host="localhost" Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.865 [INFO][4881] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" host="localhost" Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.865 [INFO][4881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:17:21.885188 containerd[1565]: 2025-08-13 07:17:21.865 [INFO][4881] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" HandleID="k8s-pod-network.4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" Workload="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:21.886070 containerd[1565]: 2025-08-13 07:17:21.868 [INFO][4868] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7467l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7467l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0a0b3cbe-9aa7-400d-968e-cb12067ca892", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-7467l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f627df81f5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:21.886070 containerd[1565]: 2025-08-13 07:17:21.869 [INFO][4868] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7467l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:21.886070 containerd[1565]: 2025-08-13 07:17:21.869 [INFO][4868] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f627df81f5 ContainerID="4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7467l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:21.886070 containerd[1565]: 2025-08-13 07:17:21.871 [INFO][4868] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7467l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:21.886070 containerd[1565]: 2025-08-13 07:17:21.872 [INFO][4868] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7467l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7467l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0a0b3cbe-9aa7-400d-968e-cb12067ca892", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0", Pod:"coredns-7c65d6cfc9-7467l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f627df81f5", MAC:"6e:e0:df:12:0f:53", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:21.886070 containerd[1565]: 2025-08-13 07:17:21.881 [INFO][4868] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7467l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:21.909174 containerd[1565]: time="2025-08-13T07:17:21.907516992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:21.909174 containerd[1565]: time="2025-08-13T07:17:21.908064663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:21.909174 containerd[1565]: time="2025-08-13T07:17:21.908081033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:21.909174 containerd[1565]: time="2025-08-13T07:17:21.908207041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:21.935779 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:17:21.961703 containerd[1565]: time="2025-08-13T07:17:21.961659432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7467l,Uid:0a0b3cbe-9aa7-400d-968e-cb12067ca892,Namespace:kube-system,Attempt:1,} returns sandbox id \"4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0\"" Aug 13 07:17:21.962682 kubelet[2651]: E0813 07:17:21.962514 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:21.964367 containerd[1565]: time="2025-08-13T07:17:21.964303966Z" level=info msg="CreateContainer within sandbox \"4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 
07:17:22.011598 containerd[1565]: time="2025-08-13T07:17:22.011531509Z" level=info msg="CreateContainer within sandbox \"4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0902b3d8b2708f1c8070f72b1b1e8375a4762de7d6c16b74f5fcd03115ae53b2\"" Aug 13 07:17:22.012190 containerd[1565]: time="2025-08-13T07:17:22.012145282Z" level=info msg="StartContainer for \"0902b3d8b2708f1c8070f72b1b1e8375a4762de7d6c16b74f5fcd03115ae53b2\"" Aug 13 07:17:22.098611 containerd[1565]: time="2025-08-13T07:17:22.098559755Z" level=info msg="StartContainer for \"0902b3d8b2708f1c8070f72b1b1e8375a4762de7d6c16b74f5fcd03115ae53b2\" returns successfully" Aug 13 07:17:22.126602 systemd-networkd[1251]: calif2263044042: Gained IPv6LL Aug 13 07:17:22.446587 systemd-networkd[1251]: cali13d0d044388: Gained IPv6LL Aug 13 07:17:22.702625 systemd-networkd[1251]: cali476deee431c: Gained IPv6LL Aug 13 07:17:22.830775 systemd-networkd[1251]: calie3df916f426: Gained IPv6LL Aug 13 07:17:22.866448 kubelet[2651]: E0813 07:17:22.866323 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:23.144767 kubelet[2651]: I0813 07:17:23.144697 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7467l" podStartSLOduration=35.144675673 podStartE2EDuration="35.144675673s" podCreationTimestamp="2025-08-13 07:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:17:23.144469088 +0000 UTC m=+40.549331495" watchObservedRunningTime="2025-08-13 07:17:23.144675673 +0000 UTC m=+40.549538060" Aug 13 07:17:23.470533 systemd-networkd[1251]: cali6f627df81f5: Gained IPv6LL Aug 13 07:17:23.492612 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3446363144.mount: Deactivated successfully. Aug 13 07:17:23.650879 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:42188.service - OpenSSH per-connection server daemon (10.0.0.1:42188). Aug 13 07:17:23.679707 containerd[1565]: time="2025-08-13T07:17:23.679587788Z" level=info msg="StopPodSandbox for \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\"" Aug 13 07:17:23.704825 sshd[4987]: Accepted publickey for core from 10.0.0.1 port 42188 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:23.704187 systemd-logind[1548]: New session 9 of user core. Aug 13 07:17:23.697989 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:23.713686 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:17:23.798190 containerd[1565]: 2025-08-13 07:17:23.743 [INFO][5000] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Aug 13 07:17:23.798190 containerd[1565]: 2025-08-13 07:17:23.744 [INFO][5000] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" iface="eth0" netns="/var/run/netns/cni-f51e824c-5e04-f98e-b469-c3ce058c5fb0" Aug 13 07:17:23.798190 containerd[1565]: 2025-08-13 07:17:23.744 [INFO][5000] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" iface="eth0" netns="/var/run/netns/cni-f51e824c-5e04-f98e-b469-c3ce058c5fb0" Aug 13 07:17:23.798190 containerd[1565]: 2025-08-13 07:17:23.744 [INFO][5000] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" iface="eth0" netns="/var/run/netns/cni-f51e824c-5e04-f98e-b469-c3ce058c5fb0" Aug 13 07:17:23.798190 containerd[1565]: 2025-08-13 07:17:23.744 [INFO][5000] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Aug 13 07:17:23.798190 containerd[1565]: 2025-08-13 07:17:23.744 [INFO][5000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Aug 13 07:17:23.798190 containerd[1565]: 2025-08-13 07:17:23.781 [INFO][5015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" HandleID="k8s-pod-network.5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Workload="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:23.798190 containerd[1565]: 2025-08-13 07:17:23.782 [INFO][5015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:23.798190 containerd[1565]: 2025-08-13 07:17:23.782 [INFO][5015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:23.798190 containerd[1565]: 2025-08-13 07:17:23.787 [WARNING][5015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" HandleID="k8s-pod-network.5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Workload="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:23.798190 containerd[1565]: 2025-08-13 07:17:23.787 [INFO][5015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" HandleID="k8s-pod-network.5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Workload="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:23.798190 containerd[1565]: 2025-08-13 07:17:23.789 [INFO][5015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:23.798190 containerd[1565]: 2025-08-13 07:17:23.793 [INFO][5000] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Aug 13 07:17:23.801480 containerd[1565]: time="2025-08-13T07:17:23.801428810Z" level=info msg="TearDown network for sandbox \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\" successfully" Aug 13 07:17:23.801480 containerd[1565]: time="2025-08-13T07:17:23.801467009Z" level=info msg="StopPodSandbox for \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\" returns successfully" Aug 13 07:17:23.802224 systemd[1]: run-netns-cni\x2df51e824c\x2d5e04\x2df98e\x2db469\x2dc3ce058c5fb0.mount: Deactivated successfully. 
Aug 13 07:17:23.802750 containerd[1565]: time="2025-08-13T07:17:23.802723402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b999fc99-cfksv,Uid:dbdac039-2576-4669-ab05-2a44aa4184c7,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:17:23.868388 kubelet[2651]: E0813 07:17:23.868356 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:24.125591 sshd[4987]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:24.130230 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:42188.service: Deactivated successfully. Aug 13 07:17:24.141978 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:17:24.143640 systemd-logind[1548]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:17:24.145004 systemd-logind[1548]: Removed session 9. Aug 13 07:17:24.321780 containerd[1565]: time="2025-08-13T07:17:24.321730676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:24.322679 containerd[1565]: time="2025-08-13T07:17:24.322599959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:17:24.324228 containerd[1565]: time="2025-08-13T07:17:24.324204620Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:24.327326 containerd[1565]: time="2025-08-13T07:17:24.326987176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:24.328720 containerd[1565]: time="2025-08-13T07:17:24.328696207Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.225031497s" Aug 13 07:17:24.328868 containerd[1565]: time="2025-08-13T07:17:24.328779920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:17:24.330547 containerd[1565]: time="2025-08-13T07:17:24.330525147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:17:24.331890 containerd[1565]: time="2025-08-13T07:17:24.331846632Z" level=info msg="CreateContainer within sandbox \"15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:17:24.351714 containerd[1565]: time="2025-08-13T07:17:24.351658761Z" level=info msg="CreateContainer within sandbox \"15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f8da291da61578bf63fa991ea397f923c5e8b19963e0fc996f9d9353456a2c56\"" Aug 13 07:17:24.352314 containerd[1565]: time="2025-08-13T07:17:24.352268431Z" level=info msg="StartContainer for \"f8da291da61578bf63fa991ea397f923c5e8b19963e0fc996f9d9353456a2c56\"" Aug 13 07:17:24.406146 systemd-networkd[1251]: cali022348a1732: Link UP Aug 13 07:17:24.406996 systemd-networkd[1251]: cali022348a1732: Gained carrier Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.334 [INFO][5036] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0 calico-apiserver-74b999fc99- calico-apiserver dbdac039-2576-4669-ab05-2a44aa4184c7 1063 0 
2025-08-13 07:16:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74b999fc99 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74b999fc99-cfksv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali022348a1732 [] [] }} ContainerID="69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-cfksv" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--cfksv-" Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.334 [INFO][5036] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-cfksv" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.362 [INFO][5055] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" HandleID="k8s-pod-network.69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" Workload="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.363 [INFO][5055] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" HandleID="k8s-pod-network.69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" Workload="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-74b999fc99-cfksv", "timestamp":"2025-08-13 
07:17:24.362871954 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.363 [INFO][5055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.363 [INFO][5055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.363 [INFO][5055] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.371 [INFO][5055] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" host="localhost" Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.376 [INFO][5055] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.381 [INFO][5055] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.382 [INFO][5055] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.385 [INFO][5055] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.385 [INFO][5055] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" host="localhost" Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.387 [INFO][5055] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9 Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.391 [INFO][5055] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" host="localhost" Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.399 [INFO][5055] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" host="localhost" Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.399 [INFO][5055] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" host="localhost" Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.399 [INFO][5055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:17:24.425326 containerd[1565]: 2025-08-13 07:17:24.399 [INFO][5055] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" HandleID="k8s-pod-network.69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" Workload="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:24.426130 containerd[1565]: 2025-08-13 07:17:24.402 [INFO][5036] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-cfksv" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0", GenerateName:"calico-apiserver-74b999fc99-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbdac039-2576-4669-ab05-2a44aa4184c7", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b999fc99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74b999fc99-cfksv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali022348a1732", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:24.426130 containerd[1565]: 2025-08-13 07:17:24.402 [INFO][5036] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-cfksv" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:24.426130 containerd[1565]: 2025-08-13 07:17:24.402 [INFO][5036] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali022348a1732 ContainerID="69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-cfksv" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:24.426130 containerd[1565]: 2025-08-13 07:17:24.406 [INFO][5036] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-cfksv" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:24.426130 containerd[1565]: 2025-08-13 07:17:24.408 [INFO][5036] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-cfksv" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0", 
GenerateName:"calico-apiserver-74b999fc99-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbdac039-2576-4669-ab05-2a44aa4184c7", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b999fc99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9", Pod:"calico-apiserver-74b999fc99-cfksv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali022348a1732", MAC:"5a:91:39:fd:ca:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:24.426130 containerd[1565]: 2025-08-13 07:17:24.419 [INFO][5036] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9" Namespace="calico-apiserver" Pod="calico-apiserver-74b999fc99-cfksv" WorkloadEndpoint="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:24.453010 containerd[1565]: time="2025-08-13T07:17:24.452230281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:24.453138 containerd[1565]: time="2025-08-13T07:17:24.453094163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:24.453162 containerd[1565]: time="2025-08-13T07:17:24.453143573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:24.453437 containerd[1565]: time="2025-08-13T07:17:24.453284871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:24.463449 containerd[1565]: time="2025-08-13T07:17:24.462190373Z" level=info msg="StartContainer for \"f8da291da61578bf63fa991ea397f923c5e8b19963e0fc996f9d9353456a2c56\" returns successfully" Aug 13 07:17:24.485438 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:17:24.522377 containerd[1565]: time="2025-08-13T07:17:24.522284715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b999fc99-cfksv,Uid:dbdac039-2576-4669-ab05-2a44aa4184c7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9\"" Aug 13 07:17:24.679814 containerd[1565]: time="2025-08-13T07:17:24.679678479Z" level=info msg="StopPodSandbox for \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\"" Aug 13 07:17:24.764636 containerd[1565]: 2025-08-13 07:17:24.723 [INFO][5160] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Aug 13 07:17:24.764636 containerd[1565]: 2025-08-13 07:17:24.723 [INFO][5160] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" iface="eth0" netns="/var/run/netns/cni-d9a30a5a-9a9a-2a09-73d9-105a096e1f2d" Aug 13 07:17:24.764636 containerd[1565]: 2025-08-13 07:17:24.724 [INFO][5160] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" iface="eth0" netns="/var/run/netns/cni-d9a30a5a-9a9a-2a09-73d9-105a096e1f2d" Aug 13 07:17:24.764636 containerd[1565]: 2025-08-13 07:17:24.724 [INFO][5160] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" iface="eth0" netns="/var/run/netns/cni-d9a30a5a-9a9a-2a09-73d9-105a096e1f2d" Aug 13 07:17:24.764636 containerd[1565]: 2025-08-13 07:17:24.724 [INFO][5160] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Aug 13 07:17:24.764636 containerd[1565]: 2025-08-13 07:17:24.724 [INFO][5160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Aug 13 07:17:24.764636 containerd[1565]: 2025-08-13 07:17:24.749 [INFO][5169] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" HandleID="k8s-pod-network.900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Workload="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:24.764636 containerd[1565]: 2025-08-13 07:17:24.749 [INFO][5169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:24.764636 containerd[1565]: 2025-08-13 07:17:24.749 [INFO][5169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:24.764636 containerd[1565]: 2025-08-13 07:17:24.756 [WARNING][5169] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" HandleID="k8s-pod-network.900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Workload="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:24.764636 containerd[1565]: 2025-08-13 07:17:24.756 [INFO][5169] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" HandleID="k8s-pod-network.900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Workload="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:24.764636 containerd[1565]: 2025-08-13 07:17:24.758 [INFO][5169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:24.764636 containerd[1565]: 2025-08-13 07:17:24.761 [INFO][5160] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Aug 13 07:17:24.771435 containerd[1565]: time="2025-08-13T07:17:24.764769388Z" level=info msg="TearDown network for sandbox \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\" successfully" Aug 13 07:17:24.771435 containerd[1565]: time="2025-08-13T07:17:24.764797649Z" level=info msg="StopPodSandbox for \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\" returns successfully" Aug 13 07:17:24.771435 containerd[1565]: time="2025-08-13T07:17:24.765542034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jdqsn,Uid:d55d2c19-4154-4c8b-a129-b8b3f108e610,Namespace:kube-system,Attempt:1,}" Aug 13 07:17:24.771661 kubelet[2651]: E0813 07:17:24.765149 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:24.802996 systemd[1]: run-netns-cni\x2dd9a30a5a\x2d9a9a\x2d2a09\x2d73d9\x2d105a096e1f2d.mount: Deactivated successfully. 
Aug 13 07:17:24.873799 kubelet[2651]: E0813 07:17:24.873763 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:25.087489 kubelet[2651]: I0813 07:17:25.087397 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-b9j6l" podStartSLOduration=22.860504839 podStartE2EDuration="26.087374811s" podCreationTimestamp="2025-08-13 07:16:59 +0000 UTC" firstStartedPulling="2025-08-13 07:17:21.102726644 +0000 UTC m=+38.507589031" lastFinishedPulling="2025-08-13 07:17:24.329596616 +0000 UTC m=+41.734459003" observedRunningTime="2025-08-13 07:17:25.086693899 +0000 UTC m=+42.491556286" watchObservedRunningTime="2025-08-13 07:17:25.087374811 +0000 UTC m=+42.492237198" Aug 13 07:17:25.253932 systemd-networkd[1251]: calib7082ce1b6d: Link UP Aug 13 07:17:25.255390 systemd-networkd[1251]: calib7082ce1b6d: Gained carrier Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.191 [INFO][5184] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0 coredns-7c65d6cfc9- kube-system d55d2c19-4154-4c8b-a129-b8b3f108e610 1081 0 2025-08-13 07:16:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-jdqsn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib7082ce1b6d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jdqsn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jdqsn-" Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.191 [INFO][5184] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jdqsn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.218 [INFO][5193] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" HandleID="k8s-pod-network.739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" Workload="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.218 [INFO][5193] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" HandleID="k8s-pod-network.739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" Workload="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005979b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-jdqsn", "timestamp":"2025-08-13 07:17:25.217989496 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.218 [INFO][5193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.218 [INFO][5193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.218 [INFO][5193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.224 [INFO][5193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" host="localhost" Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.228 [INFO][5193] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.232 [INFO][5193] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.233 [INFO][5193] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.235 [INFO][5193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.235 [INFO][5193] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" host="localhost" Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.237 [INFO][5193] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88 Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.242 [INFO][5193] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" host="localhost" Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.247 [INFO][5193] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" host="localhost" Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.247 [INFO][5193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" host="localhost" Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.247 [INFO][5193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:25.267718 containerd[1565]: 2025-08-13 07:17:25.247 [INFO][5193] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" HandleID="k8s-pod-network.739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" Workload="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:25.268315 containerd[1565]: 2025-08-13 07:17:25.251 [INFO][5184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jdqsn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d55d2c19-4154-4c8b-a129-b8b3f108e610", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-jdqsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7082ce1b6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:25.268315 containerd[1565]: 2025-08-13 07:17:25.251 [INFO][5184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jdqsn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:25.268315 containerd[1565]: 2025-08-13 07:17:25.251 [INFO][5184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7082ce1b6d ContainerID="739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jdqsn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:25.268315 containerd[1565]: 2025-08-13 07:17:25.254 [INFO][5184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jdqsn" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:25.268315 containerd[1565]: 2025-08-13 07:17:25.255 [INFO][5184] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jdqsn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d55d2c19-4154-4c8b-a129-b8b3f108e610", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88", Pod:"coredns-7c65d6cfc9-jdqsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7082ce1b6d", MAC:"96:de:d8:e6:11:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:25.268315 containerd[1565]: 2025-08-13 07:17:25.264 [INFO][5184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jdqsn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:25.290281 containerd[1565]: time="2025-08-13T07:17:25.290134530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:25.290281 containerd[1565]: time="2025-08-13T07:17:25.290194389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:25.290281 containerd[1565]: time="2025-08-13T07:17:25.290211881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:25.290576 containerd[1565]: time="2025-08-13T07:17:25.290302176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:25.316365 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:17:25.343056 containerd[1565]: time="2025-08-13T07:17:25.342728165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jdqsn,Uid:d55d2c19-4154-4c8b-a129-b8b3f108e610,Namespace:kube-system,Attempt:1,} returns sandbox id \"739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88\"" Aug 13 07:17:25.343921 kubelet[2651]: E0813 07:17:25.343885 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:25.346584 containerd[1565]: time="2025-08-13T07:17:25.346552390Z" level=info msg="CreateContainer within sandbox \"739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:17:25.367666 containerd[1565]: time="2025-08-13T07:17:25.367589820Z" level=info msg="CreateContainer within sandbox \"739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4ae3e074f9be7e9672d15ff4385d514da86a67e5f13f0ead0d5a77f4807759b\"" Aug 13 07:17:25.368254 containerd[1565]: time="2025-08-13T07:17:25.368223736Z" level=info msg="StartContainer for \"b4ae3e074f9be7e9672d15ff4385d514da86a67e5f13f0ead0d5a77f4807759b\"" Aug 13 07:17:25.437595 containerd[1565]: time="2025-08-13T07:17:25.437449736Z" level=info msg="StartContainer for \"b4ae3e074f9be7e9672d15ff4385d514da86a67e5f13f0ead0d5a77f4807759b\" returns successfully" Aug 13 07:17:25.655884 containerd[1565]: time="2025-08-13T07:17:25.655741164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:25.656488 
containerd[1565]: time="2025-08-13T07:17:25.656426864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:17:25.657597 containerd[1565]: time="2025-08-13T07:17:25.657559921Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:25.659907 containerd[1565]: time="2025-08-13T07:17:25.659867590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:25.660490 containerd[1565]: time="2025-08-13T07:17:25.660453328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.329781073s" Aug 13 07:17:25.660490 containerd[1565]: time="2025-08-13T07:17:25.660486689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:17:25.661483 containerd[1565]: time="2025-08-13T07:17:25.661450356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:17:25.662964 containerd[1565]: time="2025-08-13T07:17:25.662936607Z" level=info msg="CreateContainer within sandbox \"c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:17:25.678765 containerd[1565]: time="2025-08-13T07:17:25.678707722Z" level=info msg="CreateContainer within sandbox \"c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"40dbaf168099cf299af92fbcc41f9c97b9987fbe98b68a7e6b34f81a225a7ee3\"" Aug 13 07:17:25.679419 containerd[1565]: time="2025-08-13T07:17:25.679306504Z" level=info msg="StartContainer for \"40dbaf168099cf299af92fbcc41f9c97b9987fbe98b68a7e6b34f81a225a7ee3\"" Aug 13 07:17:25.755530 containerd[1565]: time="2025-08-13T07:17:25.755331725Z" level=info msg="StartContainer for \"40dbaf168099cf299af92fbcc41f9c97b9987fbe98b68a7e6b34f81a225a7ee3\" returns successfully" Aug 13 07:17:25.804294 systemd[1]: run-containerd-runc-k8s.io-739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88-runc.9M9ppz.mount: Deactivated successfully. Aug 13 07:17:25.881625 kubelet[2651]: E0813 07:17:25.881420 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:25.967447 systemd-resolved[1466]: Under memory pressure, flushing caches. Aug 13 07:17:25.985514 systemd-journald[1158]: Under memory pressure, flushing caches. Aug 13 07:17:25.967507 systemd-resolved[1466]: Flushed all caches. 
Aug 13 07:17:26.002948 kubelet[2651]: I0813 07:17:26.002881 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jdqsn" podStartSLOduration=38.002860312 podStartE2EDuration="38.002860312s" podCreationTimestamp="2025-08-13 07:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:17:26.001646635 +0000 UTC m=+43.406509022" watchObservedRunningTime="2025-08-13 07:17:26.002860312 +0000 UTC m=+43.407722699" Aug 13 07:17:26.286560 systemd-networkd[1251]: cali022348a1732: Gained IPv6LL Aug 13 07:17:26.670591 systemd-networkd[1251]: calib7082ce1b6d: Gained IPv6LL Aug 13 07:17:26.883679 kubelet[2651]: E0813 07:17:26.883639 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:27.885556 kubelet[2651]: E0813 07:17:27.885523 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:17:28.133805 containerd[1565]: time="2025-08-13T07:17:28.133746668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:28.135026 containerd[1565]: time="2025-08-13T07:17:28.134976001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:17:28.136440 containerd[1565]: time="2025-08-13T07:17:28.136315115Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:28.138732 containerd[1565]: time="2025-08-13T07:17:28.138703883Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:28.139468 containerd[1565]: time="2025-08-13T07:17:28.139408225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.477918587s" Aug 13 07:17:28.139468 containerd[1565]: time="2025-08-13T07:17:28.139439562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:17:28.140872 containerd[1565]: time="2025-08-13T07:17:28.140727022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:17:28.141953 containerd[1565]: time="2025-08-13T07:17:28.141908166Z" level=info msg="CreateContainer within sandbox \"56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:17:28.157367 containerd[1565]: time="2025-08-13T07:17:28.156008229Z" level=info msg="CreateContainer within sandbox \"56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c08941b0834350a457e6fdcc23cf5306e2aefc470ab325171b0c0709a71d850b\"" Aug 13 07:17:28.157367 containerd[1565]: time="2025-08-13T07:17:28.156763995Z" level=info msg="StartContainer for \"c08941b0834350a457e6fdcc23cf5306e2aefc470ab325171b0c0709a71d850b\"" Aug 13 07:17:28.345826 containerd[1565]: time="2025-08-13T07:17:28.345541699Z" level=info msg="StartContainer for 
\"c08941b0834350a457e6fdcc23cf5306e2aefc470ab325171b0c0709a71d850b\" returns successfully" Aug 13 07:17:29.136197 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:35574.service - OpenSSH per-connection server daemon (10.0.0.1:35574). Aug 13 07:17:29.186884 sshd[5453]: Accepted publickey for core from 10.0.0.1 port 35574 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:29.188747 sshd[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:29.192905 systemd-logind[1548]: New session 10 of user core. Aug 13 07:17:29.201738 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:17:29.485932 kubelet[2651]: I0813 07:17:29.485757 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74b999fc99-ng8mj" podStartSLOduration=25.669761899 podStartE2EDuration="32.485736455s" podCreationTimestamp="2025-08-13 07:16:57 +0000 UTC" firstStartedPulling="2025-08-13 07:17:21.32461871 +0000 UTC m=+38.729481097" lastFinishedPulling="2025-08-13 07:17:28.140593266 +0000 UTC m=+45.545455653" observedRunningTime="2025-08-13 07:17:28.901912195 +0000 UTC m=+46.306774582" watchObservedRunningTime="2025-08-13 07:17:29.485736455 +0000 UTC m=+46.890598842" Aug 13 07:17:29.830131 sshd[5453]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:29.834844 systemd-logind[1548]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:17:29.835013 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:35574.service: Deactivated successfully. Aug 13 07:17:29.841586 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:17:29.843090 systemd-logind[1548]: Removed session 10. Aug 13 07:17:29.999530 systemd-journald[1158]: Under memory pressure, flushing caches. Aug 13 07:17:29.998592 systemd-resolved[1466]: Under memory pressure, flushing caches. Aug 13 07:17:29.998625 systemd-resolved[1466]: Flushed all caches. 
Aug 13 07:17:30.284636 containerd[1565]: time="2025-08-13T07:17:30.284568561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:30.285422 containerd[1565]: time="2025-08-13T07:17:30.285358022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:17:30.286731 containerd[1565]: time="2025-08-13T07:17:30.286689870Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:30.288960 containerd[1565]: time="2025-08-13T07:17:30.288917514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:30.289582 containerd[1565]: time="2025-08-13T07:17:30.289541651Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.148780707s" Aug 13 07:17:30.289582 containerd[1565]: time="2025-08-13T07:17:30.289572878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:17:30.290819 containerd[1565]: time="2025-08-13T07:17:30.290783022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:17:30.307119 containerd[1565]: time="2025-08-13T07:17:30.307010506Z" level=info msg="CreateContainer within sandbox 
\"8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:17:30.322349 containerd[1565]: time="2025-08-13T07:17:30.322284809Z" level=info msg="CreateContainer within sandbox \"8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ebdce3fe80c091bcc8a3fef77ce6e7b651b463fc655fa6dfc46612231b5ab107\"" Aug 13 07:17:30.323103 containerd[1565]: time="2025-08-13T07:17:30.323065223Z" level=info msg="StartContainer for \"ebdce3fe80c091bcc8a3fef77ce6e7b651b463fc655fa6dfc46612231b5ab107\"" Aug 13 07:17:30.792849 containerd[1565]: time="2025-08-13T07:17:30.792461678Z" level=info msg="StartContainer for \"ebdce3fe80c091bcc8a3fef77ce6e7b651b463fc655fa6dfc46612231b5ab107\" returns successfully" Aug 13 07:17:30.856158 containerd[1565]: time="2025-08-13T07:17:30.856097795Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:30.917852 containerd[1565]: time="2025-08-13T07:17:30.917779638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:17:30.932981 containerd[1565]: time="2025-08-13T07:17:30.932946353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 642.134509ms" Aug 13 07:17:30.933061 containerd[1565]: time="2025-08-13T07:17:30.932984643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference 
\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:17:30.934133 containerd[1565]: time="2025-08-13T07:17:30.933977688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:17:30.935290 containerd[1565]: time="2025-08-13T07:17:30.935227054Z" level=info msg="CreateContainer within sandbox \"69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:17:31.198516 containerd[1565]: time="2025-08-13T07:17:31.198327805Z" level=info msg="CreateContainer within sandbox \"69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e742185f6038f773b633a9e383eda9803309c4cb5ca08f06f5aa9dbbcc686137\"" Aug 13 07:17:31.199721 containerd[1565]: time="2025-08-13T07:17:31.199671238Z" level=info msg="StartContainer for \"e742185f6038f773b633a9e383eda9803309c4cb5ca08f06f5aa9dbbcc686137\"" Aug 13 07:17:31.283749 containerd[1565]: time="2025-08-13T07:17:31.283691190Z" level=info msg="StartContainer for \"e742185f6038f773b633a9e383eda9803309c4cb5ca08f06f5aa9dbbcc686137\" returns successfully" Aug 13 07:17:31.927826 kubelet[2651]: I0813 07:17:31.927712 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:17:31.968979 kubelet[2651]: I0813 07:17:31.968895 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5896fd98dd-kf2hf" podStartSLOduration=24.074818362 podStartE2EDuration="32.968862688s" podCreationTimestamp="2025-08-13 07:16:59 +0000 UTC" firstStartedPulling="2025-08-13 07:17:21.39652306 +0000 UTC m=+38.801385447" lastFinishedPulling="2025-08-13 07:17:30.290567386 +0000 UTC m=+47.695429773" observedRunningTime="2025-08-13 07:17:31.192023415 +0000 UTC m=+48.596885802" watchObservedRunningTime="2025-08-13 07:17:31.968862688 +0000 UTC 
m=+49.373725075" Aug 13 07:17:31.971191 kubelet[2651]: I0813 07:17:31.969258 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74b999fc99-cfksv" podStartSLOduration=28.559147315 podStartE2EDuration="34.969252746s" podCreationTimestamp="2025-08-13 07:16:57 +0000 UTC" firstStartedPulling="2025-08-13 07:17:24.523675356 +0000 UTC m=+41.928537743" lastFinishedPulling="2025-08-13 07:17:30.933780787 +0000 UTC m=+48.338643174" observedRunningTime="2025-08-13 07:17:31.968990684 +0000 UTC m=+49.373853071" watchObservedRunningTime="2025-08-13 07:17:31.969252746 +0000 UTC m=+49.374115133" Aug 13 07:17:32.046625 systemd-resolved[1466]: Under memory pressure, flushing caches. Aug 13 07:17:32.046671 systemd-resolved[1466]: Flushed all caches. Aug 13 07:17:32.048381 systemd-journald[1158]: Under memory pressure, flushing caches. Aug 13 07:17:32.966784 containerd[1565]: time="2025-08-13T07:17:32.966724031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:32.967648 containerd[1565]: time="2025-08-13T07:17:32.967590638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:17:32.969450 containerd[1565]: time="2025-08-13T07:17:32.969427723Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:32.971671 containerd[1565]: time="2025-08-13T07:17:32.971628618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:32.972237 containerd[1565]: time="2025-08-13T07:17:32.972213475Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.038207173s" Aug 13 07:17:32.972288 containerd[1565]: time="2025-08-13T07:17:32.972243020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:17:32.974441 containerd[1565]: time="2025-08-13T07:17:32.974413669Z" level=info msg="CreateContainer within sandbox \"c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:17:32.989307 containerd[1565]: time="2025-08-13T07:17:32.989245463Z" level=info msg="CreateContainer within sandbox \"c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7011f454d7731acbe242666f7a253aa4ac0e15faa44ef279b1f1ab1bbafdd602\"" Aug 13 07:17:32.989910 containerd[1565]: time="2025-08-13T07:17:32.989869864Z" level=info msg="StartContainer for \"7011f454d7731acbe242666f7a253aa4ac0e15faa44ef279b1f1ab1bbafdd602\"" Aug 13 07:17:33.078378 containerd[1565]: time="2025-08-13T07:17:33.078281954Z" level=info msg="StartContainer for \"7011f454d7731acbe242666f7a253aa4ac0e15faa44ef279b1f1ab1bbafdd602\" returns successfully" Aug 13 07:17:33.756933 kubelet[2651]: I0813 07:17:33.756898 2651 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 07:17:33.756933 kubelet[2651]: I0813 07:17:33.756936 2651 csi_plugin.go:113] 
kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 07:17:34.844671 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:35590.service - OpenSSH per-connection server daemon (10.0.0.1:35590). Aug 13 07:17:34.887419 sshd[5612]: Accepted publickey for core from 10.0.0.1 port 35590 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:34.889774 sshd[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:34.894822 systemd-logind[1548]: New session 11 of user core. Aug 13 07:17:34.899720 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:17:35.561353 sshd[5612]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:35.568592 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:35592.service - OpenSSH per-connection server daemon (10.0.0.1:35592). Aug 13 07:17:35.569079 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:35590.service: Deactivated successfully. Aug 13 07:17:35.572650 systemd-logind[1548]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:17:35.573638 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:17:35.574509 systemd-logind[1548]: Removed session 11. Aug 13 07:17:35.604311 sshd[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:35.608984 systemd-logind[1548]: New session 12 of user core. Aug 13 07:17:35.622594 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 35592 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:35.622711 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 07:17:35.828440 sshd[5629]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:35.838799 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:35598.service - OpenSSH per-connection server daemon (10.0.0.1:35598). 
Aug 13 07:17:35.841901 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:35592.service: Deactivated successfully. Aug 13 07:17:35.849476 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:17:35.852720 systemd-logind[1548]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:17:35.854794 systemd-logind[1548]: Removed session 12. Aug 13 07:17:35.876775 sshd[5642]: Accepted publickey for core from 10.0.0.1 port 35598 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:35.878685 sshd[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:35.883380 systemd-logind[1548]: New session 13 of user core. Aug 13 07:17:35.889615 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:17:36.019246 sshd[5642]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:36.025958 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:35598.service: Deactivated successfully. Aug 13 07:17:36.029495 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:17:36.030030 systemd-logind[1548]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:17:36.031604 systemd-logind[1548]: Removed session 13. 
Aug 13 07:17:38.737434 kubelet[2651]: I0813 07:17:38.737388 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:17:38.878955 kubelet[2651]: I0813 07:17:38.878887 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s2k55" podStartSLOduration=28.103615023 podStartE2EDuration="39.878867276s" podCreationTimestamp="2025-08-13 07:16:59 +0000 UTC" firstStartedPulling="2025-08-13 07:17:21.197705253 +0000 UTC m=+38.602567640" lastFinishedPulling="2025-08-13 07:17:32.972957506 +0000 UTC m=+50.377819893" observedRunningTime="2025-08-13 07:17:33.945285444 +0000 UTC m=+51.350147831" watchObservedRunningTime="2025-08-13 07:17:38.878867276 +0000 UTC m=+56.283729663" Aug 13 07:17:41.029573 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:43848.service - OpenSSH per-connection server daemon (10.0.0.1:43848). Aug 13 07:17:41.060291 sshd[5710]: Accepted publickey for core from 10.0.0.1 port 43848 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:41.061980 sshd[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:41.066180 systemd-logind[1548]: New session 14 of user core. Aug 13 07:17:41.081670 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:17:41.225736 sshd[5710]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:41.230816 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:43848.service: Deactivated successfully. Aug 13 07:17:41.236496 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:17:41.237906 systemd-logind[1548]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:17:41.239113 systemd-logind[1548]: Removed session 14. 
Aug 13 07:17:42.686297 containerd[1565]: time="2025-08-13T07:17:42.686253735Z" level=info msg="StopPodSandbox for \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\"" Aug 13 07:17:42.784669 containerd[1565]: 2025-08-13 07:17:42.733 [WARNING][5760] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0", GenerateName:"calico-kube-controllers-5896fd98dd-", Namespace:"calico-system", SelfLink:"", UID:"e400ac0b-ae46-4ac2-83f3-c47cd5c10714", ResourceVersion:"1226", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5896fd98dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350", Pod:"calico-kube-controllers-5896fd98dd-kf2hf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3df916f426", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:42.784669 containerd[1565]: 2025-08-13 07:17:42.733 [INFO][5760] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Aug 13 07:17:42.784669 containerd[1565]: 2025-08-13 07:17:42.733 [INFO][5760] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" iface="eth0" netns="" Aug 13 07:17:42.784669 containerd[1565]: 2025-08-13 07:17:42.733 [INFO][5760] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Aug 13 07:17:42.784669 containerd[1565]: 2025-08-13 07:17:42.733 [INFO][5760] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Aug 13 07:17:42.784669 containerd[1565]: 2025-08-13 07:17:42.767 [INFO][5769] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" HandleID="k8s-pod-network.a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Workload="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:42.784669 containerd[1565]: 2025-08-13 07:17:42.767 [INFO][5769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:42.784669 containerd[1565]: 2025-08-13 07:17:42.768 [INFO][5769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:42.784669 containerd[1565]: 2025-08-13 07:17:42.774 [WARNING][5769] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" HandleID="k8s-pod-network.a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Workload="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:42.784669 containerd[1565]: 2025-08-13 07:17:42.774 [INFO][5769] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" HandleID="k8s-pod-network.a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Workload="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:42.784669 containerd[1565]: 2025-08-13 07:17:42.776 [INFO][5769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:42.784669 containerd[1565]: 2025-08-13 07:17:42.780 [INFO][5760] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Aug 13 07:17:42.785134 containerd[1565]: time="2025-08-13T07:17:42.784717651Z" level=info msg="TearDown network for sandbox \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\" successfully" Aug 13 07:17:42.785134 containerd[1565]: time="2025-08-13T07:17:42.784746224Z" level=info msg="StopPodSandbox for \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\" returns successfully" Aug 13 07:17:42.826018 containerd[1565]: time="2025-08-13T07:17:42.825971619Z" level=info msg="RemovePodSandbox for \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\"" Aug 13 07:17:42.828259 containerd[1565]: time="2025-08-13T07:17:42.828229522Z" level=info msg="Forcibly stopping sandbox \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\"" Aug 13 07:17:42.897887 containerd[1565]: 2025-08-13 07:17:42.862 [WARNING][5786] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0", GenerateName:"calico-kube-controllers-5896fd98dd-", Namespace:"calico-system", SelfLink:"", UID:"e400ac0b-ae46-4ac2-83f3-c47cd5c10714", ResourceVersion:"1226", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5896fd98dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a5d5af9bdd494599adffa07a5688e2f29d4cde64f5e2f61af8c7262b333f350", Pod:"calico-kube-controllers-5896fd98dd-kf2hf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3df916f426", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:42.897887 containerd[1565]: 2025-08-13 07:17:42.862 [INFO][5786] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Aug 13 07:17:42.897887 containerd[1565]: 2025-08-13 07:17:42.862 [INFO][5786] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" iface="eth0" netns="" Aug 13 07:17:42.897887 containerd[1565]: 2025-08-13 07:17:42.863 [INFO][5786] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Aug 13 07:17:42.897887 containerd[1565]: 2025-08-13 07:17:42.863 [INFO][5786] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Aug 13 07:17:42.897887 containerd[1565]: 2025-08-13 07:17:42.885 [INFO][5795] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" HandleID="k8s-pod-network.a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Workload="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:42.897887 containerd[1565]: 2025-08-13 07:17:42.885 [INFO][5795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:42.897887 containerd[1565]: 2025-08-13 07:17:42.885 [INFO][5795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:42.897887 containerd[1565]: 2025-08-13 07:17:42.891 [WARNING][5795] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" HandleID="k8s-pod-network.a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Workload="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:42.897887 containerd[1565]: 2025-08-13 07:17:42.891 [INFO][5795] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" HandleID="k8s-pod-network.a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Workload="localhost-k8s-calico--kube--controllers--5896fd98dd--kf2hf-eth0" Aug 13 07:17:42.897887 containerd[1565]: 2025-08-13 07:17:42.892 [INFO][5795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:42.897887 containerd[1565]: 2025-08-13 07:17:42.895 [INFO][5786] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52" Aug 13 07:17:42.898981 containerd[1565]: time="2025-08-13T07:17:42.897929134Z" level=info msg="TearDown network for sandbox \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\" successfully" Aug 13 07:17:42.922033 containerd[1565]: time="2025-08-13T07:17:42.921957250Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:17:42.922123 containerd[1565]: time="2025-08-13T07:17:42.922094656Z" level=info msg="RemovePodSandbox \"a4b17754b24808c8bc7f4f2bda631097f97dd67be2216aa55950809d905d4a52\" returns successfully" Aug 13 07:17:42.935047 containerd[1565]: time="2025-08-13T07:17:42.935011843Z" level=info msg="StopPodSandbox for \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\"" Aug 13 07:17:43.011252 containerd[1565]: 2025-08-13 07:17:42.975 [WARNING][5813] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a91f2a95-61d2-44d1-8e65-0711a3ca46ef", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227", Pod:"goldmane-58fd7646b9-b9j6l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif2263044042", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:43.011252 containerd[1565]: 2025-08-13 07:17:42.975 [INFO][5813] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Aug 13 07:17:43.011252 containerd[1565]: 2025-08-13 07:17:42.975 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" iface="eth0" netns="" Aug 13 07:17:43.011252 containerd[1565]: 2025-08-13 07:17:42.975 [INFO][5813] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Aug 13 07:17:43.011252 containerd[1565]: 2025-08-13 07:17:42.975 [INFO][5813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Aug 13 07:17:43.011252 containerd[1565]: 2025-08-13 07:17:42.998 [INFO][5822] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" HandleID="k8s-pod-network.09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Workload="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:43.011252 containerd[1565]: 2025-08-13 07:17:42.998 [INFO][5822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:43.011252 containerd[1565]: 2025-08-13 07:17:42.999 [INFO][5822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.011252 containerd[1565]: 2025-08-13 07:17:43.004 [WARNING][5822] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" HandleID="k8s-pod-network.09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Workload="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:43.011252 containerd[1565]: 2025-08-13 07:17:43.004 [INFO][5822] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" HandleID="k8s-pod-network.09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Workload="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:43.011252 containerd[1565]: 2025-08-13 07:17:43.006 [INFO][5822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.011252 containerd[1565]: 2025-08-13 07:17:43.008 [INFO][5813] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Aug 13 07:17:43.011750 containerd[1565]: time="2025-08-13T07:17:43.011304242Z" level=info msg="TearDown network for sandbox \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\" successfully" Aug 13 07:17:43.011750 containerd[1565]: time="2025-08-13T07:17:43.011349776Z" level=info msg="StopPodSandbox for \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\" returns successfully" Aug 13 07:17:43.011862 containerd[1565]: time="2025-08-13T07:17:43.011829970Z" level=info msg="RemovePodSandbox for \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\"" Aug 13 07:17:43.011895 containerd[1565]: time="2025-08-13T07:17:43.011865064Z" level=info msg="Forcibly stopping sandbox \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\"" Aug 13 07:17:43.080012 containerd[1565]: 2025-08-13 07:17:43.045 [WARNING][5840] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a91f2a95-61d2-44d1-8e65-0711a3ca46ef", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15002f56fdd38c045d8279af321bddf6c318eab111dfccd218fe6afce3cb0227", Pod:"goldmane-58fd7646b9-b9j6l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif2263044042", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:43.080012 containerd[1565]: 2025-08-13 07:17:43.045 [INFO][5840] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Aug 13 07:17:43.080012 containerd[1565]: 2025-08-13 07:17:43.045 [INFO][5840] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" iface="eth0" netns="" Aug 13 07:17:43.080012 containerd[1565]: 2025-08-13 07:17:43.045 [INFO][5840] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Aug 13 07:17:43.080012 containerd[1565]: 2025-08-13 07:17:43.045 [INFO][5840] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Aug 13 07:17:43.080012 containerd[1565]: 2025-08-13 07:17:43.067 [INFO][5849] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" HandleID="k8s-pod-network.09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Workload="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:43.080012 containerd[1565]: 2025-08-13 07:17:43.067 [INFO][5849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:43.080012 containerd[1565]: 2025-08-13 07:17:43.068 [INFO][5849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.080012 containerd[1565]: 2025-08-13 07:17:43.073 [WARNING][5849] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" HandleID="k8s-pod-network.09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Workload="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:43.080012 containerd[1565]: 2025-08-13 07:17:43.073 [INFO][5849] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" HandleID="k8s-pod-network.09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Workload="localhost-k8s-goldmane--58fd7646b9--b9j6l-eth0" Aug 13 07:17:43.080012 containerd[1565]: 2025-08-13 07:17:43.074 [INFO][5849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.080012 containerd[1565]: 2025-08-13 07:17:43.077 [INFO][5840] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a" Aug 13 07:17:43.080467 containerd[1565]: time="2025-08-13T07:17:43.080053601Z" level=info msg="TearDown network for sandbox \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\" successfully" Aug 13 07:17:43.085054 containerd[1565]: time="2025-08-13T07:17:43.085000726Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:17:43.085054 containerd[1565]: time="2025-08-13T07:17:43.085054005Z" level=info msg="RemovePodSandbox \"09304c2c5be754eca53871ae15ccc20227e9ba80949ee75ffafd9b0776a39a2a\" returns successfully" Aug 13 07:17:43.085564 containerd[1565]: time="2025-08-13T07:17:43.085541682Z" level=info msg="StopPodSandbox for \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\"" Aug 13 07:17:43.149421 containerd[1565]: 2025-08-13 07:17:43.118 [WARNING][5867] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0", GenerateName:"calico-apiserver-74b999fc99-", Namespace:"calico-apiserver", SelfLink:"", UID:"c3703c35-893a-4eb4-b160-0a5c2f7c54ca", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b999fc99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c", Pod:"calico-apiserver-74b999fc99-ng8mj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali476deee431c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:43.149421 containerd[1565]: 2025-08-13 07:17:43.118 [INFO][5867] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Aug 13 07:17:43.149421 containerd[1565]: 2025-08-13 07:17:43.118 [INFO][5867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" iface="eth0" netns="" Aug 13 07:17:43.149421 containerd[1565]: 2025-08-13 07:17:43.118 [INFO][5867] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Aug 13 07:17:43.149421 containerd[1565]: 2025-08-13 07:17:43.118 [INFO][5867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Aug 13 07:17:43.149421 containerd[1565]: 2025-08-13 07:17:43.137 [INFO][5876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" HandleID="k8s-pod-network.29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Workload="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:43.149421 containerd[1565]: 2025-08-13 07:17:43.137 [INFO][5876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:43.149421 containerd[1565]: 2025-08-13 07:17:43.137 [INFO][5876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.149421 containerd[1565]: 2025-08-13 07:17:43.142 [WARNING][5876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" HandleID="k8s-pod-network.29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Workload="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:43.149421 containerd[1565]: 2025-08-13 07:17:43.142 [INFO][5876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" HandleID="k8s-pod-network.29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Workload="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:43.149421 containerd[1565]: 2025-08-13 07:17:43.144 [INFO][5876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.149421 containerd[1565]: 2025-08-13 07:17:43.146 [INFO][5867] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Aug 13 07:17:43.149971 containerd[1565]: time="2025-08-13T07:17:43.149459048Z" level=info msg="TearDown network for sandbox \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\" successfully" Aug 13 07:17:43.149971 containerd[1565]: time="2025-08-13T07:17:43.149489916Z" level=info msg="StopPodSandbox for \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\" returns successfully" Aug 13 07:17:43.150040 containerd[1565]: time="2025-08-13T07:17:43.150010644Z" level=info msg="RemovePodSandbox for \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\"" Aug 13 07:17:43.150081 containerd[1565]: time="2025-08-13T07:17:43.150041281Z" level=info msg="Forcibly stopping sandbox \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\"" Aug 13 07:17:43.219065 containerd[1565]: 2025-08-13 07:17:43.185 [WARNING][5893] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0", GenerateName:"calico-apiserver-74b999fc99-", Namespace:"calico-apiserver", SelfLink:"", UID:"c3703c35-893a-4eb4-b160-0a5c2f7c54ca", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b999fc99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56882fe5de728325c520548ca60a3cf7212d40c1f43336409d38cec5708bd24c", Pod:"calico-apiserver-74b999fc99-ng8mj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali476deee431c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:43.219065 containerd[1565]: 2025-08-13 07:17:43.186 [INFO][5893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Aug 13 07:17:43.219065 containerd[1565]: 2025-08-13 07:17:43.186 [INFO][5893] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" iface="eth0" netns="" Aug 13 07:17:43.219065 containerd[1565]: 2025-08-13 07:17:43.186 [INFO][5893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Aug 13 07:17:43.219065 containerd[1565]: 2025-08-13 07:17:43.186 [INFO][5893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Aug 13 07:17:43.219065 containerd[1565]: 2025-08-13 07:17:43.206 [INFO][5902] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" HandleID="k8s-pod-network.29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Workload="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:43.219065 containerd[1565]: 2025-08-13 07:17:43.206 [INFO][5902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:43.219065 containerd[1565]: 2025-08-13 07:17:43.206 [INFO][5902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.219065 containerd[1565]: 2025-08-13 07:17:43.211 [WARNING][5902] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" HandleID="k8s-pod-network.29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Workload="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:43.219065 containerd[1565]: 2025-08-13 07:17:43.212 [INFO][5902] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" HandleID="k8s-pod-network.29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Workload="localhost-k8s-calico--apiserver--74b999fc99--ng8mj-eth0" Aug 13 07:17:43.219065 containerd[1565]: 2025-08-13 07:17:43.213 [INFO][5902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.219065 containerd[1565]: 2025-08-13 07:17:43.216 [INFO][5893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc" Aug 13 07:17:43.219645 containerd[1565]: time="2025-08-13T07:17:43.219104593Z" level=info msg="TearDown network for sandbox \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\" successfully" Aug 13 07:17:43.336054 containerd[1565]: time="2025-08-13T07:17:43.335880370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:17:43.336054 containerd[1565]: time="2025-08-13T07:17:43.335978513Z" level=info msg="RemovePodSandbox \"29af4faa8332628ddef369e66bfcd5dffaedf7f769e72c601b076cd9bcc5cadc\" returns successfully" Aug 13 07:17:43.336578 containerd[1565]: time="2025-08-13T07:17:43.336545477Z" level=info msg="StopPodSandbox for \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\"" Aug 13 07:17:43.465726 containerd[1565]: 2025-08-13 07:17:43.373 [WARNING][5920] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0", GenerateName:"calico-apiserver-74b999fc99-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbdac039-2576-4669-ab05-2a44aa4184c7", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b999fc99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9", Pod:"calico-apiserver-74b999fc99-cfksv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali022348a1732", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:43.465726 containerd[1565]: 2025-08-13 07:17:43.374 [INFO][5920] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Aug 13 07:17:43.465726 containerd[1565]: 2025-08-13 07:17:43.374 [INFO][5920] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" iface="eth0" netns="" Aug 13 07:17:43.465726 containerd[1565]: 2025-08-13 07:17:43.374 [INFO][5920] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Aug 13 07:17:43.465726 containerd[1565]: 2025-08-13 07:17:43.374 [INFO][5920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Aug 13 07:17:43.465726 containerd[1565]: 2025-08-13 07:17:43.440 [INFO][5929] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" HandleID="k8s-pod-network.5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Workload="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:43.465726 containerd[1565]: 2025-08-13 07:17:43.440 [INFO][5929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:43.465726 containerd[1565]: 2025-08-13 07:17:43.440 [INFO][5929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.465726 containerd[1565]: 2025-08-13 07:17:43.457 [WARNING][5929] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" HandleID="k8s-pod-network.5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Workload="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:43.465726 containerd[1565]: 2025-08-13 07:17:43.457 [INFO][5929] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" HandleID="k8s-pod-network.5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Workload="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:43.465726 containerd[1565]: 2025-08-13 07:17:43.459 [INFO][5929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.465726 containerd[1565]: 2025-08-13 07:17:43.462 [INFO][5920] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Aug 13 07:17:43.466193 containerd[1565]: time="2025-08-13T07:17:43.465763732Z" level=info msg="TearDown network for sandbox \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\" successfully" Aug 13 07:17:43.466193 containerd[1565]: time="2025-08-13T07:17:43.465793047Z" level=info msg="StopPodSandbox for \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\" returns successfully" Aug 13 07:17:43.466409 containerd[1565]: time="2025-08-13T07:17:43.466385578Z" level=info msg="RemovePodSandbox for \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\"" Aug 13 07:17:43.466450 containerd[1565]: time="2025-08-13T07:17:43.466414742Z" level=info msg="Forcibly stopping sandbox \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\"" Aug 13 07:17:43.541793 containerd[1565]: 2025-08-13 07:17:43.504 [WARNING][5948] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0", GenerateName:"calico-apiserver-74b999fc99-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbdac039-2576-4669-ab05-2a44aa4184c7", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b999fc99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69d4aab0b48cca4c863dcfec5a6181330fbb0a4e58ffd7fba6f53194545b8ef9", Pod:"calico-apiserver-74b999fc99-cfksv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali022348a1732", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:43.541793 containerd[1565]: 2025-08-13 07:17:43.504 [INFO][5948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Aug 13 07:17:43.541793 containerd[1565]: 2025-08-13 07:17:43.504 [INFO][5948] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" iface="eth0" netns="" Aug 13 07:17:43.541793 containerd[1565]: 2025-08-13 07:17:43.504 [INFO][5948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Aug 13 07:17:43.541793 containerd[1565]: 2025-08-13 07:17:43.504 [INFO][5948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Aug 13 07:17:43.541793 containerd[1565]: 2025-08-13 07:17:43.527 [INFO][5957] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" HandleID="k8s-pod-network.5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Workload="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:43.541793 containerd[1565]: 2025-08-13 07:17:43.527 [INFO][5957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:43.541793 containerd[1565]: 2025-08-13 07:17:43.527 [INFO][5957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.541793 containerd[1565]: 2025-08-13 07:17:43.534 [WARNING][5957] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" HandleID="k8s-pod-network.5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Workload="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:43.541793 containerd[1565]: 2025-08-13 07:17:43.534 [INFO][5957] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" HandleID="k8s-pod-network.5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Workload="localhost-k8s-calico--apiserver--74b999fc99--cfksv-eth0" Aug 13 07:17:43.541793 containerd[1565]: 2025-08-13 07:17:43.535 [INFO][5957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.541793 containerd[1565]: 2025-08-13 07:17:43.538 [INFO][5948] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3" Aug 13 07:17:43.542231 containerd[1565]: time="2025-08-13T07:17:43.541846581Z" level=info msg="TearDown network for sandbox \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\" successfully" Aug 13 07:17:43.546213 containerd[1565]: time="2025-08-13T07:17:43.546177551Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:17:43.546272 containerd[1565]: time="2025-08-13T07:17:43.546236220Z" level=info msg="RemovePodSandbox \"5c34ed4c5cb1cab58d68f3730099baa683fdc50c455d83ad09015b9df6fecda3\" returns successfully" Aug 13 07:17:43.546964 containerd[1565]: time="2025-08-13T07:17:43.546919511Z" level=info msg="StopPodSandbox for \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\"" Aug 13 07:17:43.616069 containerd[1565]: 2025-08-13 07:17:43.581 [WARNING][5975] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" WorkloadEndpoint="localhost-k8s-whisker--6ccf7ff454--9mgkm-eth0" Aug 13 07:17:43.616069 containerd[1565]: 2025-08-13 07:17:43.581 [INFO][5975] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Aug 13 07:17:43.616069 containerd[1565]: 2025-08-13 07:17:43.582 [INFO][5975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" iface="eth0" netns="" Aug 13 07:17:43.616069 containerd[1565]: 2025-08-13 07:17:43.582 [INFO][5975] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Aug 13 07:17:43.616069 containerd[1565]: 2025-08-13 07:17:43.582 [INFO][5975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Aug 13 07:17:43.616069 containerd[1565]: 2025-08-13 07:17:43.603 [INFO][5984] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" HandleID="k8s-pod-network.bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Workload="localhost-k8s-whisker--6ccf7ff454--9mgkm-eth0" Aug 13 07:17:43.616069 containerd[1565]: 2025-08-13 07:17:43.603 [INFO][5984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:43.616069 containerd[1565]: 2025-08-13 07:17:43.603 [INFO][5984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.616069 containerd[1565]: 2025-08-13 07:17:43.609 [WARNING][5984] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" HandleID="k8s-pod-network.bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Workload="localhost-k8s-whisker--6ccf7ff454--9mgkm-eth0" Aug 13 07:17:43.616069 containerd[1565]: 2025-08-13 07:17:43.609 [INFO][5984] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" HandleID="k8s-pod-network.bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Workload="localhost-k8s-whisker--6ccf7ff454--9mgkm-eth0" Aug 13 07:17:43.616069 containerd[1565]: 2025-08-13 07:17:43.610 [INFO][5984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.616069 containerd[1565]: 2025-08-13 07:17:43.613 [INFO][5975] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Aug 13 07:17:43.616069 containerd[1565]: time="2025-08-13T07:17:43.616037734Z" level=info msg="TearDown network for sandbox \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\" successfully" Aug 13 07:17:43.616069 containerd[1565]: time="2025-08-13T07:17:43.616064424Z" level=info msg="StopPodSandbox for \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\" returns successfully" Aug 13 07:17:43.617237 containerd[1565]: time="2025-08-13T07:17:43.616672755Z" level=info msg="RemovePodSandbox for \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\"" Aug 13 07:17:43.617237 containerd[1565]: time="2025-08-13T07:17:43.616701879Z" level=info msg="Forcibly stopping sandbox \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\"" Aug 13 07:17:43.683653 containerd[1565]: 2025-08-13 07:17:43.648 [WARNING][6002] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" WorkloadEndpoint="localhost-k8s-whisker--6ccf7ff454--9mgkm-eth0" Aug 13 07:17:43.683653 containerd[1565]: 2025-08-13 07:17:43.649 [INFO][6002] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Aug 13 07:17:43.683653 containerd[1565]: 2025-08-13 07:17:43.649 [INFO][6002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" iface="eth0" netns="" Aug 13 07:17:43.683653 containerd[1565]: 2025-08-13 07:17:43.649 [INFO][6002] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Aug 13 07:17:43.683653 containerd[1565]: 2025-08-13 07:17:43.649 [INFO][6002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Aug 13 07:17:43.683653 containerd[1565]: 2025-08-13 07:17:43.670 [INFO][6011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" HandleID="k8s-pod-network.bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Workload="localhost-k8s-whisker--6ccf7ff454--9mgkm-eth0" Aug 13 07:17:43.683653 containerd[1565]: 2025-08-13 07:17:43.670 [INFO][6011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:43.683653 containerd[1565]: 2025-08-13 07:17:43.670 [INFO][6011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.683653 containerd[1565]: 2025-08-13 07:17:43.676 [WARNING][6011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" HandleID="k8s-pod-network.bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Workload="localhost-k8s-whisker--6ccf7ff454--9mgkm-eth0" Aug 13 07:17:43.683653 containerd[1565]: 2025-08-13 07:17:43.676 [INFO][6011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" HandleID="k8s-pod-network.bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Workload="localhost-k8s-whisker--6ccf7ff454--9mgkm-eth0" Aug 13 07:17:43.683653 containerd[1565]: 2025-08-13 07:17:43.678 [INFO][6011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.683653 containerd[1565]: 2025-08-13 07:17:43.680 [INFO][6002] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766" Aug 13 07:17:43.684067 containerd[1565]: time="2025-08-13T07:17:43.683702768Z" level=info msg="TearDown network for sandbox \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\" successfully" Aug 13 07:17:43.687760 containerd[1565]: time="2025-08-13T07:17:43.687701821Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:17:43.688208 containerd[1565]: time="2025-08-13T07:17:43.687796286Z" level=info msg="RemovePodSandbox \"bafb4ee47031480a731dbc95e4a2ff4392507ddaa6beb5e480d8644af93b5766\" returns successfully" Aug 13 07:17:43.688386 containerd[1565]: time="2025-08-13T07:17:43.688330920Z" level=info msg="StopPodSandbox for \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\"" Aug 13 07:17:43.752995 containerd[1565]: 2025-08-13 07:17:43.721 [WARNING][6030] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d55d2c19-4154-4c8b-a129-b8b3f108e610", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88", Pod:"coredns-7c65d6cfc9-jdqsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7082ce1b6d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:43.752995 containerd[1565]: 2025-08-13 07:17:43.721 [INFO][6030] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Aug 13 07:17:43.752995 containerd[1565]: 2025-08-13 07:17:43.721 [INFO][6030] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" iface="eth0" netns="" Aug 13 07:17:43.752995 containerd[1565]: 2025-08-13 07:17:43.721 [INFO][6030] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Aug 13 07:17:43.752995 containerd[1565]: 2025-08-13 07:17:43.721 [INFO][6030] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Aug 13 07:17:43.752995 containerd[1565]: 2025-08-13 07:17:43.740 [INFO][6039] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" HandleID="k8s-pod-network.900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Workload="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:43.752995 containerd[1565]: 2025-08-13 07:17:43.740 [INFO][6039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:17:43.752995 containerd[1565]: 2025-08-13 07:17:43.740 [INFO][6039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.752995 containerd[1565]: 2025-08-13 07:17:43.745 [WARNING][6039] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" HandleID="k8s-pod-network.900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Workload="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:43.752995 containerd[1565]: 2025-08-13 07:17:43.745 [INFO][6039] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" HandleID="k8s-pod-network.900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Workload="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:43.752995 containerd[1565]: 2025-08-13 07:17:43.747 [INFO][6039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.752995 containerd[1565]: 2025-08-13 07:17:43.749 [INFO][6030] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Aug 13 07:17:43.753521 containerd[1565]: time="2025-08-13T07:17:43.753043796Z" level=info msg="TearDown network for sandbox \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\" successfully" Aug 13 07:17:43.753521 containerd[1565]: time="2025-08-13T07:17:43.753081155Z" level=info msg="StopPodSandbox for \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\" returns successfully" Aug 13 07:17:43.754206 containerd[1565]: time="2025-08-13T07:17:43.754163948Z" level=info msg="RemovePodSandbox for \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\"" Aug 13 07:17:43.754260 containerd[1565]: time="2025-08-13T07:17:43.754207249Z" level=info msg="Forcibly stopping sandbox \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\"" Aug 13 07:17:43.826220 containerd[1565]: 2025-08-13 07:17:43.788 [WARNING][6057] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d55d2c19-4154-4c8b-a129-b8b3f108e610", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"739706d1aff517b3d4c05f3c2c4809a4d8d4c2f3a3f7d01f817e8f45950d9c88", Pod:"coredns-7c65d6cfc9-jdqsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7082ce1b6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:43.826220 containerd[1565]: 2025-08-13 07:17:43.789 [INFO][6057] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Aug 13 07:17:43.826220 containerd[1565]: 2025-08-13 07:17:43.789 [INFO][6057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" iface="eth0" netns="" Aug 13 07:17:43.826220 containerd[1565]: 2025-08-13 07:17:43.789 [INFO][6057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Aug 13 07:17:43.826220 containerd[1565]: 2025-08-13 07:17:43.789 [INFO][6057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Aug 13 07:17:43.826220 containerd[1565]: 2025-08-13 07:17:43.812 [INFO][6066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" HandleID="k8s-pod-network.900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Workload="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:43.826220 containerd[1565]: 2025-08-13 07:17:43.813 [INFO][6066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:43.826220 containerd[1565]: 2025-08-13 07:17:43.813 [INFO][6066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.826220 containerd[1565]: 2025-08-13 07:17:43.818 [WARNING][6066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" HandleID="k8s-pod-network.900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Workload="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:43.826220 containerd[1565]: 2025-08-13 07:17:43.818 [INFO][6066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" HandleID="k8s-pod-network.900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Workload="localhost-k8s-coredns--7c65d6cfc9--jdqsn-eth0" Aug 13 07:17:43.826220 containerd[1565]: 2025-08-13 07:17:43.820 [INFO][6066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.826220 containerd[1565]: 2025-08-13 07:17:43.823 [INFO][6057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f" Aug 13 07:17:43.826678 containerd[1565]: time="2025-08-13T07:17:43.826268052Z" level=info msg="TearDown network for sandbox \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\" successfully" Aug 13 07:17:43.830565 containerd[1565]: time="2025-08-13T07:17:43.830533400Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:17:43.830629 containerd[1565]: time="2025-08-13T07:17:43.830587781Z" level=info msg="RemovePodSandbox \"900f65da0af57b982ebc7bb3758015ed70b2bf9f418cfd0f2970f1a749eae08f\" returns successfully" Aug 13 07:17:43.831223 containerd[1565]: time="2025-08-13T07:17:43.831164514Z" level=info msg="StopPodSandbox for \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\"" Aug 13 07:17:43.896976 containerd[1565]: 2025-08-13 07:17:43.864 [WARNING][6084] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7467l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0a0b3cbe-9aa7-400d-968e-cb12067ca892", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0", Pod:"coredns-7c65d6cfc9-7467l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f627df81f5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:43.896976 containerd[1565]: 2025-08-13 07:17:43.864 [INFO][6084] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Aug 13 07:17:43.896976 containerd[1565]: 2025-08-13 07:17:43.864 [INFO][6084] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" iface="eth0" netns="" Aug 13 07:17:43.896976 containerd[1565]: 2025-08-13 07:17:43.864 [INFO][6084] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Aug 13 07:17:43.896976 containerd[1565]: 2025-08-13 07:17:43.864 [INFO][6084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Aug 13 07:17:43.896976 containerd[1565]: 2025-08-13 07:17:43.883 [INFO][6093] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" HandleID="k8s-pod-network.b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Workload="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:43.896976 containerd[1565]: 2025-08-13 07:17:43.884 [INFO][6093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:17:43.896976 containerd[1565]: 2025-08-13 07:17:43.884 [INFO][6093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.896976 containerd[1565]: 2025-08-13 07:17:43.889 [WARNING][6093] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" HandleID="k8s-pod-network.b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Workload="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:43.896976 containerd[1565]: 2025-08-13 07:17:43.890 [INFO][6093] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" HandleID="k8s-pod-network.b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Workload="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:43.896976 containerd[1565]: 2025-08-13 07:17:43.891 [INFO][6093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.896976 containerd[1565]: 2025-08-13 07:17:43.894 [INFO][6084] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Aug 13 07:17:43.896976 containerd[1565]: time="2025-08-13T07:17:43.896948951Z" level=info msg="TearDown network for sandbox \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\" successfully" Aug 13 07:17:43.896976 containerd[1565]: time="2025-08-13T07:17:43.896976471Z" level=info msg="StopPodSandbox for \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\" returns successfully" Aug 13 07:17:43.897537 containerd[1565]: time="2025-08-13T07:17:43.897499324Z" level=info msg="RemovePodSandbox for \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\"" Aug 13 07:17:43.897567 containerd[1565]: time="2025-08-13T07:17:43.897545359Z" level=info msg="Forcibly stopping sandbox \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\"" Aug 13 07:17:43.982129 containerd[1565]: 2025-08-13 07:17:43.943 [WARNING][6111] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7467l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0a0b3cbe-9aa7-400d-968e-cb12067ca892", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4053e6b5f2df368bb68d0052a49d31c6bbeacb2773d3c9de963deaf2537723e0", Pod:"coredns-7c65d6cfc9-7467l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f627df81f5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:43.982129 containerd[1565]: 2025-08-13 07:17:43.943 [INFO][6111] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Aug 13 07:17:43.982129 containerd[1565]: 2025-08-13 07:17:43.943 [INFO][6111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" iface="eth0" netns="" Aug 13 07:17:43.982129 containerd[1565]: 2025-08-13 07:17:43.943 [INFO][6111] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Aug 13 07:17:43.982129 containerd[1565]: 2025-08-13 07:17:43.943 [INFO][6111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Aug 13 07:17:43.982129 containerd[1565]: 2025-08-13 07:17:43.966 [INFO][6120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" HandleID="k8s-pod-network.b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Workload="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:43.982129 containerd[1565]: 2025-08-13 07:17:43.967 [INFO][6120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:43.982129 containerd[1565]: 2025-08-13 07:17:43.967 [INFO][6120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:43.982129 containerd[1565]: 2025-08-13 07:17:43.974 [WARNING][6120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" HandleID="k8s-pod-network.b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Workload="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:43.982129 containerd[1565]: 2025-08-13 07:17:43.974 [INFO][6120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" HandleID="k8s-pod-network.b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Workload="localhost-k8s-coredns--7c65d6cfc9--7467l-eth0" Aug 13 07:17:43.982129 containerd[1565]: 2025-08-13 07:17:43.975 [INFO][6120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:43.982129 containerd[1565]: 2025-08-13 07:17:43.978 [INFO][6111] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c" Aug 13 07:17:43.982640 containerd[1565]: time="2025-08-13T07:17:43.982173081Z" level=info msg="TearDown network for sandbox \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\" successfully" Aug 13 07:17:43.986904 containerd[1565]: time="2025-08-13T07:17:43.986848963Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:17:43.986959 containerd[1565]: time="2025-08-13T07:17:43.986933521Z" level=info msg="RemovePodSandbox \"b13931b9d198f78440b93ae61b89b4a684c4dd894ef486bc2520ba6c77612b2c\" returns successfully" Aug 13 07:17:43.987439 containerd[1565]: time="2025-08-13T07:17:43.987411148Z" level=info msg="StopPodSandbox for \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\"" Aug 13 07:17:44.065742 containerd[1565]: 2025-08-13 07:17:44.026 [WARNING][6138] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s2k55-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f09470c1-c77d-44b2-8331-61723edd172c", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4", Pod:"csi-node-driver-s2k55", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali13d0d044388", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:44.065742 containerd[1565]: 2025-08-13 07:17:44.026 [INFO][6138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Aug 13 07:17:44.065742 containerd[1565]: 2025-08-13 07:17:44.026 [INFO][6138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" iface="eth0" netns="" Aug 13 07:17:44.065742 containerd[1565]: 2025-08-13 07:17:44.026 [INFO][6138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Aug 13 07:17:44.065742 containerd[1565]: 2025-08-13 07:17:44.026 [INFO][6138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Aug 13 07:17:44.065742 containerd[1565]: 2025-08-13 07:17:44.050 [INFO][6147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" HandleID="k8s-pod-network.5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Workload="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:44.065742 containerd[1565]: 2025-08-13 07:17:44.051 [INFO][6147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:44.065742 containerd[1565]: 2025-08-13 07:17:44.051 [INFO][6147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:44.065742 containerd[1565]: 2025-08-13 07:17:44.056 [WARNING][6147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" HandleID="k8s-pod-network.5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Workload="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:44.065742 containerd[1565]: 2025-08-13 07:17:44.056 [INFO][6147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" HandleID="k8s-pod-network.5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Workload="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:44.065742 containerd[1565]: 2025-08-13 07:17:44.058 [INFO][6147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:44.065742 containerd[1565]: 2025-08-13 07:17:44.062 [INFO][6138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Aug 13 07:17:44.066187 containerd[1565]: time="2025-08-13T07:17:44.065825666Z" level=info msg="TearDown network for sandbox \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\" successfully" Aug 13 07:17:44.066187 containerd[1565]: time="2025-08-13T07:17:44.065853408Z" level=info msg="StopPodSandbox for \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\" returns successfully" Aug 13 07:17:44.066544 containerd[1565]: time="2025-08-13T07:17:44.066481285Z" level=info msg="RemovePodSandbox for \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\"" Aug 13 07:17:44.066544 containerd[1565]: time="2025-08-13T07:17:44.066548711Z" level=info msg="Forcibly stopping sandbox \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\"" Aug 13 07:17:44.137837 containerd[1565]: 2025-08-13 07:17:44.101 [WARNING][6166] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s2k55-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f09470c1-c77d-44b2-8331-61723edd172c", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c62a409ad3bccbab65fdf61ad7a60f45668b5c3e7b850238beb74c48d2fee9e4", Pod:"csi-node-driver-s2k55", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali13d0d044388", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:44.137837 containerd[1565]: 2025-08-13 07:17:44.102 [INFO][6166] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Aug 13 07:17:44.137837 containerd[1565]: 2025-08-13 07:17:44.102 [INFO][6166] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" iface="eth0" netns="" Aug 13 07:17:44.137837 containerd[1565]: 2025-08-13 07:17:44.102 [INFO][6166] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Aug 13 07:17:44.137837 containerd[1565]: 2025-08-13 07:17:44.102 [INFO][6166] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Aug 13 07:17:44.137837 containerd[1565]: 2025-08-13 07:17:44.123 [INFO][6175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" HandleID="k8s-pod-network.5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Workload="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:44.137837 containerd[1565]: 2025-08-13 07:17:44.123 [INFO][6175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:44.137837 containerd[1565]: 2025-08-13 07:17:44.123 [INFO][6175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:44.137837 containerd[1565]: 2025-08-13 07:17:44.130 [WARNING][6175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" HandleID="k8s-pod-network.5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Workload="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:44.137837 containerd[1565]: 2025-08-13 07:17:44.130 [INFO][6175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" HandleID="k8s-pod-network.5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Workload="localhost-k8s-csi--node--driver--s2k55-eth0" Aug 13 07:17:44.137837 containerd[1565]: 2025-08-13 07:17:44.132 [INFO][6175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:44.137837 containerd[1565]: 2025-08-13 07:17:44.135 [INFO][6166] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8" Aug 13 07:17:44.138387 containerd[1565]: time="2025-08-13T07:17:44.137890950Z" level=info msg="TearDown network for sandbox \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\" successfully" Aug 13 07:17:44.142279 containerd[1565]: time="2025-08-13T07:17:44.142227074Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:17:44.142351 containerd[1565]: time="2025-08-13T07:17:44.142307193Z" level=info msg="RemovePodSandbox \"5e419336e68dfe27ad981b115943b5e744fe5c95d964785c57088c94ba2128c8\" returns successfully" Aug 13 07:17:46.233636 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:43852.service - OpenSSH per-connection server daemon (10.0.0.1:43852). 
Aug 13 07:17:46.278718 sshd[6186]: Accepted publickey for core from 10.0.0.1 port 43852 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:46.281002 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:46.289696 systemd[1]: run-containerd-runc-k8s.io-f8da291da61578bf63fa991ea397f923c5e8b19963e0fc996f9d9353456a2c56-runc.saQZgY.mount: Deactivated successfully. Aug 13 07:17:46.295561 systemd-logind[1548]: New session 15 of user core. Aug 13 07:17:46.296928 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:17:46.489394 sshd[6186]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:46.494444 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:43852.service: Deactivated successfully. Aug 13 07:17:46.497006 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:17:46.497745 systemd-logind[1548]: Session 15 logged out. Waiting for processes to exit. Aug 13 07:17:46.499178 systemd-logind[1548]: Removed session 15. Aug 13 07:17:51.508594 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:47714.service - OpenSSH per-connection server daemon (10.0.0.1:47714). Aug 13 07:17:51.537933 sshd[6225]: Accepted publickey for core from 10.0.0.1 port 47714 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:17:51.539567 sshd[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:17:51.543673 systemd-logind[1548]: New session 16 of user core. Aug 13 07:17:51.555616 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:17:51.710598 sshd[6225]: pam_unix(sshd:session): session closed for user core Aug 13 07:17:51.715625 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:47714.service: Deactivated successfully. Aug 13 07:17:51.718331 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:17:51.719160 systemd-logind[1548]: Session 16 logged out. Waiting for processes to exit. 
Aug 13 07:17:51.720077 systemd-logind[1548]: Removed session 16.
Aug 13 07:17:56.679557 kubelet[2651]: E0813 07:17:56.679470 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:17:56.725774 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:47728.service - OpenSSH per-connection server daemon (10.0.0.1:47728).
Aug 13 07:17:56.765605 sshd[6247]: Accepted publickey for core from 10.0.0.1 port 47728 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:17:56.767388 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:17:56.772165 systemd-logind[1548]: New session 17 of user core.
Aug 13 07:17:56.782612 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 07:17:56.903474 sshd[6247]: pam_unix(sshd:session): session closed for user core
Aug 13 07:17:56.909725 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:47744.service - OpenSSH per-connection server daemon (10.0.0.1:47744).
Aug 13 07:17:56.910425 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:47728.service: Deactivated successfully.
Aug 13 07:17:56.913873 systemd-logind[1548]: Session 17 logged out. Waiting for processes to exit.
Aug 13 07:17:56.914892 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 07:17:56.916095 systemd-logind[1548]: Removed session 17.
Aug 13 07:17:56.946312 sshd[6259]: Accepted publickey for core from 10.0.0.1 port 47744 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:17:56.948229 sshd[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:17:56.952639 systemd-logind[1548]: New session 18 of user core.
Aug 13 07:17:56.957629 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 07:17:57.233016 sshd[6259]: pam_unix(sshd:session): session closed for user core
Aug 13 07:17:57.241673 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:47760.service - OpenSSH per-connection server daemon (10.0.0.1:47760).
Aug 13 07:17:57.242245 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:47744.service: Deactivated successfully.
Aug 13 07:17:57.245199 systemd-logind[1548]: Session 18 logged out. Waiting for processes to exit.
Aug 13 07:17:57.246608 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 07:17:57.248015 systemd-logind[1548]: Removed session 18.
Aug 13 07:17:57.274837 sshd[6273]: Accepted publickey for core from 10.0.0.1 port 47760 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:17:57.276393 sshd[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:17:57.280828 systemd-logind[1548]: New session 19 of user core.
Aug 13 07:17:57.289641 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 07:17:57.679315 kubelet[2651]: E0813 07:17:57.679247 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:17:59.038993 sshd[6273]: pam_unix(sshd:session): session closed for user core
Aug 13 07:17:59.047607 systemd[1]: Started sshd@19-10.0.0.130:22-10.0.0.1:45998.service - OpenSSH per-connection server daemon (10.0.0.1:45998).
Aug 13 07:17:59.050239 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:47760.service: Deactivated successfully.
Aug 13 07:17:59.056254 systemd-logind[1548]: Session 19 logged out. Waiting for processes to exit.
Aug 13 07:17:59.056727 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 07:17:59.061833 systemd-logind[1548]: Removed session 19.
Aug 13 07:17:59.108623 sshd[6309]: Accepted publickey for core from 10.0.0.1 port 45998 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:17:59.110626 sshd[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:17:59.115513 systemd-logind[1548]: New session 20 of user core.
Aug 13 07:17:59.124651 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 07:17:59.603167 sshd[6309]: pam_unix(sshd:session): session closed for user core
Aug 13 07:17:59.611815 systemd[1]: Started sshd@20-10.0.0.130:22-10.0.0.1:46004.service - OpenSSH per-connection server daemon (10.0.0.1:46004).
Aug 13 07:17:59.612452 systemd[1]: sshd@19-10.0.0.130:22-10.0.0.1:45998.service: Deactivated successfully.
Aug 13 07:17:59.617145 systemd-logind[1548]: Session 20 logged out. Waiting for processes to exit.
Aug 13 07:17:59.617796 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 07:17:59.619637 systemd-logind[1548]: Removed session 20.
Aug 13 07:17:59.643830 sshd[6325]: Accepted publickey for core from 10.0.0.1 port 46004 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:17:59.645351 sshd[6325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:17:59.649451 systemd-logind[1548]: New session 21 of user core.
Aug 13 07:17:59.657600 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 07:17:59.771497 sshd[6325]: pam_unix(sshd:session): session closed for user core
Aug 13 07:17:59.775473 systemd[1]: sshd@20-10.0.0.130:22-10.0.0.1:46004.service: Deactivated successfully.
Aug 13 07:17:59.777857 systemd-logind[1548]: Session 21 logged out. Waiting for processes to exit.
Aug 13 07:17:59.777907 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 07:17:59.779092 systemd-logind[1548]: Removed session 21.
Aug 13 07:18:04.784600 systemd[1]: Started sshd@21-10.0.0.130:22-10.0.0.1:46012.service - OpenSSH per-connection server daemon (10.0.0.1:46012).
Aug 13 07:18:04.822976 sshd[6343]: Accepted publickey for core from 10.0.0.1 port 46012 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:18:04.824685 sshd[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:18:04.828961 systemd-logind[1548]: New session 22 of user core.
Aug 13 07:18:04.840669 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 07:18:05.050536 sshd[6343]: pam_unix(sshd:session): session closed for user core
Aug 13 07:18:05.054946 systemd[1]: sshd@21-10.0.0.130:22-10.0.0.1:46012.service: Deactivated successfully.
Aug 13 07:18:05.057927 systemd-logind[1548]: Session 22 logged out. Waiting for processes to exit.
Aug 13 07:18:05.058687 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 07:18:05.059886 systemd-logind[1548]: Removed session 22.
Aug 13 07:18:09.678510 kubelet[2651]: E0813 07:18:09.678437 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 07:18:10.064747 systemd[1]: Started sshd@22-10.0.0.130:22-10.0.0.1:50438.service - OpenSSH per-connection server daemon (10.0.0.1:50438).
Aug 13 07:18:10.095800 sshd[6385]: Accepted publickey for core from 10.0.0.1 port 50438 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:18:10.097585 sshd[6385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:18:10.101958 systemd-logind[1548]: New session 23 of user core.
Aug 13 07:18:10.109617 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 07:18:10.268536 sshd[6385]: pam_unix(sshd:session): session closed for user core
Aug 13 07:18:10.273266 systemd[1]: sshd@22-10.0.0.130:22-10.0.0.1:50438.service: Deactivated successfully.
Aug 13 07:18:10.275509 systemd-logind[1548]: Session 23 logged out. Waiting for processes to exit.
Aug 13 07:18:10.275585 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 07:18:10.276583 systemd-logind[1548]: Removed session 23.
Aug 13 07:18:15.279574 systemd[1]: Started sshd@23-10.0.0.130:22-10.0.0.1:50442.service - OpenSSH per-connection server daemon (10.0.0.1:50442).
Aug 13 07:18:15.318708 sshd[6441]: Accepted publickey for core from 10.0.0.1 port 50442 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:18:15.320468 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:18:15.325062 systemd-logind[1548]: New session 24 of user core.
Aug 13 07:18:15.335711 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 07:18:15.517037 sshd[6441]: pam_unix(sshd:session): session closed for user core
Aug 13 07:18:15.521970 systemd[1]: sshd@23-10.0.0.130:22-10.0.0.1:50442.service: Deactivated successfully.
Aug 13 07:18:15.524974 systemd-logind[1548]: Session 24 logged out. Waiting for processes to exit.
Aug 13 07:18:15.525006 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 07:18:15.526670 systemd-logind[1548]: Removed session 24.
Aug 13 07:18:20.531460 systemd[1]: Started sshd@24-10.0.0.130:22-10.0.0.1:58080.service - OpenSSH per-connection server daemon (10.0.0.1:58080).
Aug 13 07:18:20.573151 sshd[6479]: Accepted publickey for core from 10.0.0.1 port 58080 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8
Aug 13 07:18:20.575414 sshd[6479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:18:20.580011 systemd-logind[1548]: New session 25 of user core.
Aug 13 07:18:20.587651 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 07:18:20.994246 sshd[6479]: pam_unix(sshd:session): session closed for user core
Aug 13 07:18:21.000057 systemd[1]: sshd@24-10.0.0.130:22-10.0.0.1:58080.service: Deactivated successfully.
Aug 13 07:18:21.002814 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 07:18:21.003985 systemd-logind[1548]: Session 25 logged out. Waiting for processes to exit.
Aug 13 07:18:21.005657 systemd-logind[1548]: Removed session 25.
Aug 13 07:18:21.678822 kubelet[2651]: E0813 07:18:21.678773 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"