Aug 13 07:14:22.936028 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:14:22.936059 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:14:22.936072 kernel: BIOS-provided physical RAM map:
Aug 13 07:14:22.936078 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 07:14:22.936084 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 13 07:14:22.936090 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 13 07:14:22.936098 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 13 07:14:22.936105 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 13 07:14:22.936113 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Aug 13 07:14:22.936119 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Aug 13 07:14:22.936130 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Aug 13 07:14:22.936137 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Aug 13 07:14:22.936146 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Aug 13 07:14:22.936152 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Aug 13 07:14:22.936163 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Aug 13 07:14:22.936170 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 13 07:14:22.936179 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Aug 13 07:14:22.936186 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Aug 13 07:14:22.936193 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 13 07:14:22.936200 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 07:14:22.936206 kernel: NX (Execute Disable) protection: active
Aug 13 07:14:22.936213 kernel: APIC: Static calls initialized
Aug 13 07:14:22.936220 kernel: efi: EFI v2.7 by EDK II
Aug 13 07:14:22.936226 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Aug 13 07:14:22.936233 kernel: SMBIOS 2.8 present.
Aug 13 07:14:22.936240 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Aug 13 07:14:22.936246 kernel: Hypervisor detected: KVM
Aug 13 07:14:22.936256 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:14:22.936262 kernel: kvm-clock: using sched offset of 6196323908 cycles
Aug 13 07:14:22.936269 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:14:22.936276 kernel: tsc: Detected 2794.750 MHz processor
Aug 13 07:14:22.936283 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:14:22.936291 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:14:22.936298 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Aug 13 07:14:22.936305 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 13 07:14:22.936312 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:14:22.936321 kernel: Using GB pages for direct mapping
Aug 13 07:14:22.936328 kernel: Secure boot disabled
Aug 13 07:14:22.936335 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:14:22.936342 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Aug 13 07:14:22.936353 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 07:14:22.936360 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:14:22.936368 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:14:22.936378 kernel: ACPI: FACS 0x000000009CBDD000 000040
Aug 13 07:14:22.936385 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:14:22.936395 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:14:22.936402 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:14:22.936409 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:14:22.936416 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Aug 13 07:14:22.936424 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Aug 13 07:14:22.936433 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Aug 13 07:14:22.936441 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Aug 13 07:14:22.936448 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Aug 13 07:14:22.936455 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Aug 13 07:14:22.936462 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Aug 13 07:14:22.936469 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Aug 13 07:14:22.936476 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Aug 13 07:14:22.936483 kernel: No NUMA configuration found
Aug 13 07:14:22.936493 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Aug 13 07:14:22.936529 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Aug 13 07:14:22.936538 kernel: Zone ranges:
Aug 13 07:14:22.936545 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:14:22.936552 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Aug 13 07:14:22.936560 kernel: Normal empty
Aug 13 07:14:22.936567 kernel: Movable zone start for each node
Aug 13 07:14:22.936574 kernel: Early memory node ranges
Aug 13 07:14:22.936581 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 07:14:22.936588 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Aug 13 07:14:22.936600 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Aug 13 07:14:22.936611 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Aug 13 07:14:22.936618 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Aug 13 07:14:22.936625 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Aug 13 07:14:22.936635 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Aug 13 07:14:22.936643 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:14:22.936650 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 07:14:22.936657 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Aug 13 07:14:22.936664 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:14:22.936671 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Aug 13 07:14:22.936681 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Aug 13 07:14:22.936689 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Aug 13 07:14:22.936696 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 07:14:22.936703 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:14:22.936711 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:14:22.936718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 07:14:22.936725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:14:22.936732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:14:22.936739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:14:22.936749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:14:22.936764 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:14:22.936771 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 07:14:22.936779 kernel: TSC deadline timer available
Aug 13 07:14:22.936786 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 13 07:14:22.936794 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 07:14:22.936801 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 07:14:22.936808 kernel: kvm-guest: setup PV sched yield
Aug 13 07:14:22.936816 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Aug 13 07:14:22.936825 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:14:22.936833 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:14:22.936840 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 13 07:14:22.936847 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Aug 13 07:14:22.936855 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Aug 13 07:14:22.936862 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 13 07:14:22.936869 kernel: kvm-guest: PV spinlocks enabled
Aug 13 07:14:22.936876 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:14:22.936884 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:14:22.936897 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:14:22.936905 kernel: random: crng init done
Aug 13 07:14:22.936912 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 07:14:22.936919 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 07:14:22.936926 kernel: Fallback order for Node 0: 0
Aug 13 07:14:22.936933 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Aug 13 07:14:22.936941 kernel: Policy zone: DMA32
Aug 13 07:14:22.936948 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:14:22.936977 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 171124K reserved, 0K cma-reserved)
Aug 13 07:14:22.936988 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 07:14:22.936995 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:14:22.937002 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:14:22.937010 kernel: Dynamic Preempt: voluntary
Aug 13 07:14:22.937025 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:14:22.937039 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:14:22.937047 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 07:14:22.937054 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:14:22.937062 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:14:22.937069 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:14:22.937077 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:14:22.937087 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 07:14:22.937095 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 13 07:14:22.937104 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:14:22.937112 kernel: Console: colour dummy device 80x25
Aug 13 07:14:22.937120 kernel: printk: console [ttyS0] enabled
Aug 13 07:14:22.937130 kernel: ACPI: Core revision 20230628
Aug 13 07:14:22.937138 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 07:14:22.937145 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:14:22.937153 kernel: x2apic enabled
Aug 13 07:14:22.937160 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:14:22.937168 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 07:14:22.937176 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 07:14:22.937183 kernel: kvm-guest: setup PV IPIs
Aug 13 07:14:22.937191 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 07:14:22.937201 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 07:14:22.937208 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 13 07:14:22.937216 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 07:14:22.937223 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 07:14:22.937231 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 07:14:22.937239 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:14:22.937246 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:14:22.937254 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:14:22.937262 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 13 07:14:22.937272 kernel: RETBleed: Mitigation: untrained return thunk
Aug 13 07:14:22.937279 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:14:22.937287 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:14:22.937295 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 07:14:22.937305 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 07:14:22.937312 kernel: x86/bugs: return thunk changed
Aug 13 07:14:22.937320 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 07:14:22.937328 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:14:22.937335 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:14:22.937345 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:14:22.937353 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:14:22.937360 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 13 07:14:22.937368 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:14:22.937376 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:14:22.937383 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:14:22.937391 kernel: landlock: Up and running.
Aug 13 07:14:22.937398 kernel: SELinux: Initializing.
Aug 13 07:14:22.937406 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 07:14:22.937416 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 07:14:22.937424 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 13 07:14:22.937432 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:14:22.937439 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:14:22.937447 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 07:14:22.937455 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 07:14:22.937462 kernel: ... version: 0
Aug 13 07:14:22.937470 kernel: ... bit width: 48
Aug 13 07:14:22.937479 kernel: ... generic registers: 6
Aug 13 07:14:22.937487 kernel: ... value mask: 0000ffffffffffff
Aug 13 07:14:22.937494 kernel: ... max period: 00007fffffffffff
Aug 13 07:14:22.937502 kernel: ... fixed-purpose events: 0
Aug 13 07:14:22.937510 kernel: ... event mask: 000000000000003f
Aug 13 07:14:22.937517 kernel: signal: max sigframe size: 1776
Aug 13 07:14:22.937525 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:14:22.937532 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:14:22.937540 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:14:22.937547 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:14:22.937557 kernel: .... node #0, CPUs: #1 #2 #3
Aug 13 07:14:22.937565 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 07:14:22.937572 kernel: smpboot: Max logical packages: 1
Aug 13 07:14:22.937580 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 13 07:14:22.937587 kernel: devtmpfs: initialized
Aug 13 07:14:22.937594 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:14:22.937602 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Aug 13 07:14:22.937610 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Aug 13 07:14:22.937618 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Aug 13 07:14:22.937628 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Aug 13 07:14:22.937636 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Aug 13 07:14:22.937643 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:14:22.937651 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 07:14:22.937658 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:14:22.937666 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:14:22.937673 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:14:22.937681 kernel: audit: type=2000 audit(1755069261.939:1): state=initialized audit_enabled=0 res=1
Aug 13 07:14:22.937691 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:14:22.937698 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:14:22.937706 kernel: cpuidle: using governor menu
Aug 13 07:14:22.937713 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:14:22.937721 kernel: dca service started, version 1.12.1
Aug 13 07:14:22.937729 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 07:14:22.937736 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 07:14:22.937744 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:14:22.937751 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:14:22.937768 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 07:14:22.937776 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 07:14:22.937784 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:14:22.937792 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:14:22.937799 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:14:22.937807 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:14:22.937815 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:14:22.937822 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 07:14:22.937830 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:14:22.937840 kernel: ACPI: Interpreter enabled
Aug 13 07:14:22.937848 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 07:14:22.937855 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:14:22.937863 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:14:22.937870 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 07:14:22.937878 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 07:14:22.937886 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:14:22.938162 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:14:22.938308 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 07:14:22.938437 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 07:14:22.938447 kernel: PCI host bridge to bus 0000:00
Aug 13 07:14:22.938596 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:14:22.938714 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:14:22.938841 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:14:22.938973 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 13 07:14:22.939121 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 07:14:22.939240 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Aug 13 07:14:22.939355 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:14:22.939515 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 07:14:22.939669 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 07:14:22.939808 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Aug 13 07:14:22.940176 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Aug 13 07:14:22.940321 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Aug 13 07:14:22.940449 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Aug 13 07:14:22.940575 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 07:14:22.940729 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 07:14:22.940869 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Aug 13 07:14:22.942136 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Aug 13 07:14:22.942279 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Aug 13 07:14:22.942426 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Aug 13 07:14:22.942556 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Aug 13 07:14:22.942683 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Aug 13 07:14:22.942820 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Aug 13 07:14:22.943029 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:14:22.943161 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Aug 13 07:14:22.943292 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Aug 13 07:14:22.943417 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Aug 13 07:14:22.943541 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Aug 13 07:14:22.944927 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 07:14:22.945103 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 07:14:22.945255 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 07:14:22.945384 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Aug 13 07:14:22.945517 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Aug 13 07:14:22.946031 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 07:14:22.946175 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Aug 13 07:14:22.946186 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:14:22.946195 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:14:22.946205 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:14:22.946216 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:14:22.946225 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 07:14:22.946238 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 07:14:22.946246 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 07:14:22.946254 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 07:14:22.946262 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 07:14:22.946269 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 07:14:22.946277 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 07:14:22.946285 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 07:14:22.946292 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 07:14:22.946300 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 07:14:22.946311 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 07:14:22.946319 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 07:14:22.946326 kernel: iommu: Default domain type: Translated
Aug 13 07:14:22.946334 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:14:22.946342 kernel: efivars: Registered efivars operations
Aug 13 07:14:22.946350 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:14:22.946358 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:14:22.946366 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Aug 13 07:14:22.946374 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Aug 13 07:14:22.946384 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Aug 13 07:14:22.946392 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Aug 13 07:14:22.946523 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 07:14:22.946656 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 07:14:22.946793 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 07:14:22.946804 kernel: vgaarb: loaded
Aug 13 07:14:22.946813 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 07:14:22.946821 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 07:14:22.946833 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:14:22.946840 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:14:22.946848 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:14:22.946856 kernel: pnp: PnP ACPI init
Aug 13 07:14:22.947047 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 07:14:22.947060 kernel: pnp: PnP ACPI: found 6 devices
Aug 13 07:14:22.947069 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:14:22.947077 kernel: NET: Registered PF_INET protocol family
Aug 13 07:14:22.947085 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 07:14:22.947097 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 07:14:22.947105 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:14:22.947113 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 07:14:22.947121 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 07:14:22.947129 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 07:14:22.947137 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 07:14:22.947145 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 07:14:22.947152 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:14:22.947164 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:14:22.947304 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Aug 13 07:14:22.947432 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Aug 13 07:14:22.947560 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:14:22.947679 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:14:22.947810 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:14:22.947931 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 13 07:14:22.948063 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 07:14:22.948190 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Aug 13 07:14:22.948201 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:14:22.948210 kernel: Initialise system trusted keyrings
Aug 13 07:14:22.948218 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 07:14:22.948226 kernel: Key type asymmetric registered
Aug 13 07:14:22.948234 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:14:22.948242 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:14:22.948250 kernel: io scheduler mq-deadline registered
Aug 13 07:14:22.948258 kernel: io scheduler kyber registered
Aug 13 07:14:22.948269 kernel: io scheduler bfq registered
Aug 13 07:14:22.948278 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:14:22.948286 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 07:14:22.948294 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 07:14:22.948302 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 07:14:22.948310 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:14:22.948318 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:14:22.948326 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:14:22.948334 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:14:22.948344 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:14:22.948352 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:14:22.948503 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 13 07:14:22.948634 kernel: rtc_cmos 00:04: registered as rtc0
Aug 13 07:14:22.948975 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T07:14:22 UTC (1755069262)
Aug 13 07:14:22.949105 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 07:14:22.949116 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 07:14:22.949124 kernel: efifb: probing for efifb
Aug 13 07:14:22.949139 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Aug 13 07:14:22.949148 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Aug 13 07:14:22.949157 kernel: efifb: scrolling: redraw
Aug 13 07:14:22.949165 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Aug 13 07:14:22.949173 kernel: Console: switching to colour frame buffer device 100x37
Aug 13 07:14:22.949181 kernel: fb0: EFI VGA frame buffer device
Aug 13 07:14:22.949209 kernel: pstore: Using crash dump compression: deflate
Aug 13 07:14:22.949222 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 07:14:22.949233 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:14:22.949243 kernel: Segment Routing with IPv6
Aug 13 07:14:22.949251 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:14:22.949259 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:14:22.949267 kernel: Key type dns_resolver registered
Aug 13 07:14:22.949275 kernel: IPI shorthand broadcast: enabled
Aug 13 07:14:22.949283 kernel: sched_clock: Marking stable (1031001935, 110080871)->(1160128617, -19045811)
Aug 13 07:14:22.949291 kernel: registered taskstats version 1
Aug 13 07:14:22.949299 kernel: Loading compiled-in X.509 certificates
Aug 13 07:14:22.949307 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:14:22.949317 kernel: Key type .fscrypt registered
Aug 13 07:14:22.949325 kernel: Key type fscrypt-provisioning registered
Aug 13 07:14:22.949333 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 07:14:22.949341 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:14:22.949349 kernel: ima: No architecture policies found
Aug 13 07:14:22.949357 kernel: clk: Disabling unused clocks
Aug 13 07:14:22.949365 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:14:22.949373 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:14:22.949381 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:14:22.949392 kernel: Run /init as init process
Aug 13 07:14:22.949400 kernel: with arguments:
Aug 13 07:14:22.949408 kernel: /init
Aug 13 07:14:22.949415 kernel: with environment:
Aug 13 07:14:22.949423 kernel: HOME=/
Aug 13 07:14:22.949431 kernel: TERM=linux
Aug 13 07:14:22.949439 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:14:22.949449 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:14:22.949463 systemd[1]: Detected virtualization kvm.
Aug 13 07:14:22.949471 systemd[1]: Detected architecture x86-64.
Aug 13 07:14:22.949480 systemd[1]: Running in initrd.
Aug 13 07:14:22.949488 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:14:22.949497 systemd[1]: Hostname set to <localhost>.
Aug 13 07:14:22.949511 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:14:22.949519 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:14:22.949528 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:14:22.949539 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:14:22.949551 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:14:22.949559 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:14:22.949568 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:14:22.949580 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:14:22.949590 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:14:22.949599 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:14:22.949608 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:14:22.949616 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:14:22.949625 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:14:22.949633 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:14:22.949644 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:14:22.949653 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:14:22.949661 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:14:22.949670 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:14:22.949678 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:14:22.949687 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:14:22.949695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:14:22.949704 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:14:22.949713 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:14:22.949724 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:14:22.949732 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:14:22.949741 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:14:22.949749 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:14:22.949767 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:14:22.949777 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:14:22.949789 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:14:22.949798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:14:22.949810 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:14:22.949818 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:14:22.949827 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:14:22.949836 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:14:22.949872 systemd-journald[193]: Collecting audit messages is disabled.
Aug 13 07:14:22.949896 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:14:22.949905 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:14:22.949914 systemd-journald[193]: Journal started
Aug 13 07:14:22.949935 systemd-journald[193]: Runtime Journal (/run/log/journal/76e78fa8f0a04bea91028e6648146400) is 6.0M, max 48.3M, 42.2M free.
Aug 13 07:14:22.934340 systemd-modules-load[194]: Inserted module 'overlay'
Aug 13 07:14:22.961915 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:14:22.961965 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:14:22.961978 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:14:22.964103 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:14:22.972253 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:14:22.974123 systemd-modules-load[194]: Inserted module 'br_netfilter'
Aug 13 07:14:22.974981 kernel: Bridge firewalling registered
Aug 13 07:14:22.976230 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:14:22.976557 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:14:22.980658 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:14:22.982543 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:14:22.988727 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:14:22.991336 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:14:22.992802 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:14:22.996084 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:14:23.013201 dracut-cmdline[225]: dracut-dracut-053
Aug 13 07:14:23.016414 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:14:23.032483 systemd-resolved[228]: Positive Trust Anchors:
Aug 13 07:14:23.032505 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:14:23.032535 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:14:23.035302 systemd-resolved[228]: Defaulting to hostname 'linux'.
Aug 13 07:14:23.036640 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:14:23.041278 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:14:23.116015 kernel: SCSI subsystem initialized
Aug 13 07:14:23.124995 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:14:23.135996 kernel: iscsi: registered transport (tcp)
Aug 13 07:14:23.160015 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:14:23.160157 kernel: QLogic iSCSI HBA Driver
Aug 13 07:14:23.210760 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:14:23.217165 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:14:23.242992 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:14:23.243074 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:14:23.244532 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:14:23.311008 kernel: raid6: avx2x4 gen() 30468 MB/s
Aug 13 07:14:23.327986 kernel: raid6: avx2x2 gen() 30895 MB/s
Aug 13 07:14:23.345060 kernel: raid6: avx2x1 gen() 25715 MB/s
Aug 13 07:14:23.345099 kernel: raid6: using algorithm avx2x2 gen() 30895 MB/s
Aug 13 07:14:23.363143 kernel: raid6: .... xor() 19864 MB/s, rmw enabled
Aug 13 07:14:23.363171 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 07:14:23.383985 kernel: xor: automatically using best checksumming function avx
Aug 13 07:14:23.539998 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:14:23.554542 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:14:23.566179 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:14:23.581288 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Aug 13 07:14:23.587203 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:14:23.595110 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:14:23.612833 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Aug 13 07:14:23.650785 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:14:23.820118 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:14:23.894435 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:14:23.967844 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 13 07:14:23.968175 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:14:23.969121 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:14:23.969202 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:14:23.973736 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:14:23.978039 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 07:14:23.982991 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:14:23.983049 kernel: GPT:9289727 != 19775487
Aug 13 07:14:23.983071 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:14:23.983095 kernel: GPT:9289727 != 19775487
Aug 13 07:14:23.983137 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:14:23.983161 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:14:23.986979 kernel: libata version 3.00 loaded.
Aug 13 07:14:23.992279 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:14:23.999841 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 07:14:24.000102 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 07:14:24.000120 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 07:14:24.000269 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 07:14:24.000419 kernel: scsi host0: ahci
Aug 13 07:14:24.000594 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:14:24.000611 kernel: scsi host1: ahci
Aug 13 07:14:23.992399 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:14:24.008311 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:14:24.008336 kernel: scsi host2: ahci
Aug 13 07:14:24.008586 kernel: scsi host3: ahci
Aug 13 07:14:24.008758 kernel: scsi host4: ahci
Aug 13 07:14:24.008948 kernel: scsi host5: ahci
Aug 13 07:14:24.009192 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Aug 13 07:14:24.009224 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Aug 13 07:14:24.009240 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Aug 13 07:14:24.009254 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Aug 13 07:14:23.992469 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:14:24.016754 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Aug 13 07:14:24.016774 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Aug 13 07:14:24.014829 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:14:24.024220 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467)
Aug 13 07:14:24.024253 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (462)
Aug 13 07:14:24.026451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:14:24.029615 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:14:24.048132 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 07:14:24.064949 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 07:14:24.072526 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:14:24.079369 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 07:14:24.082582 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 07:14:24.085099 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:14:24.087506 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:14:24.089764 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:14:24.103146 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:14:24.106395 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:14:24.108587 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:14:24.108650 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:14:24.112068 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:14:24.114228 disk-uuid[550]: Primary Header is updated.
Aug 13 07:14:24.114228 disk-uuid[550]: Secondary Entries is updated.
Aug 13 07:14:24.114228 disk-uuid[550]: Secondary Header is updated.
Aug 13 07:14:24.117435 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:14:24.119715 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:14:24.125123 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:14:24.127784 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:14:24.140092 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:14:24.145130 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:14:24.171706 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:14:24.319002 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 07:14:24.319090 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 07:14:24.319119 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 07:14:24.320002 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 07:14:24.321032 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 07:14:24.322005 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 13 07:14:24.323160 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 13 07:14:24.323185 kernel: ata3.00: applying bridge limits
Aug 13 07:14:24.323983 kernel: ata3.00: configured for UDMA/100
Aug 13 07:14:24.324999 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 13 07:14:24.372006 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 13 07:14:24.372364 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 07:14:24.391309 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 13 07:14:25.125981 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:14:25.126115 disk-uuid[551]: The operation has completed successfully.
Aug 13 07:14:25.155951 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:14:25.156139 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:14:25.185117 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:14:25.188485 sh[597]: Success
Aug 13 07:14:25.201983 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 13 07:14:25.237554 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:14:25.256228 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:14:25.259117 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:14:25.273364 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:14:25.273410 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:14:25.273421 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:14:25.274345 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:14:25.275049 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:14:25.281204 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:14:25.282059 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:14:25.297218 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:14:25.299937 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:14:25.310014 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:14:25.310044 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:14:25.310055 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:14:25.312999 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:14:25.323890 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:14:25.325615 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:14:25.335240 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:14:25.342254 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:14:25.412529 ignition[684]: Ignition 2.19.0
Aug 13 07:14:25.412540 ignition[684]: Stage: fetch-offline
Aug 13 07:14:25.412589 ignition[684]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:14:25.412600 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:14:25.412722 ignition[684]: parsed url from cmdline: ""
Aug 13 07:14:25.412726 ignition[684]: no config URL provided
Aug 13 07:14:25.412732 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:14:25.412743 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:14:25.412770 ignition[684]: op(1): [started] loading QEMU firmware config module
Aug 13 07:14:25.412776 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 07:14:25.421636 ignition[684]: op(1): [finished] loading QEMU firmware config module
Aug 13 07:14:25.440431 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:14:25.455315 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:14:25.463495 ignition[684]: parsing config with SHA512: 4157f1662b83c51ecfe7a4eeabdae9e6e661c769693873afc16f48ca3f32d0d7d6bc805e828aabda709dbfb7ed9a027b860af59013adac981240832cc3f79374
Aug 13 07:14:25.470922 unknown[684]: fetched base config from "system"
Aug 13 07:14:25.470942 unknown[684]: fetched user config from "qemu"
Aug 13 07:14:25.471439 ignition[684]: fetch-offline: fetch-offline passed
Aug 13 07:14:25.471531 ignition[684]: Ignition finished successfully
Aug 13 07:14:25.474937 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:14:25.476939 systemd-networkd[786]: lo: Link UP
Aug 13 07:14:25.476943 systemd-networkd[786]: lo: Gained carrier
Aug 13 07:14:25.478662 systemd-networkd[786]: Enumeration completed
Aug 13 07:14:25.479131 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:14:25.479136 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:14:25.480213 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:14:25.480248 systemd-networkd[786]: eth0: Link UP
Aug 13 07:14:25.480252 systemd-networkd[786]: eth0: Gained carrier
Aug 13 07:14:25.480259 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:14:25.488235 systemd[1]: Reached target network.target - Network.
Aug 13 07:14:25.492475 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 07:14:25.499013 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 07:14:25.502107 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:14:25.515578 ignition[789]: Ignition 2.19.0
Aug 13 07:14:25.515589 ignition[789]: Stage: kargs
Aug 13 07:14:25.515782 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:14:25.515794 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:14:25.519442 ignition[789]: kargs: kargs passed
Aug 13 07:14:25.519492 ignition[789]: Ignition finished successfully
Aug 13 07:14:25.524146 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:14:25.536113 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:14:25.550990 ignition[797]: Ignition 2.19.0
Aug 13 07:14:25.551001 ignition[797]: Stage: disks
Aug 13 07:14:25.551206 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:14:25.551221 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 07:14:25.552213 ignition[797]: disks: disks passed
Aug 13 07:14:25.552274 ignition[797]: Ignition finished successfully
Aug 13 07:14:25.557589 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:14:25.558853 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:14:25.560691 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:14:25.561978 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:14:25.564003 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:14:25.566180 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:14:25.577113 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:14:25.588621 systemd-resolved[228]: Detected conflict on linux IN A 10.0.0.120
Aug 13 07:14:25.588639 systemd-resolved[228]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Aug 13 07:14:25.591248 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 07:14:25.597704 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:14:25.608070 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:14:25.710991 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:14:25.711942 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:14:25.714075 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:14:25.725042 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:14:25.727457 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:14:25.730003 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 07:14:25.730048 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:14:25.738589 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Aug 13 07:14:25.738612 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:14:25.738623 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:14:25.738634 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:14:25.730070 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:14:25.740701 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:14:25.741943 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:14:25.743810 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:14:25.748046 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 07:14:25.788438 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:14:25.794420 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:14:25.799866 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:14:25.805026 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:14:25.899284 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:14:25.908172 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:14:25.911462 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:14:25.916994 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:14:25.940305 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 07:14:25.946418 ignition[928]: INFO : Ignition 2.19.0 Aug 13 07:14:25.946418 ignition[928]: INFO : Stage: mount Aug 13 07:14:25.948189 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:14:25.948189 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:14:25.948189 ignition[928]: INFO : mount: mount passed Aug 13 07:14:25.948189 ignition[928]: INFO : Ignition finished successfully Aug 13 07:14:25.950115 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:14:25.958133 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:14:26.273121 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:14:26.282243 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:14:26.288980 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941) Aug 13 07:14:26.291355 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:14:26.291375 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:14:26.291386 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:14:26.293990 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:14:26.295396 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 07:14:26.321637 ignition[958]: INFO : Ignition 2.19.0 Aug 13 07:14:26.321637 ignition[958]: INFO : Stage: files Aug 13 07:14:26.323317 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:14:26.323317 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:14:26.323317 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Aug 13 07:14:26.327006 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 07:14:26.327006 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 07:14:26.329954 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 07:14:26.331412 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 07:14:26.331412 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 07:14:26.330600 unknown[958]: wrote ssh authorized keys file for user: core Aug 13 07:14:26.335682 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 07:14:26.335682 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 13 07:14:26.362196 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 07:14:26.579175 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 07:14:26.579175 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:14:26.582805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 07:14:26.582805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:14:26.586114 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:14:26.587774 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:14:26.589458 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:14:26.591090 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:14:26.593019 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:14:26.595164 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:14:26.596979 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:14:26.598678 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:14:26.601110 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:14:26.603445 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:14:26.605456 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 07:14:26.908426 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 07:14:27.228821 systemd-networkd[786]: eth0: Gained IPv6LL Aug 13 07:14:27.855791 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:14:27.855791 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 07:14:27.859768 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:14:27.859768 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:14:27.859768 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 07:14:27.859768 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Aug 13 07:14:27.859768 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 07:14:27.859768 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 07:14:27.859768 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Aug 13 07:14:27.859768 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Aug 13 07:14:27.887471 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 07:14:27.894532 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 07:14:27.896194 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Aug 13 07:14:27.896194 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Aug 13 07:14:27.898867 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 07:14:27.900261 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:14:27.901987 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:14:27.903622 ignition[958]: INFO : files: files passed Aug 13 07:14:27.904344 ignition[958]: INFO : Ignition finished successfully Aug 13 07:14:27.905877 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:14:27.912137 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:14:27.914037 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Aug 13 07:14:27.921009 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 07:14:27.921149 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 07:14:27.925781 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Aug 13 07:14:27.928810 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:14:27.928810 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:14:27.933171 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:14:27.931701 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:14:27.933949 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:14:27.949121 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:14:27.975661 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 07:14:27.975802 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 07:14:27.978641 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 07:14:27.980271 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 07:14:27.982269 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 07:14:27.983341 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 07:14:28.028453 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:14:28.042148 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 07:14:28.108806 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:14:28.111174 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:14:28.112468 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 07:14:28.114344 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 07:14:28.114491 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:14:28.116772 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 07:14:28.118268 systemd[1]: Stopped target basic.target - Basic System. Aug 13 07:14:28.120248 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 07:14:28.122261 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:14:28.124253 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 07:14:28.126384 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 07:14:28.128485 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:14:28.130699 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 07:14:28.132644 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 07:14:28.134765 systemd[1]: Stopped target swap.target - Swaps. Aug 13 07:14:28.136522 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 07:14:28.136651 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:14:28.138911 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Aug 13 07:14:28.140322 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:14:28.142514 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 07:14:28.142668 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:14:28.144549 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 07:14:28.144715 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 07:14:28.147048 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 07:14:28.147213 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:14:28.148955 systemd[1]: Stopped target paths.target - Path Units. Aug 13 07:14:28.150588 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 07:14:28.156055 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:14:28.158081 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 07:14:28.159941 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 07:14:28.162256 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 07:14:28.162371 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:14:28.164093 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 07:14:28.164210 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:14:28.186820 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 07:14:28.187034 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:14:28.188715 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 07:14:28.188865 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 07:14:28.199203 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 07:14:28.201095 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 07:14:28.201257 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:14:28.204181 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 07:14:28.205229 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 07:14:28.205365 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:14:28.207531 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 07:14:28.207755 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:14:28.212685 ignition[1012]: INFO : Ignition 2.19.0 Aug 13 07:14:28.212685 ignition[1012]: INFO : Stage: umount Aug 13 07:14:28.212685 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:14:28.212685 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:14:28.219841 ignition[1012]: INFO : umount: umount passed Aug 13 07:14:28.219841 ignition[1012]: INFO : Ignition finished successfully Aug 13 07:14:28.214314 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 07:14:28.214452 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 07:14:28.216130 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 07:14:28.216240 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 07:14:28.218786 systemd[1]: Stopped target network.target - Network. 
Aug 13 07:14:28.219852 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 07:14:28.219914 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 07:14:28.221645 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 07:14:28.221696 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 07:14:28.223522 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 07:14:28.223573 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 07:14:28.225409 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 07:14:28.225458 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 07:14:28.226713 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 07:14:28.228861 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 07:14:28.231008 systemd-networkd[786]: eth0: DHCPv6 lease lost Aug 13 07:14:28.233383 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 07:14:28.233521 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 07:14:28.236699 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 07:14:28.237175 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 07:14:28.237217 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:14:28.245051 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 07:14:28.245299 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 07:14:28.245355 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:14:28.245751 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:14:28.246185 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 07:14:28.246309 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 07:14:28.254401 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:14:28.254519 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:14:28.256627 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 07:14:28.256681 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 07:14:28.258727 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 07:14:28.258782 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:14:28.278184 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 07:14:28.278389 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:14:28.282165 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 07:14:28.282223 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 07:14:28.284000 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 07:14:28.284046 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:14:28.285937 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 07:14:28.286008 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:14:28.288400 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 07:14:28.288453 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Aug 13 07:14:28.289984 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:14:28.290039 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:14:28.300148 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 07:14:28.301238 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 07:14:28.301319 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:14:28.303583 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:14:28.303664 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:14:28.306231 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 07:14:28.306386 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 07:14:28.308756 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 07:14:28.308879 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 07:14:28.623155 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 07:14:28.624172 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 07:14:28.626209 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 07:14:28.628206 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 07:14:28.628271 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 07:14:28.637105 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 07:14:28.646328 systemd[1]: Switching root. Aug 13 07:14:28.678693 systemd-journald[193]: Journal stopped Aug 13 07:14:30.055420 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Aug 13 07:14:30.055609 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 07:14:30.055636 kernel: SELinux: policy capability open_perms=1 Aug 13 07:14:30.055656 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 07:14:30.055668 kernel: SELinux: policy capability always_check_network=0 Aug 13 07:14:30.055679 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 07:14:30.055691 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 07:14:30.055709 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 07:14:30.055726 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 07:14:30.055738 kernel: audit: type=1403 audit(1755069269.228:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 07:14:30.055751 systemd[1]: Successfully loaded SELinux policy in 42.678ms. Aug 13 07:14:30.055777 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.497ms. Aug 13 07:14:30.055813 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:14:30.055842 systemd[1]: Detected virtualization kvm. Aug 13 07:14:30.055854 systemd[1]: Detected architecture x86-64. Aug 13 07:14:30.055866 systemd[1]: Detected first boot. Aug 13 07:14:30.055883 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:14:30.055895 zram_generator::config[1057]: No configuration found. Aug 13 07:14:30.055920 systemd[1]: Populated /etc with preset unit settings. 
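"Initializing machine ID from VM UUID" above is systemd's first-boot path for virtual machines: instead of generating a random machine ID it derives one from the hypervisor-provided SMBIOS product UUID. A sketch of where that UUID comes from (the sysfs path is the standard DMI export, an assumption here rather than something this log prints):

    # Requires root: product_uuid is readable only by root on most systems.
    with open("/sys/class/dmi/id/product_uuid") as f:
        print(f.read().strip())  # the VM UUID QEMU passes via SMBIOS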
Aug 13 07:14:30.055934 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 07:14:30.055991 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 07:14:30.056022 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 07:14:30.056048 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 07:14:30.058145 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 07:14:30.058209 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 07:14:30.058243 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 07:14:30.058266 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 07:14:30.058303 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 07:14:30.058328 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 07:14:30.058342 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 07:14:30.058354 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:14:30.058368 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:14:30.058381 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 07:14:30.058398 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 07:14:30.058412 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 07:14:30.058424 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:14:30.058436 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 07:14:30.058458 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:14:30.058475 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 07:14:30.058493 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 07:14:30.058514 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 07:14:30.058538 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 07:14:30.058566 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:14:30.058581 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:14:30.058593 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:14:30.058606 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:14:30.058618 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 07:14:30.058630 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 07:14:30.058642 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:14:30.058662 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:14:30.058687 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:14:30.058717 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 07:14:30.058750 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
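The slice names above look odd ("system-addon\x2dconfig.slice") because in slice unit names a dash separates hierarchy levels, so a literal dash inside a component such as "addon-config" is escaped as \x2d. A minimal re-implementation of that one escaping rule (the real systemd-escape also hex-escapes other characters):

    def escape_component(name: str) -> str:
        # Escape literal dashes so they are not read as hierarchy separators.
        return name.replace("-", "\\x2d")

    print("system-" + escape_component("addon-config") + ".slice")
    # -> system-addon\x2dconfig.slice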
Aug 13 07:14:30.058771 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 07:14:30.058784 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 07:14:30.058797 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:14:30.058809 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 07:14:30.058833 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 07:14:30.058862 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 07:14:30.058886 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 07:14:30.058901 systemd[1]: Reached target machines.target - Containers. Aug 13 07:14:30.058913 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 07:14:30.058926 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:14:30.058938 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:14:30.058950 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 07:14:30.059010 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:14:30.059023 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:14:30.059038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:14:30.059050 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 07:14:30.059062 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:14:30.059075 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 07:14:30.059089 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 07:14:30.059115 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 07:14:30.059133 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 07:14:30.059154 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 07:14:30.059175 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:14:30.059194 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:14:30.059219 kernel: fuse: init (API version 7.39) Aug 13 07:14:30.059242 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 07:14:30.059256 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 07:14:30.059272 kernel: loop: module loaded Aug 13 07:14:30.059288 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:14:30.059301 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 07:14:30.059313 systemd[1]: Stopped verity-setup.service. Aug 13 07:14:30.059326 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:14:30.059357 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 07:14:30.059371 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
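The run of modprobe@*.service starts above is a single template unit instantiated once per module: everything after the "@" is the instance name, substituted into the unit's ExecStart via the %i specifier. An illustration of the mapping (the -abq flags match the modprobe@.service file systemd ships, an assumption rather than something this log shows):

    for module in ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]:
        print(f"modprobe@{module}.service -> ExecStart=/sbin/modprobe -abq {module}")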
Aug 13 07:14:30.059386 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 07:14:30.059398 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 07:14:30.059414 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 07:14:30.059427 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 07:14:30.059439 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:14:30.059458 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 07:14:30.059472 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 07:14:30.059485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:14:30.059505 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:14:30.059518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:14:30.059533 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:14:30.059567 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 07:14:30.059579 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 07:14:30.059602 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:14:30.059627 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:14:30.059642 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:14:30.059659 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 07:14:30.059739 systemd-journald[1122]: Collecting audit messages is disabled. Aug 13 07:14:30.060457 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 07:14:30.060474 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 07:14:30.060486 kernel: ACPI: bus type drm_connector registered Aug 13 07:14:30.060499 systemd-journald[1122]: Journal started Aug 13 07:14:30.060528 systemd-journald[1122]: Runtime Journal (/run/log/journal/76e78fa8f0a04bea91028e6648146400) is 6.0M, max 48.3M, 42.2M free. Aug 13 07:14:30.065249 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 07:14:29.760726 systemd[1]: Queued start job for default target multi-user.target. Aug 13 07:14:29.786428 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 07:14:29.787073 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 07:14:30.070984 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 07:14:30.074333 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 07:14:30.074508 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:14:30.079022 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 07:14:30.089031 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 07:14:30.095051 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 07:14:30.098988 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:14:30.104016 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Aug 13 07:14:30.107920 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:14:30.115154 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 07:14:30.120146 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:14:30.130983 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:14:30.143224 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 07:14:30.146012 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:14:30.148499 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 07:14:30.156158 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:14:30.156351 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:14:30.158443 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 07:14:30.159975 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 07:14:30.161782 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 07:14:30.163504 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 07:14:30.172598 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:14:30.175219 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:14:30.180032 kernel: loop0: detected capacity change from 0 to 224512 Aug 13 07:14:30.189365 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 07:14:30.255459 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 07:14:30.260094 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 07:14:30.263220 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 07:14:30.268393 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 07:14:30.267140 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 07:14:30.271079 systemd-journald[1122]: Time spent on flushing to /var/log/journal/76e78fa8f0a04bea91028e6648146400 is 33.887ms for 1006 entries. Aug 13 07:14:30.271079 systemd-journald[1122]: System Journal (/var/log/journal/76e78fa8f0a04bea91028e6648146400) is 8.0M, max 195.6M, 187.6M free. Aug 13 07:14:30.328385 systemd-journald[1122]: Received client request to flush runtime journal. Aug 13 07:14:30.328445 kernel: loop1: detected capacity change from 0 to 142488 Aug 13 07:14:30.307274 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 07:14:30.319393 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 07:14:30.321482 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 07:14:30.330810 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 07:14:30.332601 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 07:14:30.343331 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Aug 13 07:14:30.415017 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Aug 13 07:14:30.415047 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Aug 13 07:14:30.427053 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:14:30.428779 kernel: loop2: detected capacity change from 0 to 140768 Aug 13 07:14:30.466008 kernel: loop3: detected capacity change from 0 to 224512 Aug 13 07:14:30.504014 kernel: loop4: detected capacity change from 0 to 142488 Aug 13 07:14:30.520009 kernel: loop5: detected capacity change from 0 to 140768 Aug 13 07:14:30.532316 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 13 07:14:30.533220 (sd-merge)[1195]: Merged extensions into '/usr'. Aug 13 07:14:30.538198 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 07:14:30.538218 systemd[1]: Reloading... Aug 13 07:14:30.603402 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 07:14:30.645004 zram_generator::config[1224]: No configuration found. Aug 13 07:14:30.769487 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:14:30.842317 systemd[1]: Reloading finished in 303 ms. Aug 13 07:14:30.912720 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 07:14:30.915206 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 07:14:30.933264 systemd[1]: Starting ensure-sysext.service... Aug 13 07:14:30.935942 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:14:30.993310 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Aug 13 07:14:30.993327 systemd[1]: Reloading... Aug 13 07:14:31.028700 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 07:14:31.029553 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 07:14:31.030703 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 07:14:31.031169 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Aug 13 07:14:31.031317 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Aug 13 07:14:31.038173 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:14:31.041277 systemd-tmpfiles[1260]: Skipping /boot Aug 13 07:14:31.058994 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:14:31.059151 systemd-tmpfiles[1260]: Skipping /boot Aug 13 07:14:31.061950 zram_generator::config[1293]: No configuration found. Aug 13 07:14:31.165354 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:14:31.216646 systemd[1]: Reloading finished in 222 ms. Aug 13 07:14:31.236080 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 07:14:31.250411 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
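The sd-merge lines above are systemd-sysext at work: each extension image (the loop0..loop5 capacity changes earlier) is attached, the /usr trees of 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' are stacked over the host /usr as a read-only overlayfs, and systemd then reloads so the newly visible units are picked up. Roughly equivalent to the following (the mount points under /run are illustrative, not taken from the log):

    import subprocess

    lowers = [
        "/run/sysext/kubernetes/usr",        # hypothetical per-extension mounts
        "/run/sysext/docker-flatcar/usr",
        "/run/sysext/containerd-flatcar/usr",
        "/usr",                              # host tree as the lowest layer
    ]
    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", "lowerdir=" + ":".join(lowers), "/usr"],
        check=True,
    )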
Aug 13 07:14:31.262350 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:14:31.266050 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 07:14:31.269762 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 07:14:31.279048 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:14:31.288245 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:14:31.292067 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 07:14:31.298034 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:14:31.298342 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:14:31.306298 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:14:31.309866 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:14:31.315727 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:14:31.317018 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:14:31.320863 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 07:14:31.321898 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:14:31.323130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:14:31.323368 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:14:31.326467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:14:31.326662 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:14:31.328516 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 07:14:31.338110 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Aug 13 07:14:31.339179 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:14:31.339421 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:14:31.343887 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:14:31.344227 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:14:31.352405 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:14:31.356160 augenrules[1355]: No rules Aug 13 07:14:31.356905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:14:31.363000 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:14:31.364154 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:14:31.369052 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 07:14:31.370187 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 07:14:31.371743 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:14:31.374824 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:14:31.376618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:14:31.376804 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:14:31.378815 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:14:31.379005 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:14:31.381218 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:14:31.381470 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:14:31.383637 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 07:14:31.389421 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 07:14:31.400699 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 07:14:31.402730 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 07:14:31.417864 systemd[1]: Finished ensure-sysext.service. Aug 13 07:14:31.426158 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 07:14:31.426355 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:14:31.426638 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:14:31.434911 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:14:31.440255 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:14:31.442302 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:14:31.447690 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:14:31.448848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:14:31.454141 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:14:31.460197 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 07:14:31.462083 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:14:31.462127 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:14:31.462792 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:14:31.463030 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:14:31.464577 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:14:31.464766 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:14:31.466182 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:14:31.466370 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:14:31.473329 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Aug 13 07:14:31.484094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:14:31.484302 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:14:31.486769 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:14:31.495673 systemd-resolved[1329]: Positive Trust Anchors: Aug 13 07:14:31.495992 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:14:31.496092 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:14:31.497982 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 07:14:31.501684 systemd-resolved[1329]: Defaulting to hostname 'linux'. Aug 13 07:14:31.505726 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:14:31.582045 kernel: ACPI: button: Power Button [PWRF] Aug 13 07:14:31.584402 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:14:31.593036 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1386) Aug 13 07:14:31.608163 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 07:14:31.611454 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 07:14:31.622985 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 07:14:31.636595 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:14:31.638106 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Aug 13 07:14:31.638396 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 07:14:31.640722 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 07:14:31.640950 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 07:14:31.641876 systemd-networkd[1400]: lo: Link UP Aug 13 07:14:31.642056 systemd-networkd[1400]: lo: Gained carrier Aug 13 07:14:31.646554 systemd-networkd[1400]: Enumeration completed Aug 13 07:14:31.648540 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 07:14:31.648570 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:14:31.648574 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:14:31.650850 systemd-networkd[1400]: eth0: Link UP Aug 13 07:14:31.650909 systemd-networkd[1400]: eth0: Gained carrier Aug 13 07:14:31.651507 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:14:31.658663 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
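The positive trust anchor systemd-resolved installs above is the IANA root-zone KSK expressed as a DS record, and the negative anchors are the private and special-use domains DNSSEC validation should skip. Decoding the DS fields from the logged value:

    ds = ("20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    tag, alg, dtype, digest = ds.split()
    # Field meanings per the DNSSEC registries: algorithm 8 is RSASHA256,
    # digest type 2 is SHA-256; 20326 is the key tag of the 2017 root KSK.
    print(f"key tag {tag}, algorithm {alg} (RSASHA256), digest type {dtype} (SHA-256)")
    print("digest:", digest)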
Aug 13 07:14:31.661663 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:14:31.663250 systemd[1]: Reached target network.target - Network. Aug 13 07:14:31.673987 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 07:14:31.675285 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 07:14:31.678905 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 07:14:31.679182 systemd-networkd[1400]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 07:14:31.683542 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Aug 13 07:14:32.111064 systemd-resolved[1329]: Clock change detected. Flushing caches. Aug 13 07:14:32.111134 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 07:14:32.111176 systemd-timesyncd[1403]: Initial clock synchronization to Wed 2025-08-13 07:14:32.111027 UTC. Aug 13 07:14:32.120203 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:14:32.120469 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:14:32.174597 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:14:32.280362 kernel: kvm_amd: TSC scaling supported Aug 13 07:14:32.280428 kernel: kvm_amd: Nested Virtualization enabled Aug 13 07:14:32.280471 kernel: kvm_amd: Nested Paging enabled Aug 13 07:14:32.280485 kernel: kvm_amd: LBR virtualization supported Aug 13 07:14:32.282160 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Aug 13 07:14:32.282180 kernel: kvm_amd: Virtual GIF supported Aug 13 07:14:32.303882 kernel: EDAC MC: Ver: 3.0.0 Aug 13 07:14:32.327477 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:14:32.339195 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 07:14:32.352326 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 07:14:32.363482 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:14:32.401793 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 07:14:32.403552 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:14:32.404832 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:14:32.406188 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 07:14:32.407624 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 07:14:32.409293 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 07:14:32.411111 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 07:14:32.412594 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 07:14:32.414026 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 07:14:32.414065 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:14:32.415150 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:14:32.417278 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Aug 13 07:14:32.420802 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 07:14:32.438803 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 07:14:32.441285 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 07:14:32.442885 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 07:14:32.444052 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:14:32.445042 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:14:32.446019 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:14:32.446046 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:14:32.447105 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 07:14:32.449216 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 07:14:32.451961 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 07:14:32.457278 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:14:32.458133 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 07:14:32.461153 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 07:14:32.462428 jq[1440]: false Aug 13 07:14:32.462760 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:14:32.467001 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 07:14:32.470078 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 07:14:32.474935 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:14:32.481250 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 07:14:32.483477 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 07:14:32.484059 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:14:32.484888 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:14:32.488979 extend-filesystems[1441]: Found loop3 Aug 13 07:14:32.488979 extend-filesystems[1441]: Found loop4 Aug 13 07:14:32.488979 extend-filesystems[1441]: Found loop5 Aug 13 07:14:32.488979 extend-filesystems[1441]: Found sr0 Aug 13 07:14:32.488979 extend-filesystems[1441]: Found vda Aug 13 07:14:32.488979 extend-filesystems[1441]: Found vda1 Aug 13 07:14:32.488979 extend-filesystems[1441]: Found vda2 Aug 13 07:14:32.488979 extend-filesystems[1441]: Found vda3 Aug 13 07:14:32.488979 extend-filesystems[1441]: Found usr Aug 13 07:14:32.488979 extend-filesystems[1441]: Found vda4 Aug 13 07:14:32.488979 extend-filesystems[1441]: Found vda6 Aug 13 07:14:32.488979 extend-filesystems[1441]: Found vda7 Aug 13 07:14:32.488979 extend-filesystems[1441]: Found vda9 Aug 13 07:14:32.488979 extend-filesystems[1441]: Checking size of /dev/vda9 Aug 13 07:14:32.504605 extend-filesystems[1441]: Resized partition /dev/vda9 Aug 13 07:14:32.489010 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Aug 13 07:14:32.489897 dbus-daemon[1439]: [system] SELinux support is enabled Aug 13 07:14:32.491568 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:14:32.509482 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:14:32.494723 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 07:14:32.500055 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:14:32.500345 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:14:32.508538 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:14:32.508782 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 07:14:32.510335 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:14:32.510562 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 07:14:32.516886 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 07:14:32.523603 jq[1454]: true Aug 13 07:14:32.525120 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:14:32.525166 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:14:32.527345 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:14:32.527372 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:14:32.541570 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1366) Aug 13 07:14:32.541627 update_engine[1453]: I20250813 07:14:32.541147 1453 main.cc:92] Flatcar Update Engine starting Aug 13 07:14:32.543876 update_engine[1453]: I20250813 07:14:32.543824 1453 update_check_scheduler.cc:74] Next update check in 7m13s Aug 13 07:14:32.544452 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:14:32.545363 tar[1461]: linux-amd64/LICENSE Aug 13 07:14:32.547519 tar[1461]: linux-amd64/helm Aug 13 07:14:32.547601 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:14:32.549310 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 07:14:32.576958 jq[1472]: true Aug 13 07:14:32.561226 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:14:32.594893 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 07:14:32.594893 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 07:14:32.594893 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 13 07:14:32.580131 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 07:14:32.599809 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Aug 13 07:14:32.580150 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:14:32.581626 systemd-logind[1449]: New seat seat0. Aug 13 07:14:32.598428 systemd[1]: Started systemd-logind.service - User Login Management. 
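The resize messages above count in 4 KiB ext4 blocks; translated to bytes, the root filesystem grew from about 2.1 GiB to about 7.1 GiB. A quick check, with both block counts taken from the EXT4-fs lines:

```python
BLOCK = 4096                    # ext4 block size, from "(4k) blocks" above
old, new = 553_472, 1_864_699   # block counts from the EXT4-fs resize messages

to_gib = lambda blocks: blocks * BLOCK / 2**30
print(f"{to_gib(old):.2f} GiB -> {to_gib(new):.2f} GiB")  # 2.11 GiB -> 7.11 GiB
```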
Aug 13 07:14:32.603142 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:14:32.604043 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:14:32.681498 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:14:32.686765 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:14:32.688613 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:14:32.691902 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 13 07:14:32.853189 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:14:32.930593 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:14:32.939122 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:14:32.956638 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:14:32.956879 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:14:32.965589 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:14:33.011893 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:14:33.021304 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:14:33.023369 containerd[1473]: time="2025-08-13T07:14:33.023225329Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:14:33.032148 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:14:33.033853 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:14:33.050388 containerd[1473]: time="2025-08-13T07:14:33.050329079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:14:33.052674 containerd[1473]: time="2025-08-13T07:14:33.052623240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:14:33.052674 containerd[1473]: time="2025-08-13T07:14:33.052658827Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:14:33.052729 containerd[1473]: time="2025-08-13T07:14:33.052677712Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:14:33.052982 containerd[1473]: time="2025-08-13T07:14:33.052948009Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:14:33.052982 containerd[1473]: time="2025-08-13T07:14:33.052974198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:14:33.053086 containerd[1473]: time="2025-08-13T07:14:33.053054859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:14:33.053086 containerd[1473]: time="2025-08-13T07:14:33.053073474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:14:33.053326 containerd[1473]: time="2025-08-13T07:14:33.053291823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:14:33.053326 containerd[1473]: time="2025-08-13T07:14:33.053314035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:14:33.053367 containerd[1473]: time="2025-08-13T07:14:33.053326859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:14:33.053367 containerd[1473]: time="2025-08-13T07:14:33.053337148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:14:33.053494 containerd[1473]: time="2025-08-13T07:14:33.053462283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:14:33.053873 containerd[1473]: time="2025-08-13T07:14:33.053833749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:14:33.054015 containerd[1473]: time="2025-08-13T07:14:33.053987878Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:14:33.054015 containerd[1473]: time="2025-08-13T07:14:33.054006503Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:14:33.054125 containerd[1473]: time="2025-08-13T07:14:33.054107372Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:14:33.054188 containerd[1473]: time="2025-08-13T07:14:33.054171492Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:14:33.148728 containerd[1473]: time="2025-08-13T07:14:33.148671336Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:14:33.148829 containerd[1473]: time="2025-08-13T07:14:33.148762166Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:14:33.148829 containerd[1473]: time="2025-08-13T07:14:33.148782715Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:14:33.148829 containerd[1473]: time="2025-08-13T07:14:33.148800508Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:14:33.148829 containerd[1473]: time="2025-08-13T07:14:33.148814564Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:14:33.149099 containerd[1473]: time="2025-08-13T07:14:33.149074291Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:14:33.149421 containerd[1473]: time="2025-08-13T07:14:33.149397086Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:14:33.149576 containerd[1473]: time="2025-08-13T07:14:33.149541387Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Aug 13 07:14:33.149576 containerd[1473]: time="2025-08-13T07:14:33.149561284Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:14:33.149576 containerd[1473]: time="2025-08-13T07:14:33.149574038Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:14:33.149646 containerd[1473]: time="2025-08-13T07:14:33.149587393Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:14:33.149646 containerd[1473]: time="2025-08-13T07:14:33.149602020Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:14:33.149646 containerd[1473]: time="2025-08-13T07:14:33.149617780Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:14:33.149646 containerd[1473]: time="2025-08-13T07:14:33.149631105Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:14:33.149738 containerd[1473]: time="2025-08-13T07:14:33.149659198Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:14:33.149738 containerd[1473]: time="2025-08-13T07:14:33.149675318Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:14:33.149738 containerd[1473]: time="2025-08-13T07:14:33.149689675Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:14:33.149738 containerd[1473]: time="2025-08-13T07:14:33.149701717Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:14:33.149738 containerd[1473]: time="2025-08-13T07:14:33.149728257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.149829 containerd[1473]: time="2025-08-13T07:14:33.149742043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.149829 containerd[1473]: time="2025-08-13T07:14:33.149755238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.149829 containerd[1473]: time="2025-08-13T07:14:33.149767120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.149829 containerd[1473]: time="2025-08-13T07:14:33.149782048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.149829 containerd[1473]: time="2025-08-13T07:14:33.149794441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.149829 containerd[1473]: time="2025-08-13T07:14:33.149806113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.149829 containerd[1473]: time="2025-08-13T07:14:33.149818837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.149829 containerd[1473]: time="2025-08-13T07:14:33.149830779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Aug 13 07:14:33.150018 containerd[1473]: time="2025-08-13T07:14:33.149849034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.150018 containerd[1473]: time="2025-08-13T07:14:33.149877226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.150018 containerd[1473]: time="2025-08-13T07:14:33.149893196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.150018 containerd[1473]: time="2025-08-13T07:14:33.149905930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.150018 containerd[1473]: time="2025-08-13T07:14:33.149925126Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:14:33.150018 containerd[1473]: time="2025-08-13T07:14:33.149951556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.150018 containerd[1473]: time="2025-08-13T07:14:33.149987964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.150018 containerd[1473]: time="2025-08-13T07:14:33.150002611Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:14:33.150174 containerd[1473]: time="2025-08-13T07:14:33.150062934Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:14:33.150174 containerd[1473]: time="2025-08-13T07:14:33.150104402Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:14:33.150174 containerd[1473]: time="2025-08-13T07:14:33.150116725Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:14:33.150174 containerd[1473]: time="2025-08-13T07:14:33.150150368Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:14:33.150174 containerd[1473]: time="2025-08-13T07:14:33.150160708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:14:33.150277 containerd[1473]: time="2025-08-13T07:14:33.150194040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:14:33.150277 containerd[1473]: time="2025-08-13T07:14:33.150206944Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:14:33.150277 containerd[1473]: time="2025-08-13T07:14:33.150217655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 07:14:33.150594 containerd[1473]: time="2025-08-13T07:14:33.150520883Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:14:33.150594 containerd[1473]: time="2025-08-13T07:14:33.150577469Z" level=info msg="Connect containerd service" Aug 13 07:14:33.150594 containerd[1473]: time="2025-08-13T07:14:33.150620079Z" level=info msg="using legacy CRI server" Aug 13 07:14:33.150893 containerd[1473]: time="2025-08-13T07:14:33.150631520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:14:33.150893 containerd[1473]: time="2025-08-13T07:14:33.150755202Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:14:33.151532 containerd[1473]: time="2025-08-13T07:14:33.151481644Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:14:33.151813 
containerd[1473]: time="2025-08-13T07:14:33.151723017Z" level=info msg="Start subscribing containerd event" Aug 13 07:14:33.151813 containerd[1473]: time="2025-08-13T07:14:33.151829406Z" level=info msg="Start recovering state" Aug 13 07:14:33.151989 containerd[1473]: time="2025-08-13T07:14:33.151951645Z" level=info msg="Start event monitor" Aug 13 07:14:33.151989 containerd[1473]: time="2025-08-13T07:14:33.151961684Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:14:33.152028 containerd[1473]: time="2025-08-13T07:14:33.151972675Z" level=info msg="Start snapshots syncer" Aug 13 07:14:33.152028 containerd[1473]: time="2025-08-13T07:14:33.152014954Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:14:33.152028 containerd[1473]: time="2025-08-13T07:14:33.152026125Z" level=info msg="Start streaming server" Aug 13 07:14:33.152307 containerd[1473]: time="2025-08-13T07:14:33.152025333Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:14:33.152307 containerd[1473]: time="2025-08-13T07:14:33.152298736Z" level=info msg="containerd successfully booted in 0.130419s" Aug 13 07:14:33.152472 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:14:33.195580 tar[1461]: linux-amd64/README.md Aug 13 07:14:33.218314 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:14:33.411166 systemd-networkd[1400]: eth0: Gained IPv6LL Aug 13 07:14:33.415661 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:14:33.417576 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:14:33.431182 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 13 07:14:33.433844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:14:33.436273 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:14:33.459266 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 13 07:14:33.459535 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 13 07:14:33.461297 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:14:33.464733 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:14:34.631290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:14:34.633472 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:14:34.635825 systemd[1]: Startup finished in 1.172s (kernel) + 6.506s (initrd) + 5.020s (userspace) = 12.699s. Aug 13 07:14:34.636106 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:14:35.203617 kubelet[1552]: E0813 07:14:35.203518 1552 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:14:35.207691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:14:35.207934 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:14:35.208362 systemd[1]: kubelet.service: Consumed 1.622s CPU time. 
Aug 13 07:14:36.872694 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:14:36.874077 systemd[1]: Started sshd@0-10.0.0.120:22-10.0.0.1:49186.service - OpenSSH per-connection server daemon (10.0.0.1:49186). Aug 13 07:14:36.926822 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 49186 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:14:36.929105 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:14:36.938942 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:14:36.946133 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:14:36.948384 systemd-logind[1449]: New session 1 of user core. Aug 13 07:14:36.959652 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:14:36.962697 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:14:36.972212 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:14:37.097743 systemd[1569]: Queued start job for default target default.target. Aug 13 07:14:37.109305 systemd[1569]: Created slice app.slice - User Application Slice. Aug 13 07:14:37.109333 systemd[1569]: Reached target paths.target - Paths. Aug 13 07:14:37.109347 systemd[1569]: Reached target timers.target - Timers. Aug 13 07:14:37.111025 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:14:37.125934 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:14:37.126095 systemd[1569]: Reached target sockets.target - Sockets. Aug 13 07:14:37.126114 systemd[1569]: Reached target basic.target - Basic System. Aug 13 07:14:37.126158 systemd[1569]: Reached target default.target - Main User Target. Aug 13 07:14:37.126196 systemd[1569]: Startup finished in 146ms. Aug 13 07:14:37.126500 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:14:37.128284 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:14:37.189711 systemd[1]: Started sshd@1-10.0.0.120:22-10.0.0.1:49188.service - OpenSSH per-connection server daemon (10.0.0.1:49188). Aug 13 07:14:37.227701 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 49188 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:14:37.229654 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:14:37.234080 systemd-logind[1449]: New session 2 of user core. Aug 13 07:14:37.245001 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:14:37.301364 sshd[1580]: pam_unix(sshd:session): session closed for user core Aug 13 07:14:37.313951 systemd[1]: sshd@1-10.0.0.120:22-10.0.0.1:49188.service: Deactivated successfully. Aug 13 07:14:37.316107 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 07:14:37.317549 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Aug 13 07:14:37.318971 systemd[1]: Started sshd@2-10.0.0.120:22-10.0.0.1:49200.service - OpenSSH per-connection server daemon (10.0.0.1:49200). Aug 13 07:14:37.319833 systemd-logind[1449]: Removed session 2. 
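The SHA256:CMfoLh… string in the "Accepted publickey" lines is OpenSSH's key fingerprint: the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing padding stripped. A minimal re-computation (the key below is a hypothetical, truncated stand-in, not the key from this log):

```python
import base64, hashlib

def ssh_fingerprint(pubkey_line: str) -> str:
    """OpenSSH-style fingerprint, e.g. 'SHA256:CMfoLh...'."""
    blob = base64.b64decode(pubkey_line.split()[1])      # the raw key blob
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Hypothetical key purely for illustration:
print(ssh_fingerprint("ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7 core@host"))
```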
Aug 13 07:14:37.355633 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 49200 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:14:37.357342 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:14:37.361402 systemd-logind[1449]: New session 3 of user core. Aug 13 07:14:37.370994 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:14:37.422302 sshd[1587]: pam_unix(sshd:session): session closed for user core Aug 13 07:14:37.436895 systemd[1]: sshd@2-10.0.0.120:22-10.0.0.1:49200.service: Deactivated successfully. Aug 13 07:14:37.439281 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 07:14:37.442110 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Aug 13 07:14:37.451347 systemd[1]: Started sshd@3-10.0.0.120:22-10.0.0.1:49214.service - OpenSSH per-connection server daemon (10.0.0.1:49214). Aug 13 07:14:37.452422 systemd-logind[1449]: Removed session 3. Aug 13 07:14:37.482530 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 49214 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:14:37.484213 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:14:37.488353 systemd-logind[1449]: New session 4 of user core. Aug 13 07:14:37.497982 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:14:37.554960 sshd[1594]: pam_unix(sshd:session): session closed for user core Aug 13 07:14:37.568089 systemd[1]: sshd@3-10.0.0.120:22-10.0.0.1:49214.service: Deactivated successfully. Aug 13 07:14:37.570037 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:14:37.571819 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:14:37.579168 systemd[1]: Started sshd@4-10.0.0.120:22-10.0.0.1:49220.service - OpenSSH per-connection server daemon (10.0.0.1:49220). Aug 13 07:14:37.580119 systemd-logind[1449]: Removed session 4. Aug 13 07:14:37.613920 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 49220 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:14:37.615629 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:14:37.620001 systemd-logind[1449]: New session 5 of user core. Aug 13 07:14:37.639041 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:14:37.700652 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:14:37.701130 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:14:37.717547 sudo[1604]: pam_unix(sudo:session): session closed for user root Aug 13 07:14:37.719659 sshd[1601]: pam_unix(sshd:session): session closed for user core Aug 13 07:14:37.732761 systemd[1]: sshd@4-10.0.0.120:22-10.0.0.1:49220.service: Deactivated successfully. Aug 13 07:14:37.734599 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:14:37.736367 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:14:37.737853 systemd[1]: Started sshd@5-10.0.0.120:22-10.0.0.1:49226.service - OpenSSH per-connection server daemon (10.0.0.1:49226). Aug 13 07:14:37.738625 systemd-logind[1449]: Removed session 5. 
Aug 13 07:14:37.786347 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 49226 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:14:37.787966 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:14:37.791826 systemd-logind[1449]: New session 6 of user core. Aug 13 07:14:37.800989 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 07:14:37.857966 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:14:37.858348 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:14:37.862779 sudo[1613]: pam_unix(sudo:session): session closed for user root Aug 13 07:14:37.870026 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:14:37.870402 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:14:37.890086 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:14:37.891818 auditctl[1616]: No rules Aug 13 07:14:37.893134 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:14:37.893420 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:14:37.895316 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:14:37.930981 augenrules[1634]: No rules Aug 13 07:14:37.933003 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:14:37.934411 sudo[1612]: pam_unix(sudo:session): session closed for user root Aug 13 07:14:37.936547 sshd[1609]: pam_unix(sshd:session): session closed for user core Aug 13 07:14:37.951250 systemd[1]: sshd@5-10.0.0.120:22-10.0.0.1:49226.service: Deactivated successfully. Aug 13 07:14:37.953810 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:14:37.955943 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:14:37.966210 systemd[1]: Started sshd@6-10.0.0.120:22-10.0.0.1:49236.service - OpenSSH per-connection server daemon (10.0.0.1:49236). Aug 13 07:14:37.967272 systemd-logind[1449]: Removed session 6. Aug 13 07:14:37.999667 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 49236 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:14:38.001457 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:14:38.005529 systemd-logind[1449]: New session 7 of user core. Aug 13 07:14:38.017009 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:14:38.075402 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:14:38.075760 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:14:38.893163 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:14:38.893270 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:14:39.500363 dockerd[1664]: time="2025-08-13T07:14:39.500280164Z" level=info msg="Starting up" Aug 13 07:14:40.226959 systemd[1]: var-lib-docker-metacopy\x2dcheck3640751880-merged.mount: Deactivated successfully. Aug 13 07:14:40.255118 dockerd[1664]: time="2025-08-13T07:14:40.255050148Z" level=info msg="Loading containers: start." 
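Unit names such as var-lib-docker-metacopy\x2dcheck3640751880-merged.mount just above, or systemd-fsck@dev-disk-by\x2dlabel-OEM.service earlier, come from systemd's path escaping: "/" becomes "-", and bytes that would be ambiguous in a unit name, including a literal "-", become \xNN. A rough re-implementation that ignores edge cases such as a leading dot:

```python
def systemd_escape_path(path: str) -> str:
    """Approximate systemd-escape --path: '/' -> '-', odd bytes -> \\xNN."""
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.:":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/OEM"))
# dev-disk-by\x2dlabel-OEM  -- matching the fsck unit earlier in the log
```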
Aug 13 07:14:40.377889 kernel: Initializing XFRM netlink socket Aug 13 07:14:40.469567 systemd-networkd[1400]: docker0: Link UP Aug 13 07:14:40.493924 dockerd[1664]: time="2025-08-13T07:14:40.493787235Z" level=info msg="Loading containers: done." Aug 13 07:14:40.510926 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4076436848-merged.mount: Deactivated successfully. Aug 13 07:14:40.513826 dockerd[1664]: time="2025-08-13T07:14:40.513770887Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:14:40.514299 dockerd[1664]: time="2025-08-13T07:14:40.513932460Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 07:14:40.514299 dockerd[1664]: time="2025-08-13T07:14:40.514085517Z" level=info msg="Daemon has completed initialization" Aug 13 07:14:40.559892 dockerd[1664]: time="2025-08-13T07:14:40.557356164Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:14:40.559105 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 07:14:41.539677 containerd[1473]: time="2025-08-13T07:14:41.539618845Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 07:14:42.256015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394414863.mount: Deactivated successfully. Aug 13 07:14:43.300622 containerd[1473]: time="2025-08-13T07:14:43.300543420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:43.301297 containerd[1473]: time="2025-08-13T07:14:43.301219197Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Aug 13 07:14:43.302270 containerd[1473]: time="2025-08-13T07:14:43.302239349Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:43.305934 containerd[1473]: time="2025-08-13T07:14:43.305895624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:43.306820 containerd[1473]: time="2025-08-13T07:14:43.306786854Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 1.767122514s" Aug 13 07:14:43.306896 containerd[1473]: time="2025-08-13T07:14:43.306825677Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 07:14:43.307564 containerd[1473]: time="2025-08-13T07:14:43.307529116Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 07:14:44.734058 containerd[1473]: time="2025-08-13T07:14:44.733988137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:44.734802 
containerd[1473]: time="2025-08-13T07:14:44.734756487Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Aug 13 07:14:44.735937 containerd[1473]: time="2025-08-13T07:14:44.735912134Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:44.738972 containerd[1473]: time="2025-08-13T07:14:44.738933959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:44.740107 containerd[1473]: time="2025-08-13T07:14:44.740078875Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.432516457s" Aug 13 07:14:44.740107 containerd[1473]: time="2025-08-13T07:14:44.740112899Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 13 07:14:44.740731 containerd[1473]: time="2025-08-13T07:14:44.740687566Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 07:14:45.458269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:14:45.467096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:14:45.680335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:14:45.687262 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:14:46.891218 kubelet[1878]: E0813 07:14:46.891124 1878 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:14:46.898394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:14:46.898659 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:14:46.899034 systemd[1]: kubelet.service: Consumed 1.443s CPU time. 
Aug 13 07:14:49.251716 containerd[1473]: time="2025-08-13T07:14:49.251651021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:49.252943 containerd[1473]: time="2025-08-13T07:14:49.252905473Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Aug 13 07:14:49.254135 containerd[1473]: time="2025-08-13T07:14:49.254083932Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:49.256951 containerd[1473]: time="2025-08-13T07:14:49.256906974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:49.258261 containerd[1473]: time="2025-08-13T07:14:49.258224504Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 4.517502264s" Aug 13 07:14:49.258318 containerd[1473]: time="2025-08-13T07:14:49.258269298Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 13 07:14:49.258844 containerd[1473]: time="2025-08-13T07:14:49.258814200Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 07:14:50.586756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount163450369.mount: Deactivated successfully. 
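containerd prints pull times as Go duration strings, "4.517502264s" above and "594.564687ms" further down. A small converter for the two suffixes that occur in this log:

```python
import re

def go_duration_to_seconds(d: str) -> float:
    """Convert the 's'/'ms' Go duration forms seen in this log to seconds."""
    value, unit = re.fullmatch(r"([0-9.]+)(ms|s)", d).groups()
    return float(value) / (1000.0 if unit == "ms" else 1.0)

print(go_duration_to_seconds("4.517502264s"))   # 4.517502264
print(go_duration_to_seconds("594.564687ms"))   # 0.594564687
```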
Aug 13 07:14:51.643953 containerd[1473]: time="2025-08-13T07:14:51.643838128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:51.653015 containerd[1473]: time="2025-08-13T07:14:51.652964597Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 07:14:51.658292 containerd[1473]: time="2025-08-13T07:14:51.658247911Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:51.663743 containerd[1473]: time="2025-08-13T07:14:51.663704049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:51.664500 containerd[1473]: time="2025-08-13T07:14:51.664447182Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 2.405604059s" Aug 13 07:14:51.664500 containerd[1473]: time="2025-08-13T07:14:51.664494822Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 07:14:51.665121 containerd[1473]: time="2025-08-13T07:14:51.665082874Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 07:14:52.232071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3355976525.mount: Deactivated successfully. 
Aug 13 07:14:53.855818 containerd[1473]: time="2025-08-13T07:14:53.855661927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:53.856542 containerd[1473]: time="2025-08-13T07:14:53.856243688Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 07:14:53.857632 containerd[1473]: time="2025-08-13T07:14:53.857549787Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:53.860787 containerd[1473]: time="2025-08-13T07:14:53.860739767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:53.861982 containerd[1473]: time="2025-08-13T07:14:53.861920250Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.196790818s" Aug 13 07:14:53.862136 containerd[1473]: time="2025-08-13T07:14:53.862003295Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 07:14:53.863014 containerd[1473]: time="2025-08-13T07:14:53.862987550Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:14:54.447306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount724662213.mount: Deactivated successfully. 
Aug 13 07:14:54.452632 containerd[1473]: time="2025-08-13T07:14:54.452583893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:54.453384 containerd[1473]: time="2025-08-13T07:14:54.453311978Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 07:14:54.454578 containerd[1473]: time="2025-08-13T07:14:54.454542725Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:54.456781 containerd[1473]: time="2025-08-13T07:14:54.456731699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:54.457626 containerd[1473]: time="2025-08-13T07:14:54.457585680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 594.564687ms" Aug 13 07:14:54.457672 containerd[1473]: time="2025-08-13T07:14:54.457625675Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:14:54.458218 containerd[1473]: time="2025-08-13T07:14:54.458187217Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 07:14:55.022841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657464421.mount: Deactivated successfully. Aug 13 07:14:57.148991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 07:14:57.379437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:14:57.577182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:14:57.581844 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:14:57.620883 kubelet[2014]: E0813 07:14:57.620578 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:14:57.625121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:14:57.625352 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
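The restart counter lines give away the unit's restart delay: each "Scheduled restart job" fires roughly ten seconds after the preceding failure, consistent with Restart=always plus RestartSec=10 (the value kubeadm's drop-in conventionally sets; the unit file itself is not shown in this log, so that is an assumption):

```python
# (failure time, scheduled-restart time) pairs, in seconds within the
# 07:14 minute, read off the kubelet.service entries above:
pairs = [(35.208, 45.458), (46.899, 57.149)]
for failed, restarted in pairs:
    print(f"restart delay ~{restarted - failed:.2f} s")   # ~10.25 s each
```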
Aug 13 07:14:59.257840 containerd[1473]: time="2025-08-13T07:14:59.257760743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:59.258678 containerd[1473]: time="2025-08-13T07:14:59.258627789Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 13 07:14:59.259967 containerd[1473]: time="2025-08-13T07:14:59.259934759Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:59.263481 containerd[1473]: time="2025-08-13T07:14:59.263451902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:14:59.264655 containerd[1473]: time="2025-08-13T07:14:59.264609372Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.806374295s" Aug 13 07:14:59.264655 containerd[1473]: time="2025-08-13T07:14:59.264646301Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 07:15:01.292221 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:15:01.308092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:15:01.337711 systemd[1]: Reloading requested from client PID 2055 ('systemctl') (unit session-7.scope)... Aug 13 07:15:01.337731 systemd[1]: Reloading... Aug 13 07:15:01.428892 zram_generator::config[2094]: No configuration found. Aug 13 07:15:01.726577 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:15:01.806841 systemd[1]: Reloading finished in 468 ms. Aug 13 07:15:01.865215 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:15:01.865324 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:15:01.865666 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:15:01.867829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:15:02.077907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:15:02.102773 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:15:02.156442 kubelet[2143]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:15:02.156442 kubelet[2143]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Aug 13 07:15:02.156442 kubelet[2143]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:15:02.157155 kubelet[2143]: I0813 07:15:02.156570 2143 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:15:02.403646 kubelet[2143]: I0813 07:15:02.403510 2143 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 07:15:02.403646 kubelet[2143]: I0813 07:15:02.403552 2143 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:15:02.403900 kubelet[2143]: I0813 07:15:02.403846 2143 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 07:15:02.425295 kubelet[2143]: E0813 07:15:02.425227 2143 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:15:02.426694 kubelet[2143]: I0813 07:15:02.426663 2143 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:15:02.436247 kubelet[2143]: E0813 07:15:02.436189 2143 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:15:02.436247 kubelet[2143]: I0813 07:15:02.436220 2143 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:15:02.442167 kubelet[2143]: I0813 07:15:02.442116 2143 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:15:02.443947 kubelet[2143]: I0813 07:15:02.443877 2143 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:15:02.444095 kubelet[2143]: I0813 07:15:02.443916 2143 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:15:02.444285 kubelet[2143]: I0813 07:15:02.444099 2143 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:15:02.444285 kubelet[2143]: I0813 07:15:02.444111 2143 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 07:15:02.444365 kubelet[2143]: I0813 07:15:02.444340 2143 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:15:02.447840 kubelet[2143]: I0813 07:15:02.447792 2143 kubelet.go:446] "Attempting to sync node with API server" Aug 13 07:15:02.447840 kubelet[2143]: I0813 07:15:02.447826 2143 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:15:02.447840 kubelet[2143]: I0813 07:15:02.447845 2143 kubelet.go:352] "Adding apiserver pod source" Aug 13 07:15:02.447996 kubelet[2143]: I0813 07:15:02.447857 2143 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:15:02.450847 kubelet[2143]: I0813 07:15:02.450793 2143 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:15:02.451224 kubelet[2143]: I0813 07:15:02.451193 2143 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:15:02.451885 kubelet[2143]: W0813 07:15:02.451849 2143 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 13 07:15:02.456359 kubelet[2143]: I0813 07:15:02.456324 2143 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:15:02.456359 kubelet[2143]: I0813 07:15:02.456358 2143 server.go:1287] "Started kubelet" Aug 13 07:15:02.458939 kubelet[2143]: W0813 07:15:02.458103 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Aug 13 07:15:02.458939 kubelet[2143]: E0813 07:15:02.458168 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:15:02.458939 kubelet[2143]: W0813 07:15:02.458329 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Aug 13 07:15:02.458939 kubelet[2143]: E0813 07:15:02.458382 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:15:02.459350 kubelet[2143]: I0813 07:15:02.459328 2143 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:15:02.459424 kubelet[2143]: I0813 07:15:02.459372 2143 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:15:02.460776 kubelet[2143]: I0813 07:15:02.460731 2143 server.go:479] "Adding debug handlers to kubelet server" Aug 13 07:15:02.461629 kubelet[2143]: I0813 07:15:02.459302 2143 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:15:02.461935 kubelet[2143]: I0813 07:15:02.461912 2143 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:15:02.462157 kubelet[2143]: I0813 07:15:02.462122 2143 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:15:02.462709 kubelet[2143]: E0813 07:15:02.462674 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:15:02.462770 kubelet[2143]: I0813 07:15:02.462716 2143 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:15:02.463452 kubelet[2143]: I0813 07:15:02.462843 2143 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:15:02.463452 kubelet[2143]: I0813 07:15:02.463362 2143 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:15:02.464059 kubelet[2143]: E0813 07:15:02.464032 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="200ms" Aug 13 07:15:02.464129 kubelet[2143]: W0813 07:15:02.464095 2143 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Aug 13 07:15:02.464155 kubelet[2143]: E0813 07:15:02.464127 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:15:02.464921 kubelet[2143]: I0813 07:15:02.464414 2143 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:15:02.464921 kubelet[2143]: I0813 07:15:02.464492 2143 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:15:02.465655 kubelet[2143]: E0813 07:15:02.465616 2143 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:15:02.465655 kubelet[2143]: I0813 07:15:02.465650 2143 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:15:02.465853 kubelet[2143]: E0813 07:15:02.463775 2143 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.120:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.120:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b423b591a56a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:15:02.456342181 +0000 UTC m=+0.347606048,LastTimestamp:2025-08-13 07:15:02.456342181 +0000 UTC m=+0.347606048,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:15:02.493892 kubelet[2143]: I0813 07:15:02.493789 2143 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:15:02.495297 kubelet[2143]: I0813 07:15:02.495254 2143 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 07:15:02.495297 kubelet[2143]: I0813 07:15:02.495299 2143 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 07:15:02.495397 kubelet[2143]: I0813 07:15:02.495326 2143 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
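Note the retry interval on the "Failed to ensure lease exists" errors: it starts at 200ms here and doubles on each failure (400ms, 800ms, 1.6s, 3.2s later in the log), the classic capped exponential backoff used while the API server at 10.0.0.120:6443 is still refusing connections. Below is a generic Go sketch of the pattern; the 7s cap, the attempt limit, and the absence of jitter are illustrative assumptions, not the kubelet's exact parameters.

```go
// Capped exponential backoff while dialing an endpoint that refuses
// connections, the pattern behind the doubling lease-retry intervals
// above. Sketch only: cap, attempt limit, and lack of jitter are
// arbitrary choices for the example.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithBackoff(addr string, initial, maxInterval time.Duration, maxAttempts int) (net.Conn, error) {
	interval := initial
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, interval)
		time.Sleep(interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
	return nil, fmt.Errorf("gave up on %s after %d attempts", addr, maxAttempts)
}

func main() {
	// 10.0.0.120:6443 is the control-plane endpoint from the log.
	if _, err := dialWithBackoff("10.0.0.120:6443", 200*time.Millisecond, 7*time.Second, 5); err != nil {
		fmt.Println(err)
	}
}
```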
Aug 13 07:15:02.495397 kubelet[2143]: I0813 07:15:02.495335 2143 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 07:15:02.495397 kubelet[2143]: E0813 07:15:02.495388 2143 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:15:02.500179 kubelet[2143]: W0813 07:15:02.500143 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Aug 13 07:15:02.500309 kubelet[2143]: E0813 07:15:02.500183 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:15:02.501443 kubelet[2143]: I0813 07:15:02.501401 2143 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:15:02.501443 kubelet[2143]: I0813 07:15:02.501417 2143 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:15:02.501443 kubelet[2143]: I0813 07:15:02.501434 2143 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:15:02.563770 kubelet[2143]: E0813 07:15:02.563730 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:15:02.596107 kubelet[2143]: E0813 07:15:02.596056 2143 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:15:02.664701 kubelet[2143]: E0813 07:15:02.664543 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:15:02.664928 kubelet[2143]: E0813 07:15:02.664892 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="400ms" Aug 13 07:15:02.764833 kubelet[2143]: E0813 07:15:02.764774 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:15:02.797058 kubelet[2143]: E0813 07:15:02.797003 2143 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:15:02.865997 kubelet[2143]: E0813 07:15:02.865845 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:15:02.967121 kubelet[2143]: E0813 07:15:02.966857 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:15:03.066303 kubelet[2143]: E0813 07:15:03.066247 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="800ms" Aug 13 07:15:03.067610 kubelet[2143]: E0813 07:15:03.067212 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:15:03.073102 kubelet[2143]: I0813 07:15:03.072802 2143 policy_none.go:49] "None policy: Start" Aug 
13 07:15:03.073102 kubelet[2143]: I0813 07:15:03.072940 2143 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:15:03.073102 kubelet[2143]: I0813 07:15:03.072969 2143 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:15:03.090825 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:15:03.115691 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 07:15:03.137262 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:15:03.138952 kubelet[2143]: I0813 07:15:03.138924 2143 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:15:03.139301 kubelet[2143]: I0813 07:15:03.139251 2143 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:15:03.140542 kubelet[2143]: I0813 07:15:03.139272 2143 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:15:03.140844 kubelet[2143]: I0813 07:15:03.140821 2143 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:15:03.141880 kubelet[2143]: E0813 07:15:03.141820 2143 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 07:15:03.142621 kubelet[2143]: E0813 07:15:03.141895 2143 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 07:15:03.210674 systemd[1]: Created slice kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice - libcontainer container kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice. Aug 13 07:15:03.229205 kubelet[2143]: E0813 07:15:03.229044 2143 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:15:03.234988 systemd[1]: Created slice kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice - libcontainer container kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice. Aug 13 07:15:03.237182 kubelet[2143]: E0813 07:15:03.237126 2143 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:15:03.241663 systemd[1]: Created slice kubepods-burstable-pod9e361aea0f382c7aa80194ee4a6aef10.slice - libcontainer container kubepods-burstable-pod9e361aea0f382c7aa80194ee4a6aef10.slice. 
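The three slices just created (kubepods.slice, kubepods-besteffort.slice, kubepods-burstable.slice) form the QoS cgroup hierarchy, and each pod then gets a kubepods-<qos>-pod<UID>.slice beneath its class, as the burstable pod slices here show. The Go sketch below reproduces that naming convention; the detail worth noting is systemd's escaping, where a literal '-' inside a name component becomes \x2d because '-' is the slice hierarchy separator (the containerd mount unit later in this log shows the same escaping). The guaranteed-class branch reflects the usual layout, with guaranteed pods directly under kubepods.slice, and should be read as an assumption.

```go
// Deriving a per-pod systemd slice name from QoS class and pod UID,
// matching units like kubepods-burstable-pod393e...df1.slice above.
// Sketch of the convention, not kubelet's cgroup manager.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	// systemd treats '-' in a slice name as a hierarchy separator, so
	// literal dashes inside the UID must be escaped as \x2d (compare the
	// containerd\x2dmount... unit later in this log).
	escaped := strings.ReplaceAll(podUID, "-", `\x2d`)
	if qosClass == "guaranteed" {
		// Assumption: guaranteed pods hang directly off kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	fmt.Println(podSliceName("burstable", "393e2c0a78c0056780c2194ff80c6df1"))
	// Output: kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice
}
```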
Aug 13 07:15:03.242733 kubelet[2143]: I0813 07:15:03.242687 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:15:03.243377 kubelet[2143]: E0813 07:15:03.243333 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Aug 13 07:15:03.244215 kubelet[2143]: E0813 07:15:03.244168 2143 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:15:03.268571 kubelet[2143]: I0813 07:15:03.268538 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e361aea0f382c7aa80194ee4a6aef10-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9e361aea0f382c7aa80194ee4a6aef10\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:15:03.268571 kubelet[2143]: I0813 07:15:03.268571 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:15:03.268833 kubelet[2143]: I0813 07:15:03.268594 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:15:03.268833 kubelet[2143]: I0813 07:15:03.268612 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:15:03.268833 kubelet[2143]: I0813 07:15:03.268631 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:15:03.268833 kubelet[2143]: I0813 07:15:03.268654 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e361aea0f382c7aa80194ee4a6aef10-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e361aea0f382c7aa80194ee4a6aef10\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:15:03.268833 kubelet[2143]: I0813 07:15:03.268678 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e361aea0f382c7aa80194ee4a6aef10-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e361aea0f382c7aa80194ee4a6aef10\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:15:03.269064 kubelet[2143]: I0813 07:15:03.268706 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:15:03.269064 kubelet[2143]: I0813 07:15:03.268726 2143 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:15:03.411193 kubelet[2143]: W0813 07:15:03.411079 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Aug 13 07:15:03.411193 kubelet[2143]: E0813 07:15:03.411185 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:15:03.445841 kubelet[2143]: I0813 07:15:03.445802 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:15:03.446398 kubelet[2143]: E0813 07:15:03.446336 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Aug 13 07:15:03.472890 kubelet[2143]: W0813 07:15:03.472780 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Aug 13 07:15:03.473040 kubelet[2143]: E0813 07:15:03.472918 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:15:03.530569 kubelet[2143]: E0813 07:15:03.530326 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:03.531660 containerd[1473]: time="2025-08-13T07:15:03.531485451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,}" Aug 13 07:15:03.538457 kubelet[2143]: E0813 07:15:03.538374 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:03.538926 containerd[1473]: time="2025-08-13T07:15:03.538856850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,}" Aug 13 07:15:03.542618 kubelet[2143]: W0813 07:15:03.542573 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Aug 13 07:15:03.542699 kubelet[2143]: E0813 07:15:03.542635 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:15:03.545211 kubelet[2143]: E0813 07:15:03.545168 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:03.545789 containerd[1473]: time="2025-08-13T07:15:03.545755061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9e361aea0f382c7aa80194ee4a6aef10,Namespace:kube-system,Attempt:0,}" Aug 13 07:15:03.848767 kubelet[2143]: I0813 07:15:03.848631 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:15:03.849101 kubelet[2143]: E0813 07:15:03.849068 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Aug 13 07:15:03.864877 kubelet[2143]: W0813 07:15:03.864804 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Aug 13 07:15:03.864943 kubelet[2143]: E0813 07:15:03.864883 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:15:03.867342 kubelet[2143]: E0813 07:15:03.867297 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="1.6s" Aug 13 07:15:04.231147 kubelet[2143]: E0813 07:15:04.230917 2143 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.120:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.120:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b423b591a56a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:15:02.456342181 +0000 UTC m=+0.347606048,LastTimestamp:2025-08-13 07:15:02.456342181 +0000 UTC m=+0.347606048,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:15:04.346115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3639108765.mount: Deactivated successfully. 
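The "Attempting to register node" / "Unable to register node" pairs above are the kubelet POSTing its Node object to /api/v1/nodes and hitting the same connection-refused wall as every other client of 10.0.0.120:6443. Here is a stripped-down Go sketch of that request; the trimmed Node struct and the InsecureSkipVerify transport are demo shortcuts, nothing like the kubelet's real client-go stack or its bootstrap credentials.

```go
// Bare-bones node self-registration: POST a Node object to the API
// server and surface dial errors (e.g. connection refused) for the
// caller to retry. Sketch only; not the kubelet's registration code.
package main

import (
	"bytes"
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// node is a deliberately minimal stand-in for the v1 Node type.
type node struct {
	APIVersion string            `json:"apiVersion"`
	Kind       string            `json:"kind"`
	Metadata   map[string]string `json:"metadata"`
}

func registerNode(server, name string) error {
	body, err := json.Marshal(node{APIVersion: "v1", Kind: "Node",
		Metadata: map[string]string{"name": name}})
	if err != nil {
		return err
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo shortcut only
	}}
	resp, err := client.Post(server+"/api/v1/nodes", "application/json", bytes.NewReader(body))
	if err != nil {
		return fmt.Errorf("unable to register node: %w", err) // e.g. connection refused
	}
	defer resp.Body.Close()
	fmt.Println("registration response:", resp.Status)
	return nil
}

func main() {
	if err := registerNode("https://10.0.0.120:6443", "localhost"); err != nil {
		fmt.Println(err)
	}
}
```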
Aug 13 07:15:04.354891 containerd[1473]: time="2025-08-13T07:15:04.354813527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:15:04.356842 containerd[1473]: time="2025-08-13T07:15:04.356728657Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:15:04.357945 containerd[1473]: time="2025-08-13T07:15:04.357907367Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:15:04.359146 containerd[1473]: time="2025-08-13T07:15:04.359091707Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:15:04.360579 containerd[1473]: time="2025-08-13T07:15:04.360499777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:15:04.360752 containerd[1473]: time="2025-08-13T07:15:04.360719729Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:15:04.361511 containerd[1473]: time="2025-08-13T07:15:04.361472670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:15:04.364267 containerd[1473]: time="2025-08-13T07:15:04.363505131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:15:04.367429 containerd[1473]: time="2025-08-13T07:15:04.367393019Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 828.418889ms" Aug 13 07:15:04.370992 containerd[1473]: time="2025-08-13T07:15:04.370932655Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 825.110727ms" Aug 13 07:15:04.429952 containerd[1473]: time="2025-08-13T07:15:04.429897352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 898.198993ms" Aug 13 07:15:04.555580 kubelet[2143]: E0813 07:15:04.555384 2143 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:15:04.650505 kubelet[2143]: I0813 07:15:04.650450 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:15:04.650988 kubelet[2143]: E0813 07:15:04.650933 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Aug 13 07:15:04.981756 containerd[1473]: time="2025-08-13T07:15:04.980355234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:04.981756 containerd[1473]: time="2025-08-13T07:15:04.980442989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:04.981756 containerd[1473]: time="2025-08-13T07:15:04.980460231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:04.981756 containerd[1473]: time="2025-08-13T07:15:04.980558105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:04.984171 containerd[1473]: time="2025-08-13T07:15:04.983886945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:04.984171 containerd[1473]: time="2025-08-13T07:15:04.984009605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:04.984171 containerd[1473]: time="2025-08-13T07:15:04.984048979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:04.984312 containerd[1473]: time="2025-08-13T07:15:04.984154186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:04.984512 containerd[1473]: time="2025-08-13T07:15:04.984423531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:04.985890 containerd[1473]: time="2025-08-13T07:15:04.984503841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:04.985890 containerd[1473]: time="2025-08-13T07:15:04.984519621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:04.985890 containerd[1473]: time="2025-08-13T07:15:04.984598739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:05.273533 kubelet[2143]: W0813 07:15:05.273360 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Aug 13 07:15:05.273533 kubelet[2143]: E0813 07:15:05.273422 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:15:05.818235 kubelet[2143]: E0813 07:15:05.468109 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="3.2s" Aug 13 07:15:05.818235 kubelet[2143]: W0813 07:15:05.657075 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Aug 13 07:15:05.818235 kubelet[2143]: E0813 07:15:05.657131 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:15:05.816179 systemd[1]: Started cri-containerd-ee32f5d2f14446621216f379de22f6d258246e079c08beb2006a3c5449ec707b.scope - libcontainer container ee32f5d2f14446621216f379de22f6d258246e079c08beb2006a3c5449ec707b. Aug 13 07:15:05.831557 kubelet[2143]: W0813 07:15:05.831479 2143 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Aug 13 07:15:05.831557 kubelet[2143]: E0813 07:15:05.831523 2143 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:15:05.840029 systemd[1]: Started cri-containerd-7fd93876fb07f6e7001c048fe010b8aff429ed0223ad4bfce20cc285ae2e3130.scope - libcontainer container 7fd93876fb07f6e7001c048fe010b8aff429ed0223ad4bfce20cc285ae2e3130. Aug 13 07:15:05.845507 systemd[1]: Started cri-containerd-7ba89a560e8033655557327a63f6255bac68eb4d351378f9e9de529ac22e1ae7.scope - libcontainer container 7ba89a560e8033655557327a63f6255bac68eb4d351378f9e9de529ac22e1ae7. 
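What follows in the log is the three-step CRI sequence for each control-plane static pod: RunPodSandbox returns a sandbox ID, CreateContainer places a container inside that sandbox and returns a container ID, and StartContainer runs it. The Go sketch below mirrors that flow against a pared-down Runtime interface; the interface and the fake implementation are hypothetical stand-ins, not the real k8s.io/cri-api types.

```go
// The sandbox -> create -> start sequence visible in the containerd and
// kubelet messages below, against a simplified stand-in for the CRI
// RuntimeService. Hypothetical types; the real API lives in k8s.io/cri-api.
package main

import "fmt"

// Runtime is a pared-down, hypothetical view of the CRI runtime service.
type Runtime interface {
	RunPodSandbox(podName, podUID string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

func startStaticPod(rt Runtime, podName, podUID, containerName string) error {
	sandboxID, err := rt.RunPodSandbox(podName, podUID)
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	containerID, err := rt.CreateContainer(sandboxID, containerName)
	if err != nil {
		return fmt.Errorf("CreateContainer in %s: %w", sandboxID, err)
	}
	if err := rt.StartContainer(containerID); err != nil {
		return fmt.Errorf("StartContainer %s: %w", containerID, err)
	}
	fmt.Printf("%s: sandbox %s, container %s running\n", podName, sandboxID, containerID)
	return nil
}

// fakeRuntime hands out sequential IDs so the example runs standalone.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(string, string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}
func (f *fakeRuntime) CreateContainer(string, string) (string, error) {
	f.n++
	return fmt.Sprintf("container-%d", f.n), nil
}
func (f *fakeRuntime) StartContainer(string) error { return nil }

func main() {
	startStaticPod(&fakeRuntime{}, "kube-apiserver-localhost", "9e361aea0f382c7aa80194ee4a6aef10", "kube-apiserver")
}
```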
Aug 13 07:15:05.882217 containerd[1473]: time="2025-08-13T07:15:05.882173399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9e361aea0f382c7aa80194ee4a6aef10,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee32f5d2f14446621216f379de22f6d258246e079c08beb2006a3c5449ec707b\"" Aug 13 07:15:05.887905 kubelet[2143]: E0813 07:15:05.887728 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:05.891091 containerd[1473]: time="2025-08-13T07:15:05.890368509Z" level=info msg="CreateContainer within sandbox \"ee32f5d2f14446621216f379de22f6d258246e079c08beb2006a3c5449ec707b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:15:05.891855 containerd[1473]: time="2025-08-13T07:15:05.891832537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fd93876fb07f6e7001c048fe010b8aff429ed0223ad4bfce20cc285ae2e3130\"" Aug 13 07:15:05.892709 kubelet[2143]: E0813 07:15:05.892691 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:05.894026 containerd[1473]: time="2025-08-13T07:15:05.894004983Z" level=info msg="CreateContainer within sandbox \"7fd93876fb07f6e7001c048fe010b8aff429ed0223ad4bfce20cc285ae2e3130\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:15:05.899984 containerd[1473]: time="2025-08-13T07:15:05.899944877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ba89a560e8033655557327a63f6255bac68eb4d351378f9e9de529ac22e1ae7\"" Aug 13 07:15:05.901513 kubelet[2143]: E0813 07:15:05.901476 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:05.902934 containerd[1473]: time="2025-08-13T07:15:05.902908732Z" level=info msg="CreateContainer within sandbox \"7ba89a560e8033655557327a63f6255bac68eb4d351378f9e9de529ac22e1ae7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:15:06.095006 containerd[1473]: time="2025-08-13T07:15:06.094883259Z" level=info msg="CreateContainer within sandbox \"7fd93876fb07f6e7001c048fe010b8aff429ed0223ad4bfce20cc285ae2e3130\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c06ccc5564abefa0767ee02254526622633752da1f7c5ce6448deadb3e9f72e3\"" Aug 13 07:15:06.095827 containerd[1473]: time="2025-08-13T07:15:06.095789757Z" level=info msg="StartContainer for \"c06ccc5564abefa0767ee02254526622633752da1f7c5ce6448deadb3e9f72e3\"" Aug 13 07:15:06.096145 containerd[1473]: time="2025-08-13T07:15:06.096093222Z" level=info msg="CreateContainer within sandbox \"ee32f5d2f14446621216f379de22f6d258246e079c08beb2006a3c5449ec707b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d8466aa59a7c9b0d9a1b87c4ebc4f567db9ec752c0e56a1d42d94dc804e036d6\"" Aug 13 07:15:06.096591 containerd[1473]: time="2025-08-13T07:15:06.096563098Z" level=info msg="StartContainer for \"d8466aa59a7c9b0d9a1b87c4ebc4f567db9ec752c0e56a1d42d94dc804e036d6\"" Aug 13 07:15:06.098466 
containerd[1473]: time="2025-08-13T07:15:06.098415619Z" level=info msg="CreateContainer within sandbox \"7ba89a560e8033655557327a63f6255bac68eb4d351378f9e9de529ac22e1ae7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c58368670be68bb209870fb11cabdc2da3f955db38c6aa242ede9cc7f218dab2\"" Aug 13 07:15:06.098964 containerd[1473]: time="2025-08-13T07:15:06.098931635Z" level=info msg="StartContainer for \"c58368670be68bb209870fb11cabdc2da3f955db38c6aa242ede9cc7f218dab2\"" Aug 13 07:15:06.137011 systemd[1]: Started cri-containerd-c06ccc5564abefa0767ee02254526622633752da1f7c5ce6448deadb3e9f72e3.scope - libcontainer container c06ccc5564abefa0767ee02254526622633752da1f7c5ce6448deadb3e9f72e3. Aug 13 07:15:06.141343 systemd[1]: Started cri-containerd-c58368670be68bb209870fb11cabdc2da3f955db38c6aa242ede9cc7f218dab2.scope - libcontainer container c58368670be68bb209870fb11cabdc2da3f955db38c6aa242ede9cc7f218dab2. Aug 13 07:15:06.142980 systemd[1]: Started cri-containerd-d8466aa59a7c9b0d9a1b87c4ebc4f567db9ec752c0e56a1d42d94dc804e036d6.scope - libcontainer container d8466aa59a7c9b0d9a1b87c4ebc4f567db9ec752c0e56a1d42d94dc804e036d6. Aug 13 07:15:06.196991 containerd[1473]: time="2025-08-13T07:15:06.196595499Z" level=info msg="StartContainer for \"c06ccc5564abefa0767ee02254526622633752da1f7c5ce6448deadb3e9f72e3\" returns successfully" Aug 13 07:15:06.198230 containerd[1473]: time="2025-08-13T07:15:06.198199982Z" level=info msg="StartContainer for \"d8466aa59a7c9b0d9a1b87c4ebc4f567db9ec752c0e56a1d42d94dc804e036d6\" returns successfully" Aug 13 07:15:06.200248 containerd[1473]: time="2025-08-13T07:15:06.198264917Z" level=info msg="StartContainer for \"c58368670be68bb209870fb11cabdc2da3f955db38c6aa242ede9cc7f218dab2\" returns successfully" Aug 13 07:15:06.258325 kubelet[2143]: I0813 07:15:06.258248 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:15:06.259068 kubelet[2143]: E0813 07:15:06.258984 2143 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Aug 13 07:15:06.520841 kubelet[2143]: E0813 07:15:06.520682 2143 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:15:06.520841 kubelet[2143]: E0813 07:15:06.520840 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:06.525457 kubelet[2143]: E0813 07:15:06.525432 2143 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:15:06.525556 kubelet[2143]: E0813 07:15:06.525530 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:06.528571 kubelet[2143]: E0813 07:15:06.528537 2143 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:15:06.528668 kubelet[2143]: E0813 07:15:06.528646 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 
07:15:07.532908 kubelet[2143]: E0813 07:15:07.531472 2143 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:15:07.532908 kubelet[2143]: E0813 07:15:07.531608 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:07.532908 kubelet[2143]: E0813 07:15:07.531884 2143 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 07:15:07.532908 kubelet[2143]: E0813 07:15:07.531995 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:08.245726 kubelet[2143]: E0813 07:15:08.245678 2143 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Aug 13 07:15:08.601597 kubelet[2143]: E0813 07:15:08.601555 2143 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Aug 13 07:15:08.672572 kubelet[2143]: E0813 07:15:08.672501 2143 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 07:15:09.043346 kubelet[2143]: E0813 07:15:09.043197 2143 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Aug 13 07:15:09.461180 kubelet[2143]: I0813 07:15:09.461141 2143 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:15:09.468200 kubelet[2143]: I0813 07:15:09.468165 2143 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 07:15:09.468200 kubelet[2143]: E0813 07:15:09.468192 2143 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 07:15:09.474602 kubelet[2143]: E0813 07:15:09.474560 2143 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:15:09.564348 kubelet[2143]: I0813 07:15:09.564307 2143 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:15:09.571824 kubelet[2143]: I0813 07:15:09.571771 2143 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 07:15:09.575050 kubelet[2143]: I0813 07:15:09.574998 2143 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 07:15:09.644000 kubelet[2143]: I0813 07:15:09.643939 2143 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 07:15:09.649847 kubelet[2143]: E0813 07:15:09.649770 2143 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 07:15:09.650138 kubelet[2143]: E0813 07:15:09.650101 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 
07:15:10.034660 systemd[1]: Reloading requested from client PID 2420 ('systemctl') (unit session-7.scope)... Aug 13 07:15:10.034680 systemd[1]: Reloading... Aug 13 07:15:10.124048 zram_generator::config[2462]: No configuration found. Aug 13 07:15:10.230375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:15:10.321842 systemd[1]: Reloading finished in 286 ms. Aug 13 07:15:10.372055 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:15:10.392384 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:15:10.392739 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:15:10.401084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:15:10.595966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:15:10.601404 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:15:10.643890 kubelet[2504]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:15:10.643890 kubelet[2504]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:15:10.643890 kubelet[2504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:15:10.644303 kubelet[2504]: I0813 07:15:10.643953 2504 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:15:10.651639 kubelet[2504]: I0813 07:15:10.651598 2504 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 07:15:10.651639 kubelet[2504]: I0813 07:15:10.651625 2504 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:15:10.651897 kubelet[2504]: I0813 07:15:10.651883 2504 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 07:15:10.653016 kubelet[2504]: I0813 07:15:10.653000 2504 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 07:15:10.655143 kubelet[2504]: I0813 07:15:10.655126 2504 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:15:10.657896 kubelet[2504]: E0813 07:15:10.657836 2504 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:15:10.657896 kubelet[2504]: I0813 07:15:10.657896 2504 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:15:10.664996 kubelet[2504]: I0813 07:15:10.664959 2504 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:15:10.665269 kubelet[2504]: I0813 07:15:10.665228 2504 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:15:10.665432 kubelet[2504]: I0813 07:15:10.665263 2504 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:15:10.665550 kubelet[2504]: I0813 07:15:10.665438 2504 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:15:10.665550 kubelet[2504]: I0813 07:15:10.665447 2504 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 07:15:10.665550 kubelet[2504]: I0813 07:15:10.665501 2504 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:15:10.665675 kubelet[2504]: I0813 07:15:10.665661 2504 kubelet.go:446] "Attempting to sync node with API server" Aug 13 07:15:10.665706 kubelet[2504]: I0813 07:15:10.665686 2504 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:15:10.665706 kubelet[2504]: I0813 07:15:10.665708 2504 kubelet.go:352] "Adding apiserver pod source" Aug 13 07:15:10.665782 kubelet[2504]: I0813 07:15:10.665718 2504 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:15:10.666493 kubelet[2504]: I0813 07:15:10.666462 2504 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:15:10.666912 kubelet[2504]: I0813 07:15:10.666806 2504 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:15:10.668453 kubelet[2504]: I0813 07:15:10.667257 2504 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:15:10.668453 kubelet[2504]: I0813 07:15:10.667292 2504 server.go:1287] "Started kubelet" Aug 13 07:15:10.668453 kubelet[2504]: I0813 07:15:10.667366 2504 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:15:10.668453 kubelet[2504]: I0813 07:15:10.667610 2504 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:15:10.668453 kubelet[2504]: I0813 07:15:10.667938 2504 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:15:10.668453 kubelet[2504]: I0813 07:15:10.668283 2504 server.go:479] "Adding debug handlers to kubelet server" Aug 13 07:15:10.671889 kubelet[2504]: I0813 07:15:10.669837 2504 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:15:10.671889 kubelet[2504]: I0813 07:15:10.670138 2504 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:15:10.671889 kubelet[2504]: E0813 07:15:10.671237 2504 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:15:10.671889 kubelet[2504]: I0813 07:15:10.671390 2504 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:15:10.679995 kubelet[2504]: I0813 07:15:10.679963 2504 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:15:10.680262 kubelet[2504]: I0813 07:15:10.680235 2504 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:15:10.683855 kubelet[2504]: I0813 07:15:10.683820 2504 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:15:10.685361 kubelet[2504]: I0813 07:15:10.685343 2504 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:15:10.689122 kubelet[2504]: E0813 07:15:10.686896 2504 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:15:10.689122 kubelet[2504]: I0813 07:15:10.687355 2504 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:15:10.695215 kubelet[2504]: I0813 07:15:10.695165 2504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:15:10.697166 kubelet[2504]: I0813 07:15:10.697129 2504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 07:15:10.697166 kubelet[2504]: I0813 07:15:10.697153 2504 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 07:15:10.697242 kubelet[2504]: I0813 07:15:10.697177 2504 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
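The watchdog messages here and earlier come from the sd_watchdog protocol: systemd only arms a watchdog when it exports WATCHDOG_USEC (plus WATCHDOG_PID) into the service's environment, and an armed service must then send WATCHDOG=1 datagrams on $NOTIFY_SOCKET faster than that interval, conventionally at half of it. Since this unit has no watchdog configured, the kubelet logs "not enabled" and skips health checking. A minimal Go sketch of the detection and ping loop follows; the WATCHDOG_PID check and abstract-socket handling are omitted for brevity.

```go
// Detecting whether systemd armed a watchdog and, if so, pinging it at
// half the armed interval. Simplified sketch of the sd_watchdog(3)
// convention; error handling and the PID check are intentionally minimal.
package main

import (
	"fmt"
	"net"
	"os"
	"strconv"
	"time"
)

// watchdogInterval returns the armed watchdog interval, or false when the
// watchdog is not enabled or the interval is invalid (the kubelet's case here).
func watchdogInterval() (time.Duration, bool) {
	usec, err := strconv.ParseInt(os.Getenv("WATCHDOG_USEC"), 10, 64)
	if err != nil || usec <= 0 {
		return 0, false
	}
	return time.Duration(usec) * time.Microsecond, true
}

func main() {
	interval, ok := watchdogInterval()
	if !ok {
		fmt.Println("systemd watchdog is not enabled; skipping health pings")
		return
	}
	conn, err := net.Dial("unixgram", os.Getenv("NOTIFY_SOCKET"))
	if err != nil {
		fmt.Println("cannot reach notify socket:", err)
		return
	}
	defer conn.Close()
	// Ping at half the armed interval, the conventional safety margin.
	for range time.Tick(interval / 2) {
		conn.Write([]byte("WATCHDOG=1"))
	}
}
```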
Aug 13 07:15:10.697242 kubelet[2504]: I0813 07:15:10.697194 2504 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 07:15:10.697295 kubelet[2504]: E0813 07:15:10.697255 2504 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:15:10.723500 kubelet[2504]: I0813 07:15:10.723450 2504 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:15:10.723500 kubelet[2504]: I0813 07:15:10.723476 2504 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:15:10.723500 kubelet[2504]: I0813 07:15:10.723508 2504 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:15:10.723750 kubelet[2504]: I0813 07:15:10.723700 2504 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:15:10.723750 kubelet[2504]: I0813 07:15:10.723713 2504 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:15:10.723750 kubelet[2504]: I0813 07:15:10.723738 2504 policy_none.go:49] "None policy: Start" Aug 13 07:15:10.723936 kubelet[2504]: I0813 07:15:10.723757 2504 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:15:10.723936 kubelet[2504]: I0813 07:15:10.723790 2504 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:15:10.724017 kubelet[2504]: I0813 07:15:10.723950 2504 state_mem.go:75] "Updated machine memory state" Aug 13 07:15:10.729195 kubelet[2504]: I0813 07:15:10.729160 2504 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:15:10.730880 kubelet[2504]: I0813 07:15:10.730840 2504 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:15:10.731035 kubelet[2504]: I0813 07:15:10.730906 2504 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:15:10.731619 kubelet[2504]: I0813 07:15:10.731531 2504 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:15:10.732684 kubelet[2504]: E0813 07:15:10.732647 2504 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 07:15:10.797900 kubelet[2504]: I0813 07:15:10.797762 2504 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 07:15:10.797900 kubelet[2504]: I0813 07:15:10.797887 2504 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:15:10.798222 kubelet[2504]: I0813 07:15:10.797801 2504 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 07:15:10.806309 kubelet[2504]: E0813 07:15:10.806260 2504 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 07:15:10.806621 kubelet[2504]: E0813 07:15:10.806598 2504 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 07:15:10.806891 kubelet[2504]: E0813 07:15:10.806850 2504 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 07:15:10.838254 kubelet[2504]: I0813 07:15:10.838196 2504 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 07:15:10.843725 kubelet[2504]: I0813 07:15:10.843700 2504 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 13 07:15:10.843812 kubelet[2504]: I0813 07:15:10.843774 2504 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 07:15:10.887337 kubelet[2504]: I0813 07:15:10.887181 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:15:10.887337 kubelet[2504]: I0813 07:15:10.887234 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:15:10.887337 kubelet[2504]: I0813 07:15:10.887256 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:15:10.887337 kubelet[2504]: I0813 07:15:10.887274 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e361aea0f382c7aa80194ee4a6aef10-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e361aea0f382c7aa80194ee4a6aef10\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:15:10.887337 kubelet[2504]: I0813 07:15:10.887295 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e361aea0f382c7aa80194ee4a6aef10-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9e361aea0f382c7aa80194ee4a6aef10\") " 
pod="kube-system/kube-apiserver-localhost" Aug 13 07:15:10.887590 kubelet[2504]: I0813 07:15:10.887317 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e361aea0f382c7aa80194ee4a6aef10-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9e361aea0f382c7aa80194ee4a6aef10\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:15:10.887590 kubelet[2504]: I0813 07:15:10.887339 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:15:10.887590 kubelet[2504]: I0813 07:15:10.887364 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:15:10.887590 kubelet[2504]: I0813 07:15:10.887386 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:15:11.107664 kubelet[2504]: E0813 07:15:11.107591 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:11.107664 kubelet[2504]: E0813 07:15:11.107618 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:11.107906 kubelet[2504]: E0813 07:15:11.107637 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:11.720999 kubelet[2504]: I0813 07:15:11.720935 2504 apiserver.go:52] "Watching apiserver" Aug 13 07:15:11.724383 kubelet[2504]: E0813 07:15:11.724145 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:11.724667 kubelet[2504]: I0813 07:15:11.724632 2504 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 07:15:11.724889 kubelet[2504]: I0813 07:15:11.724851 2504 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 07:15:11.736404 kubelet[2504]: E0813 07:15:11.736043 2504 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 07:15:11.736404 kubelet[2504]: E0813 07:15:11.736096 2504 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 07:15:11.736404 kubelet[2504]: E0813 07:15:11.736226 2504 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:11.736404 kubelet[2504]: E0813 07:15:11.736302 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:11.746492 kubelet[2504]: I0813 07:15:11.746259 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.746244695 podStartE2EDuration="2.746244695s" podCreationTimestamp="2025-08-13 07:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:15:11.745997353 +0000 UTC m=+1.140088805" watchObservedRunningTime="2025-08-13 07:15:11.746244695 +0000 UTC m=+1.140336147" Aug 13 07:15:11.754005 kubelet[2504]: I0813 07:15:11.753953 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.753937926 podStartE2EDuration="2.753937926s" podCreationTimestamp="2025-08-13 07:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:15:11.753128547 +0000 UTC m=+1.147219989" watchObservedRunningTime="2025-08-13 07:15:11.753937926 +0000 UTC m=+1.148029378" Aug 13 07:15:11.763179 kubelet[2504]: I0813 07:15:11.763104 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.7630609919999998 podStartE2EDuration="2.763060992s" podCreationTimestamp="2025-08-13 07:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:15:11.763052234 +0000 UTC m=+1.157143686" watchObservedRunningTime="2025-08-13 07:15:11.763060992 +0000 UTC m=+1.157152434" Aug 13 07:15:11.784217 kubelet[2504]: I0813 07:15:11.784172 2504 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:15:12.725478 kubelet[2504]: E0813 07:15:12.725445 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:12.725478 kubelet[2504]: E0813 07:15:12.725445 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:12.923031 kubelet[2504]: E0813 07:15:12.922980 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:14.464595 kubelet[2504]: E0813 07:15:14.464535 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:16.832784 kubelet[2504]: I0813 07:15:16.832742 2504 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:15:16.833238 containerd[1473]: time="2025-08-13T07:15:16.833156947Z" level=info msg="No cni config template is specified, wait for other system components to drop the 
config." Aug 13 07:15:16.833513 kubelet[2504]: I0813 07:15:16.833346 2504 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:15:17.543736 systemd[1]: Created slice kubepods-besteffort-pod61a8707d_cb3e_4729_9d11_8436710fb0ab.slice - libcontainer container kubepods-besteffort-pod61a8707d_cb3e_4729_9d11_8436710fb0ab.slice. Aug 13 07:15:17.657160 kubelet[2504]: I0813 07:15:17.657095 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/61a8707d-cb3e-4729-9d11-8436710fb0ab-kube-proxy\") pod \"kube-proxy-pbhz7\" (UID: \"61a8707d-cb3e-4729-9d11-8436710fb0ab\") " pod="kube-system/kube-proxy-pbhz7" Aug 13 07:15:17.657160 kubelet[2504]: I0813 07:15:17.657145 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61a8707d-cb3e-4729-9d11-8436710fb0ab-xtables-lock\") pod \"kube-proxy-pbhz7\" (UID: \"61a8707d-cb3e-4729-9d11-8436710fb0ab\") " pod="kube-system/kube-proxy-pbhz7" Aug 13 07:15:17.657160 kubelet[2504]: I0813 07:15:17.657169 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61a8707d-cb3e-4729-9d11-8436710fb0ab-lib-modules\") pod \"kube-proxy-pbhz7\" (UID: \"61a8707d-cb3e-4729-9d11-8436710fb0ab\") " pod="kube-system/kube-proxy-pbhz7" Aug 13 07:15:17.657365 kubelet[2504]: I0813 07:15:17.657191 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lttl8\" (UniqueName: \"kubernetes.io/projected/61a8707d-cb3e-4729-9d11-8436710fb0ab-kube-api-access-lttl8\") pod \"kube-proxy-pbhz7\" (UID: \"61a8707d-cb3e-4729-9d11-8436710fb0ab\") " pod="kube-system/kube-proxy-pbhz7" Aug 13 07:15:17.683239 kubelet[2504]: E0813 07:15:17.683206 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:17.732312 kubelet[2504]: E0813 07:15:17.732257 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:17.857316 kubelet[2504]: E0813 07:15:17.857273 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:17.857902 containerd[1473]: time="2025-08-13T07:15:17.857844807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbhz7,Uid:61a8707d-cb3e-4729-9d11-8436710fb0ab,Namespace:kube-system,Attempt:0,}" Aug 13 07:15:17.898170 containerd[1473]: time="2025-08-13T07:15:17.895968498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:17.898170 containerd[1473]: time="2025-08-13T07:15:17.896109035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:17.898170 containerd[1473]: time="2025-08-13T07:15:17.896124414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:17.898170 containerd[1473]: time="2025-08-13T07:15:17.896258730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:17.918793 systemd[1]: Created slice kubepods-besteffort-podd2996358_0999_4f90_95ca_6746cd8005f7.slice - libcontainer container kubepods-besteffort-podd2996358_0999_4f90_95ca_6746cd8005f7.slice. Aug 13 07:15:17.934031 systemd[1]: Started cri-containerd-fd5d992bba244036beb0b5062697f00c8763c0cfbafd5ca5ed79470a380e82be.scope - libcontainer container fd5d992bba244036beb0b5062697f00c8763c0cfbafd5ca5ed79470a380e82be. Aug 13 07:15:17.936962 update_engine[1453]: I20250813 07:15:17.936874 1453 update_attempter.cc:509] Updating boot flags... Aug 13 07:15:17.958984 kubelet[2504]: I0813 07:15:17.958846 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d2996358-0999-4f90-95ca-6746cd8005f7-var-lib-calico\") pod \"tigera-operator-747864d56d-fsdjf\" (UID: \"d2996358-0999-4f90-95ca-6746cd8005f7\") " pod="tigera-operator/tigera-operator-747864d56d-fsdjf" Aug 13 07:15:17.958984 kubelet[2504]: I0813 07:15:17.958916 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7crmw\" (UniqueName: \"kubernetes.io/projected/d2996358-0999-4f90-95ca-6746cd8005f7-kube-api-access-7crmw\") pod \"tigera-operator-747864d56d-fsdjf\" (UID: \"d2996358-0999-4f90-95ca-6746cd8005f7\") " pod="tigera-operator/tigera-operator-747864d56d-fsdjf" Aug 13 07:15:17.966921 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2592) Aug 13 07:15:18.077963 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2592) Aug 13 07:15:18.104566 containerd[1473]: time="2025-08-13T07:15:18.104482306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbhz7,Uid:61a8707d-cb3e-4729-9d11-8436710fb0ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd5d992bba244036beb0b5062697f00c8763c0cfbafd5ca5ed79470a380e82be\"" Aug 13 07:15:18.106822 kubelet[2504]: E0813 07:15:18.106802 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:18.111014 containerd[1473]: time="2025-08-13T07:15:18.110842265Z" level=info msg="CreateContainer within sandbox \"fd5d992bba244036beb0b5062697f00c8763c0cfbafd5ca5ed79470a380e82be\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:15:18.131258 containerd[1473]: time="2025-08-13T07:15:18.131200110Z" level=info msg="CreateContainer within sandbox \"fd5d992bba244036beb0b5062697f00c8763c0cfbafd5ca5ed79470a380e82be\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d7faed1c47230034c2509a9bd422dabe1f7e169384beb0577948b9fdd5d8416e\"" Aug 13 07:15:18.131756 containerd[1473]: time="2025-08-13T07:15:18.131728234Z" level=info msg="StartContainer for \"d7faed1c47230034c2509a9bd422dabe1f7e169384beb0577948b9fdd5d8416e\"" Aug 13 07:15:18.161007 systemd[1]: Started cri-containerd-d7faed1c47230034c2509a9bd422dabe1f7e169384beb0577948b9fdd5d8416e.scope - libcontainer container d7faed1c47230034c2509a9bd422dabe1f7e169384beb0577948b9fdd5d8416e. 
Aug 13 07:15:18.199608 containerd[1473]: time="2025-08-13T07:15:18.199533624Z" level=info msg="StartContainer for \"d7faed1c47230034c2509a9bd422dabe1f7e169384beb0577948b9fdd5d8416e\" returns successfully" Aug 13 07:15:18.222801 containerd[1473]: time="2025-08-13T07:15:18.222736293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-fsdjf,Uid:d2996358-0999-4f90-95ca-6746cd8005f7,Namespace:tigera-operator,Attempt:0,}" Aug 13 07:15:18.250741 containerd[1473]: time="2025-08-13T07:15:18.250632115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:18.250741 containerd[1473]: time="2025-08-13T07:15:18.250694954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:18.250741 containerd[1473]: time="2025-08-13T07:15:18.250709051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:18.251246 containerd[1473]: time="2025-08-13T07:15:18.250796026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:18.274075 systemd[1]: Started cri-containerd-8593107b3f04f933a47281926e7f7bdd13cd74b2272d9963e65e44ce4d5a47bc.scope - libcontainer container 8593107b3f04f933a47281926e7f7bdd13cd74b2272d9963e65e44ce4d5a47bc. Aug 13 07:15:18.314822 containerd[1473]: time="2025-08-13T07:15:18.314765802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-fsdjf,Uid:d2996358-0999-4f90-95ca-6746cd8005f7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8593107b3f04f933a47281926e7f7bdd13cd74b2272d9963e65e44ce4d5a47bc\"" Aug 13 07:15:18.316722 containerd[1473]: time="2025-08-13T07:15:18.316681541Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 07:15:18.735855 kubelet[2504]: E0813 07:15:18.735806 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:18.744609 kubelet[2504]: I0813 07:15:18.744522 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pbhz7" podStartSLOduration=1.744503163 podStartE2EDuration="1.744503163s" podCreationTimestamp="2025-08-13 07:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:15:18.744476943 +0000 UTC m=+8.138568415" watchObservedRunningTime="2025-08-13 07:15:18.744503163 +0000 UTC m=+8.138594626" Aug 13 07:15:19.754659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount750466845.mount: Deactivated successfully. 
Aug 13 07:15:20.904488 containerd[1473]: time="2025-08-13T07:15:20.904424531Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:20.905197 containerd[1473]: time="2025-08-13T07:15:20.905132002Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 07:15:20.906427 containerd[1473]: time="2025-08-13T07:15:20.906388013Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:20.910492 containerd[1473]: time="2025-08-13T07:15:20.910448133Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:20.911620 containerd[1473]: time="2025-08-13T07:15:20.911591050Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.594870466s" Aug 13 07:15:20.911674 containerd[1473]: time="2025-08-13T07:15:20.911623291Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 07:15:20.913880 containerd[1473]: time="2025-08-13T07:15:20.913819527Z" level=info msg="CreateContainer within sandbox \"8593107b3f04f933a47281926e7f7bdd13cd74b2272d9963e65e44ce4d5a47bc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 07:15:20.927360 containerd[1473]: time="2025-08-13T07:15:20.927303405Z" level=info msg="CreateContainer within sandbox \"8593107b3f04f933a47281926e7f7bdd13cd74b2272d9963e65e44ce4d5a47bc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9fa2029bf9a55171de728849679926df8a859bced3f4d8d0dfb8c1eedf61b5b2\"" Aug 13 07:15:20.927914 containerd[1473]: time="2025-08-13T07:15:20.927841346Z" level=info msg="StartContainer for \"9fa2029bf9a55171de728849679926df8a859bced3f4d8d0dfb8c1eedf61b5b2\"" Aug 13 07:15:20.965025 systemd[1]: Started cri-containerd-9fa2029bf9a55171de728849679926df8a859bced3f4d8d0dfb8c1eedf61b5b2.scope - libcontainer container 9fa2029bf9a55171de728849679926df8a859bced3f4d8d0dfb8c1eedf61b5b2. 
Aug 13 07:15:20.991007 containerd[1473]: time="2025-08-13T07:15:20.990961169Z" level=info msg="StartContainer for \"9fa2029bf9a55171de728849679926df8a859bced3f4d8d0dfb8c1eedf61b5b2\" returns successfully" Aug 13 07:15:22.927751 kubelet[2504]: E0813 07:15:22.927701 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:23.011971 kubelet[2504]: I0813 07:15:23.011888 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-fsdjf" podStartSLOduration=3.415479161 podStartE2EDuration="6.011852128s" podCreationTimestamp="2025-08-13 07:15:17 +0000 UTC" firstStartedPulling="2025-08-13 07:15:18.316064068 +0000 UTC m=+7.710155520" lastFinishedPulling="2025-08-13 07:15:20.912437045 +0000 UTC m=+10.306528487" observedRunningTime="2025-08-13 07:15:21.753636959 +0000 UTC m=+11.147728411" watchObservedRunningTime="2025-08-13 07:15:23.011852128 +0000 UTC m=+12.405943580" Aug 13 07:15:24.469448 kubelet[2504]: E0813 07:15:24.469392 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:24.748521 kubelet[2504]: E0813 07:15:24.748355 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:26.263716 sudo[1645]: pam_unix(sudo:session): session closed for user root Aug 13 07:15:26.269022 sshd[1642]: pam_unix(sshd:session): session closed for user core Aug 13 07:15:26.274283 systemd[1]: sshd@6-10.0.0.120:22-10.0.0.1:49236.service: Deactivated successfully. Aug 13 07:15:26.279210 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:15:26.279495 systemd[1]: session-7.scope: Consumed 4.855s CPU time, 162.3M memory peak, 0B memory swap peak. Aug 13 07:15:26.280393 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:15:26.281671 systemd-logind[1449]: Removed session 7. Aug 13 07:15:28.765658 systemd[1]: Created slice kubepods-besteffort-pod7ee1cccd_11c0_414c_9093_a201c4271e82.slice - libcontainer container kubepods-besteffort-pod7ee1cccd_11c0_414c_9093_a201c4271e82.slice. 
Aug 13 07:15:28.826361 kubelet[2504]: I0813 07:15:28.826284 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm877\" (UniqueName: \"kubernetes.io/projected/7ee1cccd-11c0-414c-9093-a201c4271e82-kube-api-access-zm877\") pod \"calico-typha-6dfcdf7747-tbls6\" (UID: \"7ee1cccd-11c0-414c-9093-a201c4271e82\") " pod="calico-system/calico-typha-6dfcdf7747-tbls6" Aug 13 07:15:28.826361 kubelet[2504]: I0813 07:15:28.826334 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ee1cccd-11c0-414c-9093-a201c4271e82-tigera-ca-bundle\") pod \"calico-typha-6dfcdf7747-tbls6\" (UID: \"7ee1cccd-11c0-414c-9093-a201c4271e82\") " pod="calico-system/calico-typha-6dfcdf7747-tbls6" Aug 13 07:15:28.826361 kubelet[2504]: I0813 07:15:28.826352 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7ee1cccd-11c0-414c-9093-a201c4271e82-typha-certs\") pod \"calico-typha-6dfcdf7747-tbls6\" (UID: \"7ee1cccd-11c0-414c-9093-a201c4271e82\") " pod="calico-system/calico-typha-6dfcdf7747-tbls6" Aug 13 07:15:29.078978 kubelet[2504]: E0813 07:15:29.078901 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:29.079771 containerd[1473]: time="2025-08-13T07:15:29.079693712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dfcdf7747-tbls6,Uid:7ee1cccd-11c0-414c-9093-a201c4271e82,Namespace:calico-system,Attempt:0,}" Aug 13 07:15:29.268246 containerd[1473]: time="2025-08-13T07:15:29.267512621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:29.268246 containerd[1473]: time="2025-08-13T07:15:29.267718940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:29.268246 containerd[1473]: time="2025-08-13T07:15:29.267823587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:29.268246 containerd[1473]: time="2025-08-13T07:15:29.267995060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:29.307951 systemd[1]: Created slice kubepods-besteffort-pod566ee478_6b43_47b6_a9be_3cd50960f6ee.slice - libcontainer container kubepods-besteffort-pod566ee478_6b43_47b6_a9be_3cd50960f6ee.slice. Aug 13 07:15:29.324064 systemd[1]: Started cri-containerd-731b2d4d8a033e883c16b7aa9950778bfd0b7dc5b33cccb7a316f616ffabd294.scope - libcontainer container 731b2d4d8a033e883c16b7aa9950778bfd0b7dc5b33cccb7a316f616ffabd294. 
Aug 13 07:15:29.392821 containerd[1473]: time="2025-08-13T07:15:29.392592134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dfcdf7747-tbls6,Uid:7ee1cccd-11c0-414c-9093-a201c4271e82,Namespace:calico-system,Attempt:0,} returns sandbox id \"731b2d4d8a033e883c16b7aa9950778bfd0b7dc5b33cccb7a316f616ffabd294\"" Aug 13 07:15:29.399435 kubelet[2504]: E0813 07:15:29.399377 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:29.410154 containerd[1473]: time="2025-08-13T07:15:29.410081513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:15:29.433718 kubelet[2504]: I0813 07:15:29.431945 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/566ee478-6b43-47b6-a9be-3cd50960f6ee-node-certs\") pod \"calico-node-8k7nb\" (UID: \"566ee478-6b43-47b6-a9be-3cd50960f6ee\") " pod="calico-system/calico-node-8k7nb" Aug 13 07:15:29.433718 kubelet[2504]: I0813 07:15:29.432079 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/566ee478-6b43-47b6-a9be-3cd50960f6ee-cni-log-dir\") pod \"calico-node-8k7nb\" (UID: \"566ee478-6b43-47b6-a9be-3cd50960f6ee\") " pod="calico-system/calico-node-8k7nb" Aug 13 07:15:29.433718 kubelet[2504]: I0813 07:15:29.432129 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/566ee478-6b43-47b6-a9be-3cd50960f6ee-flexvol-driver-host\") pod \"calico-node-8k7nb\" (UID: \"566ee478-6b43-47b6-a9be-3cd50960f6ee\") " pod="calico-system/calico-node-8k7nb" Aug 13 07:15:29.433718 kubelet[2504]: I0813 07:15:29.432168 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/566ee478-6b43-47b6-a9be-3cd50960f6ee-var-run-calico\") pod \"calico-node-8k7nb\" (UID: \"566ee478-6b43-47b6-a9be-3cd50960f6ee\") " pod="calico-system/calico-node-8k7nb" Aug 13 07:15:29.433718 kubelet[2504]: I0813 07:15:29.432214 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/566ee478-6b43-47b6-a9be-3cd50960f6ee-cni-net-dir\") pod \"calico-node-8k7nb\" (UID: \"566ee478-6b43-47b6-a9be-3cd50960f6ee\") " pod="calico-system/calico-node-8k7nb" Aug 13 07:15:29.434005 kubelet[2504]: I0813 07:15:29.432237 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/566ee478-6b43-47b6-a9be-3cd50960f6ee-var-lib-calico\") pod \"calico-node-8k7nb\" (UID: \"566ee478-6b43-47b6-a9be-3cd50960f6ee\") " pod="calico-system/calico-node-8k7nb" Aug 13 07:15:29.434005 kubelet[2504]: I0813 07:15:29.432268 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv4r5\" (UniqueName: \"kubernetes.io/projected/566ee478-6b43-47b6-a9be-3cd50960f6ee-kube-api-access-kv4r5\") pod \"calico-node-8k7nb\" (UID: \"566ee478-6b43-47b6-a9be-3cd50960f6ee\") " pod="calico-system/calico-node-8k7nb" Aug 13 07:15:29.434005 kubelet[2504]: I0813 07:15:29.432332 2504 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/566ee478-6b43-47b6-a9be-3cd50960f6ee-cni-bin-dir\") pod \"calico-node-8k7nb\" (UID: \"566ee478-6b43-47b6-a9be-3cd50960f6ee\") " pod="calico-system/calico-node-8k7nb" Aug 13 07:15:29.434005 kubelet[2504]: I0813 07:15:29.432373 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/566ee478-6b43-47b6-a9be-3cd50960f6ee-tigera-ca-bundle\") pod \"calico-node-8k7nb\" (UID: \"566ee478-6b43-47b6-a9be-3cd50960f6ee\") " pod="calico-system/calico-node-8k7nb" Aug 13 07:15:29.434005 kubelet[2504]: I0813 07:15:29.432394 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/566ee478-6b43-47b6-a9be-3cd50960f6ee-policysync\") pod \"calico-node-8k7nb\" (UID: \"566ee478-6b43-47b6-a9be-3cd50960f6ee\") " pod="calico-system/calico-node-8k7nb" Aug 13 07:15:29.434121 kubelet[2504]: I0813 07:15:29.432409 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/566ee478-6b43-47b6-a9be-3cd50960f6ee-lib-modules\") pod \"calico-node-8k7nb\" (UID: \"566ee478-6b43-47b6-a9be-3cd50960f6ee\") " pod="calico-system/calico-node-8k7nb" Aug 13 07:15:29.434121 kubelet[2504]: I0813 07:15:29.432448 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/566ee478-6b43-47b6-a9be-3cd50960f6ee-xtables-lock\") pod \"calico-node-8k7nb\" (UID: \"566ee478-6b43-47b6-a9be-3cd50960f6ee\") " pod="calico-system/calico-node-8k7nb" Aug 13 07:15:29.440878 kubelet[2504]: E0813 07:15:29.440794 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8p5g" podUID="ff96eac2-a650-4688-baf4-c624d8dfca9d" Aug 13 07:15:29.533670 kubelet[2504]: I0813 07:15:29.533399 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ff96eac2-a650-4688-baf4-c624d8dfca9d-varrun\") pod \"csi-node-driver-k8p5g\" (UID: \"ff96eac2-a650-4688-baf4-c624d8dfca9d\") " pod="calico-system/csi-node-driver-k8p5g" Aug 13 07:15:29.533670 kubelet[2504]: I0813 07:15:29.533526 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff96eac2-a650-4688-baf4-c624d8dfca9d-kubelet-dir\") pod \"csi-node-driver-k8p5g\" (UID: \"ff96eac2-a650-4688-baf4-c624d8dfca9d\") " pod="calico-system/csi-node-driver-k8p5g" Aug 13 07:15:29.533670 kubelet[2504]: I0813 07:15:29.533595 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ff96eac2-a650-4688-baf4-c624d8dfca9d-registration-dir\") pod \"csi-node-driver-k8p5g\" (UID: \"ff96eac2-a650-4688-baf4-c624d8dfca9d\") " pod="calico-system/csi-node-driver-k8p5g" Aug 13 07:15:29.533670 kubelet[2504]: I0813 07:15:29.533617 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" 
(UniqueName: \"kubernetes.io/host-path/ff96eac2-a650-4688-baf4-c624d8dfca9d-socket-dir\") pod \"csi-node-driver-k8p5g\" (UID: \"ff96eac2-a650-4688-baf4-c624d8dfca9d\") " pod="calico-system/csi-node-driver-k8p5g" Aug 13 07:15:29.533670 kubelet[2504]: I0813 07:15:29.533636 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x9sq\" (UniqueName: \"kubernetes.io/projected/ff96eac2-a650-4688-baf4-c624d8dfca9d-kube-api-access-8x9sq\") pod \"csi-node-driver-k8p5g\" (UID: \"ff96eac2-a650-4688-baf4-c624d8dfca9d\") " pod="calico-system/csi-node-driver-k8p5g" Aug 13 07:15:29.541555 kubelet[2504]: E0813 07:15:29.539001 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.541924 kubelet[2504]: W0813 07:15:29.539048 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.541924 kubelet[2504]: E0813 07:15:29.541800 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.543942 kubelet[2504]: E0813 07:15:29.543836 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.543942 kubelet[2504]: W0813 07:15:29.543928 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.544082 kubelet[2504]: E0813 07:15:29.543978 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.552109 kubelet[2504]: E0813 07:15:29.552075 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.552109 kubelet[2504]: W0813 07:15:29.552093 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.552109 kubelet[2504]: E0813 07:15:29.552108 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.618337 containerd[1473]: time="2025-08-13T07:15:29.618281475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8k7nb,Uid:566ee478-6b43-47b6-a9be-3cd50960f6ee,Namespace:calico-system,Attempt:0,}" Aug 13 07:15:29.635889 kubelet[2504]: E0813 07:15:29.635170 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.635889 kubelet[2504]: W0813 07:15:29.635196 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.635889 kubelet[2504]: E0813 07:15:29.635219 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:15:29.635889 kubelet[2504]: E0813 07:15:29.635479 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.635889 kubelet[2504]: W0813 07:15:29.635488 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.635889 kubelet[2504]: E0813 07:15:29.635502 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.636193 kubelet[2504]: E0813 07:15:29.635914 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.636193 kubelet[2504]: W0813 07:15:29.635947 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.636193 kubelet[2504]: E0813 07:15:29.635990 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.636407 kubelet[2504]: E0813 07:15:29.636363 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.636407 kubelet[2504]: W0813 07:15:29.636397 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.636487 kubelet[2504]: E0813 07:15:29.636434 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.636737 kubelet[2504]: E0813 07:15:29.636711 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.636737 kubelet[2504]: W0813 07:15:29.636728 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.636813 kubelet[2504]: E0813 07:15:29.636746 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.637075 kubelet[2504]: E0813 07:15:29.637050 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.637075 kubelet[2504]: W0813 07:15:29.637067 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.637145 kubelet[2504]: E0813 07:15:29.637121 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:15:29.637411 kubelet[2504]: E0813 07:15:29.637390 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.637411 kubelet[2504]: W0813 07:15:29.637403 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.637626 kubelet[2504]: E0813 07:15:29.637601 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.637897 kubelet[2504]: E0813 07:15:29.637811 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.637897 kubelet[2504]: W0813 07:15:29.637826 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.637982 kubelet[2504]: E0813 07:15:29.637966 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.638144 kubelet[2504]: E0813 07:15:29.638120 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.638144 kubelet[2504]: W0813 07:15:29.638136 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.638752 kubelet[2504]: E0813 07:15:29.638210 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.638752 kubelet[2504]: E0813 07:15:29.638475 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.638752 kubelet[2504]: W0813 07:15:29.638485 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.638752 kubelet[2504]: E0813 07:15:29.638522 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.639020 kubelet[2504]: E0813 07:15:29.638986 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.639020 kubelet[2504]: W0813 07:15:29.639012 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.639120 kubelet[2504]: E0813 07:15:29.639057 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:15:29.639550 kubelet[2504]: E0813 07:15:29.639525 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.639550 kubelet[2504]: W0813 07:15:29.639543 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.639662 kubelet[2504]: E0813 07:15:29.639628 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.639930 kubelet[2504]: E0813 07:15:29.639907 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.639930 kubelet[2504]: W0813 07:15:29.639922 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.640036 kubelet[2504]: E0813 07:15:29.640014 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.640149 kubelet[2504]: E0813 07:15:29.640127 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.640149 kubelet[2504]: W0813 07:15:29.640143 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.640214 kubelet[2504]: E0813 07:15:29.640196 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.640443 kubelet[2504]: E0813 07:15:29.640406 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.640443 kubelet[2504]: W0813 07:15:29.640431 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.640577 kubelet[2504]: E0813 07:15:29.640538 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.640848 kubelet[2504]: E0813 07:15:29.640814 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.640942 kubelet[2504]: W0813 07:15:29.640849 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.640942 kubelet[2504]: E0813 07:15:29.640906 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:15:29.641276 kubelet[2504]: E0813 07:15:29.641254 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.641315 kubelet[2504]: W0813 07:15:29.641279 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.641348 kubelet[2504]: E0813 07:15:29.641321 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.641702 kubelet[2504]: E0813 07:15:29.641658 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.642254 kubelet[2504]: W0813 07:15:29.642214 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.642785 kubelet[2504]: E0813 07:15:29.642723 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.645022 kubelet[2504]: E0813 07:15:29.643748 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.645022 kubelet[2504]: W0813 07:15:29.643769 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.645022 kubelet[2504]: E0813 07:15:29.643917 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.645022 kubelet[2504]: E0813 07:15:29.644155 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.645022 kubelet[2504]: W0813 07:15:29.644174 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.645022 kubelet[2504]: E0813 07:15:29.644232 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.645395 kubelet[2504]: E0813 07:15:29.645352 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.645395 kubelet[2504]: W0813 07:15:29.645370 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.647993 kubelet[2504]: E0813 07:15:29.647956 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:15:29.648331 kubelet[2504]: E0813 07:15:29.648113 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.648331 kubelet[2504]: W0813 07:15:29.648123 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.648839 kubelet[2504]: E0813 07:15:29.648796 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.649396 kubelet[2504]: E0813 07:15:29.649351 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.649396 kubelet[2504]: W0813 07:15:29.649365 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.649638 kubelet[2504]: E0813 07:15:29.649454 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.650242 kubelet[2504]: E0813 07:15:29.650224 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.650242 kubelet[2504]: W0813 07:15:29.650237 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.650496 kubelet[2504]: E0813 07:15:29.650307 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.650566 kubelet[2504]: E0813 07:15:29.650525 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.650566 kubelet[2504]: W0813 07:15:29.650535 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.650566 kubelet[2504]: E0813 07:15:29.650546 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.650667 containerd[1473]: time="2025-08-13T07:15:29.650385919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:29.650750 containerd[1473]: time="2025-08-13T07:15:29.650546993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:29.650750 containerd[1473]: time="2025-08-13T07:15:29.650586056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:29.650816 containerd[1473]: time="2025-08-13T07:15:29.650783389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:29.676465 kubelet[2504]: E0813 07:15:29.671658 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:29.676465 kubelet[2504]: W0813 07:15:29.671683 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:29.676465 kubelet[2504]: E0813 07:15:29.671712 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:29.680031 systemd[1]: Started cri-containerd-11bf230913098534525983559414e63890563b9c1aaa5059cd1dfab1a05f09e6.scope - libcontainer container 11bf230913098534525983559414e63890563b9c1aaa5059cd1dfab1a05f09e6. Aug 13 07:15:29.727461 containerd[1473]: time="2025-08-13T07:15:29.725364896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8k7nb,Uid:566ee478-6b43-47b6-a9be-3cd50960f6ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"11bf230913098534525983559414e63890563b9c1aaa5059cd1dfab1a05f09e6\"" Aug 13 07:15:30.885573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1186570798.mount: Deactivated successfully. Aug 13 07:15:31.702724 kubelet[2504]: E0813 07:15:31.702646 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8p5g" podUID="ff96eac2-a650-4688-baf4-c624d8dfca9d" Aug 13 07:15:32.038281 containerd[1473]: time="2025-08-13T07:15:32.038129494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:32.039067 containerd[1473]: time="2025-08-13T07:15:32.039008520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 07:15:32.040521 containerd[1473]: time="2025-08-13T07:15:32.040490764Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:32.042693 containerd[1473]: time="2025-08-13T07:15:32.042640456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:32.043255 containerd[1473]: time="2025-08-13T07:15:32.043218866Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.633062602s" Aug 13 07:15:32.043255 containerd[1473]: time="2025-08-13T07:15:32.043251277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 07:15:32.044259 containerd[1473]: time="2025-08-13T07:15:32.044182481Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 07:15:32.056395 containerd[1473]: time="2025-08-13T07:15:32.056340816Z" level=info msg="CreateContainer within sandbox \"731b2d4d8a033e883c16b7aa9950778bfd0b7dc5b33cccb7a316f616ffabd294\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 07:15:32.072680 containerd[1473]: time="2025-08-13T07:15:32.072631648Z" level=info msg="CreateContainer within sandbox \"731b2d4d8a033e883c16b7aa9950778bfd0b7dc5b33cccb7a316f616ffabd294\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7d3bf94c4e587cbbed0968fcbac7a34dfda6a30028cebc1f426eb83d0bfe96af\"" Aug 13 07:15:32.073216 containerd[1473]: time="2025-08-13T07:15:32.073112644Z" level=info msg="StartContainer for \"7d3bf94c4e587cbbed0968fcbac7a34dfda6a30028cebc1f426eb83d0bfe96af\"" Aug 13 07:15:32.106058 systemd[1]: Started cri-containerd-7d3bf94c4e587cbbed0968fcbac7a34dfda6a30028cebc1f426eb83d0bfe96af.scope - libcontainer container 7d3bf94c4e587cbbed0968fcbac7a34dfda6a30028cebc1f426eb83d0bfe96af. Aug 13 07:15:32.151944 containerd[1473]: time="2025-08-13T07:15:32.151895269Z" level=info msg="StartContainer for \"7d3bf94c4e587cbbed0968fcbac7a34dfda6a30028cebc1f426eb83d0bfe96af\" returns successfully" Aug 13 07:15:32.789352 kubelet[2504]: E0813 07:15:32.789290 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:32.805427 kubelet[2504]: I0813 07:15:32.805309 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6dfcdf7747-tbls6" podStartSLOduration=2.170202933 podStartE2EDuration="4.804722403s" podCreationTimestamp="2025-08-13 07:15:28 +0000 UTC" firstStartedPulling="2025-08-13 07:15:29.409490347 +0000 UTC m=+18.803581799" lastFinishedPulling="2025-08-13 07:15:32.044009827 +0000 UTC m=+21.438101269" observedRunningTime="2025-08-13 07:15:32.804213504 +0000 UTC m=+22.198304956" watchObservedRunningTime="2025-08-13 07:15:32.804722403 +0000 UTC m=+22.198813855" Aug 13 07:15:32.814140 kubelet[2504]: E0813 07:15:32.814097 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:32.814140 kubelet[2504]: W0813 07:15:32.814123 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:32.814426 kubelet[2504]: E0813 07:15:32.814151 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:32.814512 kubelet[2504]: E0813 07:15:32.814439 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:32.814512 kubelet[2504]: W0813 07:15:32.814448 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:32.814512 kubelet[2504]: E0813 07:15:32.814458 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:15:32.867052 kubelet[2504]: E0813 07:15:32.867029 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:32.867052 kubelet[2504]: W0813 07:15:32.867042 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:32.867052 kubelet[2504]: E0813 07:15:32.867051 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:33.698472 kubelet[2504]: E0813 07:15:33.698410 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8p5g" podUID="ff96eac2-a650-4688-baf4-c624d8dfca9d" Aug 13 07:15:33.739395 containerd[1473]: time="2025-08-13T07:15:33.739326020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:33.740164 containerd[1473]: time="2025-08-13T07:15:33.740125306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 07:15:33.741450 containerd[1473]: time="2025-08-13T07:15:33.741409445Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:33.743603 containerd[1473]: time="2025-08-13T07:15:33.743572952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:33.744328 containerd[1473]: time="2025-08-13T07:15:33.744283860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.700065091s" Aug 13 07:15:33.744328 containerd[1473]: time="2025-08-13T07:15:33.744316322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:15:33.746168 containerd[1473]: time="2025-08-13T07:15:33.746133055Z" level=info msg="CreateContainer within sandbox \"11bf230913098534525983559414e63890563b9c1aaa5059cd1dfab1a05f09e6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:15:33.762828 containerd[1473]: time="2025-08-13T07:15:33.762775499Z" level=info msg="CreateContainer within sandbox \"11bf230913098534525983559414e63890563b9c1aaa5059cd1dfab1a05f09e6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0de7ef3301f8fd3e429a2c7e742e712ca17a1dd796cf07a1f918a952103e1577\"" Aug 13 07:15:33.763416 containerd[1473]: time="2025-08-13T07:15:33.763375729Z" level=info msg="StartContainer for 
\"0de7ef3301f8fd3e429a2c7e742e712ca17a1dd796cf07a1f918a952103e1577\"" Aug 13 07:15:33.792091 kubelet[2504]: E0813 07:15:33.792053 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:33.802000 systemd[1]: Started cri-containerd-0de7ef3301f8fd3e429a2c7e742e712ca17a1dd796cf07a1f918a952103e1577.scope - libcontainer container 0de7ef3301f8fd3e429a2c7e742e712ca17a1dd796cf07a1f918a952103e1577. Aug 13 07:15:33.823796 kubelet[2504]: E0813 07:15:33.823766 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:33.823796 kubelet[2504]: W0813 07:15:33.823787 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:33.823981 kubelet[2504]: E0813 07:15:33.823810 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:33.824082 kubelet[2504]: E0813 07:15:33.824070 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:33.824082 kubelet[2504]: W0813 07:15:33.824080 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:33.824137 kubelet[2504]: E0813 07:15:33.824092 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:33.824315 kubelet[2504]: E0813 07:15:33.824303 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:33.824315 kubelet[2504]: W0813 07:15:33.824313 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:33.824392 kubelet[2504]: E0813 07:15:33.824322 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:33.824680 kubelet[2504]: E0813 07:15:33.824581 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:33.824680 kubelet[2504]: W0813 07:15:33.824592 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:33.824680 kubelet[2504]: E0813 07:15:33.824602 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:15:33.828003 kubelet[2504]: E0813 07:15:33.827991 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:15:33.828035 kubelet[2504]: W0813 07:15:33.828002 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:15:33.828035 kubelet[2504]: E0813 07:15:33.828011 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:15:33.835000 containerd[1473]: time="2025-08-13T07:15:33.834948543Z" level=info msg="StartContainer for \"0de7ef3301f8fd3e429a2c7e742e712ca17a1dd796cf07a1f918a952103e1577\" returns successfully" Aug 13 07:15:33.847078 systemd[1]: cri-containerd-0de7ef3301f8fd3e429a2c7e742e712ca17a1dd796cf07a1f918a952103e1577.scope: Deactivated successfully. Aug 13 07:15:33.871110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0de7ef3301f8fd3e429a2c7e742e712ca17a1dd796cf07a1f918a952103e1577-rootfs.mount: Deactivated successfully. Aug 13 07:15:34.611600 containerd[1473]: time="2025-08-13T07:15:34.608820674Z" level=info msg="shim disconnected" id=0de7ef3301f8fd3e429a2c7e742e712ca17a1dd796cf07a1f918a952103e1577 namespace=k8s.io Aug 13 07:15:34.611600 containerd[1473]: time="2025-08-13T07:15:34.611590942Z" level=warning msg="cleaning up after shim disconnected" id=0de7ef3301f8fd3e429a2c7e742e712ca17a1dd796cf07a1f918a952103e1577 namespace=k8s.io Aug 13 07:15:34.611600 containerd[1473]: time="2025-08-13T07:15:34.611604537Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:15:34.796466 kubelet[2504]: E0813 07:15:34.796392 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:34.797249 containerd[1473]: time="2025-08-13T07:15:34.797213706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:15:35.697597 kubelet[2504]: E0813 07:15:35.697551 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8p5g" podUID="ff96eac2-a650-4688-baf4-c624d8dfca9d" Aug 13 07:15:37.697769 kubelet[2504]: E0813 07:15:37.697582 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8p5g" podUID="ff96eac2-a650-4688-baf4-c624d8dfca9d" Aug 13 07:15:38.442237 containerd[1473]: time="2025-08-13T07:15:38.442162875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:38.443087 containerd[1473]: time="2025-08-13T07:15:38.443020559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:15:38.444466 containerd[1473]: time="2025-08-13T07:15:38.444432074Z" level=info msg="ImageCreate event 
name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:38.446822 containerd[1473]: time="2025-08-13T07:15:38.446794168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:38.447484 containerd[1473]: time="2025-08-13T07:15:38.447456475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.650198144s" Aug 13 07:15:38.447484 containerd[1473]: time="2025-08-13T07:15:38.447485810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:15:38.450229 containerd[1473]: time="2025-08-13T07:15:38.450195427Z" level=info msg="CreateContainer within sandbox \"11bf230913098534525983559414e63890563b9c1aaa5059cd1dfab1a05f09e6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:15:38.465973 containerd[1473]: time="2025-08-13T07:15:38.465930542Z" level=info msg="CreateContainer within sandbox \"11bf230913098534525983559414e63890563b9c1aaa5059cd1dfab1a05f09e6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cdbeebc6c73c828ecae3834d36921cfb3ae79ebea075841fe337864c2714ef32\"" Aug 13 07:15:38.466324 containerd[1473]: time="2025-08-13T07:15:38.466302943Z" level=info msg="StartContainer for \"cdbeebc6c73c828ecae3834d36921cfb3ae79ebea075841fe337864c2714ef32\"" Aug 13 07:15:38.505012 systemd[1]: Started cri-containerd-cdbeebc6c73c828ecae3834d36921cfb3ae79ebea075841fe337864c2714ef32.scope - libcontainer container cdbeebc6c73c828ecae3834d36921cfb3ae79ebea075841fe337864c2714ef32. Aug 13 07:15:38.542201 containerd[1473]: time="2025-08-13T07:15:38.542133733Z" level=info msg="StartContainer for \"cdbeebc6c73c828ecae3834d36921cfb3ae79ebea075841fe337864c2714ef32\" returns successfully" Aug 13 07:15:39.698239 kubelet[2504]: E0813 07:15:39.698166 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k8p5g" podUID="ff96eac2-a650-4688-baf4-c624d8dfca9d" Aug 13 07:15:39.918839 systemd[1]: cri-containerd-cdbeebc6c73c828ecae3834d36921cfb3ae79ebea075841fe337864c2714ef32.scope: Deactivated successfully. Aug 13 07:15:39.942586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdbeebc6c73c828ecae3834d36921cfb3ae79ebea075841fe337864c2714ef32-rootfs.mount: Deactivated successfully. Aug 13 07:15:39.990271 kubelet[2504]: I0813 07:15:39.989853 2504 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 07:15:40.309265 systemd[1]: Created slice kubepods-burstable-pod2f61af08_7750_47f4_b608_c1bb42e1730d.slice - libcontainer container kubepods-burstable-pod2f61af08_7750_47f4_b608_c1bb42e1730d.slice. 
Aug 13 07:15:40.320819 systemd[1]: Created slice kubepods-besteffort-pod5c8ac06a_59ac_4ad7_851b_39e9a256e71f.slice - libcontainer container kubepods-besteffort-pod5c8ac06a_59ac_4ad7_851b_39e9a256e71f.slice. Aug 13 07:15:40.327079 systemd[1]: Created slice kubepods-burstable-pod0f44421e_e215_4efb_b425_a905d3215525.slice - libcontainer container kubepods-burstable-pod0f44421e_e215_4efb_b425_a905d3215525.slice. Aug 13 07:15:40.331416 systemd[1]: Created slice kubepods-besteffort-pod6371703e_a0f7_43f3_a612_88598f32a9f9.slice - libcontainer container kubepods-besteffort-pod6371703e_a0f7_43f3_a612_88598f32a9f9.slice. Aug 13 07:15:40.334988 containerd[1473]: time="2025-08-13T07:15:40.334839296Z" level=info msg="shim disconnected" id=cdbeebc6c73c828ecae3834d36921cfb3ae79ebea075841fe337864c2714ef32 namespace=k8s.io Aug 13 07:15:40.335512 containerd[1473]: time="2025-08-13T07:15:40.335350537Z" level=warning msg="cleaning up after shim disconnected" id=cdbeebc6c73c828ecae3834d36921cfb3ae79ebea075841fe337864c2714ef32 namespace=k8s.io Aug 13 07:15:40.335512 containerd[1473]: time="2025-08-13T07:15:40.335368241Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:15:40.336461 systemd[1]: Created slice kubepods-besteffort-pod142cde8b_2616_412a_b265_085158d0383f.slice - libcontainer container kubepods-besteffort-pod142cde8b_2616_412a_b265_085158d0383f.slice. Aug 13 07:15:40.346027 systemd[1]: Created slice kubepods-besteffort-poddac8f550_1b39_48eb_bc01_57837792a21e.slice - libcontainer container kubepods-besteffort-poddac8f550_1b39_48eb_bc01_57837792a21e.slice. Aug 13 07:15:40.349915 systemd[1]: Created slice kubepods-besteffort-podf71c796a_2b24_4955_a685_11764bd3ee81.slice - libcontainer container kubepods-besteffort-podf71c796a_2b24_4955_a685_11764bd3ee81.slice. 
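The "Created slice" entries show kubelet's systemd cgroup driver naming scheme: each pod gets a transient slice under its QoS class, with the dashes of the pod UID replaced by underscores. A small sketch of that mapping (the helper name is ours), which reproduces the unit names in the journal:

package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the transient systemd slice kubelet creates for a pod
// under the systemd cgroup driver: QoS class in the middle, pod UID with
// dashes turned into underscores. Guaranteed pods sit directly under
// kubepods, so qos is empty for them.
func podSliceName(qos, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qos == "" {
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
}

func main() {
	// Reproduces two of the "Created slice" unit names in the journal above.
	fmt.Println(podSliceName("burstable", "2f61af08-7750-47f4-b608-c1bb42e1730d"))
	fmt.Println(podSliceName("besteffort", "5c8ac06a-59ac-4ad7-851b-39e9a256e71f"))
}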
Aug 13 07:15:40.417226 kubelet[2504]: I0813 07:15:40.417138 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/142cde8b-2616-412a-b265-085158d0383f-config\") pod \"goldmane-768f4c5c69-ww6f8\" (UID: \"142cde8b-2616-412a-b265-085158d0383f\") " pod="calico-system/goldmane-768f4c5c69-ww6f8" Aug 13 07:15:40.417226 kubelet[2504]: I0813 07:15:40.417206 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6371703e-a0f7-43f3-a612-88598f32a9f9-calico-apiserver-certs\") pod \"calico-apiserver-7bc76445cf-h7vjm\" (UID: \"6371703e-a0f7-43f3-a612-88598f32a9f9\") " pod="calico-apiserver/calico-apiserver-7bc76445cf-h7vjm" Aug 13 07:15:40.417226 kubelet[2504]: I0813 07:15:40.417229 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5nph\" (UniqueName: \"kubernetes.io/projected/dac8f550-1b39-48eb-bc01-57837792a21e-kube-api-access-n5nph\") pod \"whisker-6b6cf55fc8-x54ms\" (UID: \"dac8f550-1b39-48eb-bc01-57837792a21e\") " pod="calico-system/whisker-6b6cf55fc8-x54ms" Aug 13 07:15:40.417454 kubelet[2504]: I0813 07:15:40.417247 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-424dk\" (UniqueName: \"kubernetes.io/projected/f71c796a-2b24-4955-a685-11764bd3ee81-kube-api-access-424dk\") pod \"calico-apiserver-7bc76445cf-6gxtp\" (UID: \"f71c796a-2b24-4955-a685-11764bd3ee81\") " pod="calico-apiserver/calico-apiserver-7bc76445cf-6gxtp" Aug 13 07:15:40.417454 kubelet[2504]: I0813 07:15:40.417266 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjdjn\" (UniqueName: \"kubernetes.io/projected/2f61af08-7750-47f4-b608-c1bb42e1730d-kube-api-access-zjdjn\") pod \"coredns-668d6bf9bc-kkjwm\" (UID: \"2f61af08-7750-47f4-b608-c1bb42e1730d\") " pod="kube-system/coredns-668d6bf9bc-kkjwm" Aug 13 07:15:40.417454 kubelet[2504]: I0813 07:15:40.417287 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f71c796a-2b24-4955-a685-11764bd3ee81-calico-apiserver-certs\") pod \"calico-apiserver-7bc76445cf-6gxtp\" (UID: \"f71c796a-2b24-4955-a685-11764bd3ee81\") " pod="calico-apiserver/calico-apiserver-7bc76445cf-6gxtp" Aug 13 07:15:40.417454 kubelet[2504]: I0813 07:15:40.417304 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c8ac06a-59ac-4ad7-851b-39e9a256e71f-tigera-ca-bundle\") pod \"calico-kube-controllers-d5b45b8d4-f96tt\" (UID: \"5c8ac06a-59ac-4ad7-851b-39e9a256e71f\") " pod="calico-system/calico-kube-controllers-d5b45b8d4-f96tt" Aug 13 07:15:40.417552 kubelet[2504]: I0813 07:15:40.417439 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7lq9\" (UniqueName: \"kubernetes.io/projected/142cde8b-2616-412a-b265-085158d0383f-kube-api-access-c7lq9\") pod \"goldmane-768f4c5c69-ww6f8\" (UID: \"142cde8b-2616-412a-b265-085158d0383f\") " pod="calico-system/goldmane-768f4c5c69-ww6f8" Aug 13 07:15:40.417552 kubelet[2504]: I0813 07:15:40.417500 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-92hxg\" (UniqueName: \"kubernetes.io/projected/6371703e-a0f7-43f3-a612-88598f32a9f9-kube-api-access-92hxg\") pod \"calico-apiserver-7bc76445cf-h7vjm\" (UID: \"6371703e-a0f7-43f3-a612-88598f32a9f9\") " pod="calico-apiserver/calico-apiserver-7bc76445cf-h7vjm" Aug 13 07:15:40.417552 kubelet[2504]: I0813 07:15:40.417516 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/142cde8b-2616-412a-b265-085158d0383f-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-ww6f8\" (UID: \"142cde8b-2616-412a-b265-085158d0383f\") " pod="calico-system/goldmane-768f4c5c69-ww6f8" Aug 13 07:15:40.417552 kubelet[2504]: I0813 07:15:40.417534 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f61af08-7750-47f4-b608-c1bb42e1730d-config-volume\") pod \"coredns-668d6bf9bc-kkjwm\" (UID: \"2f61af08-7750-47f4-b608-c1bb42e1730d\") " pod="kube-system/coredns-668d6bf9bc-kkjwm" Aug 13 07:15:40.417649 kubelet[2504]: I0813 07:15:40.417566 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wtnv\" (UniqueName: \"kubernetes.io/projected/0f44421e-e215-4efb-b425-a905d3215525-kube-api-access-6wtnv\") pod \"coredns-668d6bf9bc-8bmkx\" (UID: \"0f44421e-e215-4efb-b425-a905d3215525\") " pod="kube-system/coredns-668d6bf9bc-8bmkx" Aug 13 07:15:40.417649 kubelet[2504]: I0813 07:15:40.417589 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg42d\" (UniqueName: \"kubernetes.io/projected/5c8ac06a-59ac-4ad7-851b-39e9a256e71f-kube-api-access-bg42d\") pod \"calico-kube-controllers-d5b45b8d4-f96tt\" (UID: \"5c8ac06a-59ac-4ad7-851b-39e9a256e71f\") " pod="calico-system/calico-kube-controllers-d5b45b8d4-f96tt" Aug 13 07:15:40.417649 kubelet[2504]: I0813 07:15:40.417608 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f44421e-e215-4efb-b425-a905d3215525-config-volume\") pod \"coredns-668d6bf9bc-8bmkx\" (UID: \"0f44421e-e215-4efb-b425-a905d3215525\") " pod="kube-system/coredns-668d6bf9bc-8bmkx" Aug 13 07:15:40.417649 kubelet[2504]: I0813 07:15:40.417623 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dac8f550-1b39-48eb-bc01-57837792a21e-whisker-backend-key-pair\") pod \"whisker-6b6cf55fc8-x54ms\" (UID: \"dac8f550-1b39-48eb-bc01-57837792a21e\") " pod="calico-system/whisker-6b6cf55fc8-x54ms" Aug 13 07:15:40.417751 kubelet[2504]: I0813 07:15:40.417716 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/142cde8b-2616-412a-b265-085158d0383f-goldmane-key-pair\") pod \"goldmane-768f4c5c69-ww6f8\" (UID: \"142cde8b-2616-412a-b265-085158d0383f\") " pod="calico-system/goldmane-768f4c5c69-ww6f8" Aug 13 07:15:40.417786 kubelet[2504]: I0813 07:15:40.417760 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dac8f550-1b39-48eb-bc01-57837792a21e-whisker-ca-bundle\") pod \"whisker-6b6cf55fc8-x54ms\" (UID: \"dac8f550-1b39-48eb-bc01-57837792a21e\") " 
pod="calico-system/whisker-6b6cf55fc8-x54ms" Aug 13 07:15:40.617358 kubelet[2504]: E0813 07:15:40.617166 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:40.617912 containerd[1473]: time="2025-08-13T07:15:40.617813367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kkjwm,Uid:2f61af08-7750-47f4-b608-c1bb42e1730d,Namespace:kube-system,Attempt:0,}" Aug 13 07:15:40.624146 containerd[1473]: time="2025-08-13T07:15:40.624112583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d5b45b8d4-f96tt,Uid:5c8ac06a-59ac-4ad7-851b-39e9a256e71f,Namespace:calico-system,Attempt:0,}" Aug 13 07:15:40.629528 kubelet[2504]: E0813 07:15:40.629491 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:40.630384 containerd[1473]: time="2025-08-13T07:15:40.629997700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bmkx,Uid:0f44421e-e215-4efb-b425-a905d3215525,Namespace:kube-system,Attempt:0,}" Aug 13 07:15:40.635878 containerd[1473]: time="2025-08-13T07:15:40.635810933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc76445cf-h7vjm,Uid:6371703e-a0f7-43f3-a612-88598f32a9f9,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:15:40.642688 containerd[1473]: time="2025-08-13T07:15:40.642652690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-ww6f8,Uid:142cde8b-2616-412a-b265-085158d0383f,Namespace:calico-system,Attempt:0,}" Aug 13 07:15:40.666797 containerd[1473]: time="2025-08-13T07:15:40.666737463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b6cf55fc8-x54ms,Uid:dac8f550-1b39-48eb-bc01-57837792a21e,Namespace:calico-system,Attempt:0,}" Aug 13 07:15:40.667580 containerd[1473]: time="2025-08-13T07:15:40.667126245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc76445cf-6gxtp,Uid:f71c796a-2b24-4955-a685-11764bd3ee81,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:15:40.790112 containerd[1473]: time="2025-08-13T07:15:40.789888090Z" level=error msg="Failed to destroy network for sandbox \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.790726 containerd[1473]: time="2025-08-13T07:15:40.790693435Z" level=error msg="encountered an error cleaning up failed sandbox \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.791004 containerd[1473]: time="2025-08-13T07:15:40.790974203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kkjwm,Uid:2f61af08-7750-47f4-b608-c1bb42e1730d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.792850 containerd[1473]: time="2025-08-13T07:15:40.792676915Z" level=error msg="Failed to destroy network for sandbox \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.793393 containerd[1473]: time="2025-08-13T07:15:40.793366702Z" level=error msg="encountered an error cleaning up failed sandbox \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.793633 containerd[1473]: time="2025-08-13T07:15:40.793598177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc76445cf-h7vjm,Uid:6371703e-a0f7-43f3-a612-88598f32a9f9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.793887 containerd[1473]: time="2025-08-13T07:15:40.793843328Z" level=error msg="Failed to destroy network for sandbox \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.794443 containerd[1473]: time="2025-08-13T07:15:40.794419421Z" level=error msg="encountered an error cleaning up failed sandbox \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.794790 containerd[1473]: time="2025-08-13T07:15:40.794518989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d5b45b8d4-f96tt,Uid:5c8ac06a-59ac-4ad7-851b-39e9a256e71f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.809939 kubelet[2504]: E0813 07:15:40.809884 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.810363 kubelet[2504]: E0813 07:15:40.809986 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d5b45b8d4-f96tt" Aug 13 07:15:40.810363 kubelet[2504]: E0813 07:15:40.810012 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d5b45b8d4-f96tt" Aug 13 07:15:40.810363 kubelet[2504]: E0813 07:15:40.810058 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d5b45b8d4-f96tt_calico-system(5c8ac06a-59ac-4ad7-851b-39e9a256e71f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d5b45b8d4-f96tt_calico-system(5c8ac06a-59ac-4ad7-851b-39e9a256e71f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d5b45b8d4-f96tt" podUID="5c8ac06a-59ac-4ad7-851b-39e9a256e71f" Aug 13 07:15:40.810916 kubelet[2504]: E0813 07:15:40.810606 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.810916 kubelet[2504]: E0813 07:15:40.810675 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc76445cf-h7vjm" Aug 13 07:15:40.810916 kubelet[2504]: E0813 07:15:40.810699 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc76445cf-h7vjm" Aug 13 07:15:40.811030 kubelet[2504]: E0813 07:15:40.810743 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bc76445cf-h7vjm_calico-apiserver(6371703e-a0f7-43f3-a612-88598f32a9f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bc76445cf-h7vjm_calico-apiserver(6371703e-a0f7-43f3-a612-88598f32a9f9)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bc76445cf-h7vjm" podUID="6371703e-a0f7-43f3-a612-88598f32a9f9" Aug 13 07:15:40.811030 kubelet[2504]: E0813 07:15:40.810790 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.811030 kubelet[2504]: E0813 07:15:40.810808 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kkjwm" Aug 13 07:15:40.811132 kubelet[2504]: E0813 07:15:40.810822 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kkjwm" Aug 13 07:15:40.811132 kubelet[2504]: E0813 07:15:40.810842 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-kkjwm_kube-system(2f61af08-7750-47f4-b608-c1bb42e1730d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kkjwm_kube-system(2f61af08-7750-47f4-b608-c1bb42e1730d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kkjwm" podUID="2f61af08-7750-47f4-b608-c1bb42e1730d" Aug 13 07:15:40.823692 kubelet[2504]: I0813 07:15:40.823169 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Aug 13 07:15:40.824052 containerd[1473]: time="2025-08-13T07:15:40.824022193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:15:40.836019 kubelet[2504]: I0813 07:15:40.835082 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Aug 13 07:15:40.851116 kubelet[2504]: I0813 07:15:40.850526 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Aug 13 07:15:40.879179 containerd[1473]: time="2025-08-13T07:15:40.879039468Z" level=info msg="StopPodSandbox for 
\"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\"" Aug 13 07:15:40.882475 containerd[1473]: time="2025-08-13T07:15:40.882427250Z" level=info msg="Ensure that sandbox 31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4 in task-service has been cleanup successfully" Aug 13 07:15:40.892524 containerd[1473]: time="2025-08-13T07:15:40.892459035Z" level=info msg="StopPodSandbox for \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\"" Aug 13 07:15:40.892900 containerd[1473]: time="2025-08-13T07:15:40.892849560Z" level=info msg="Ensure that sandbox 354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e in task-service has been cleanup successfully" Aug 13 07:15:40.911256 containerd[1473]: time="2025-08-13T07:15:40.911162899Z" level=error msg="Failed to destroy network for sandbox \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.911891 containerd[1473]: time="2025-08-13T07:15:40.892496506Z" level=info msg="StopPodSandbox for \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\"" Aug 13 07:15:40.911891 containerd[1473]: time="2025-08-13T07:15:40.911791472Z" level=info msg="Ensure that sandbox 6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9 in task-service has been cleanup successfully" Aug 13 07:15:40.914486 containerd[1473]: time="2025-08-13T07:15:40.914439140Z" level=error msg="encountered an error cleaning up failed sandbox \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.915117 containerd[1473]: time="2025-08-13T07:15:40.914729837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bmkx,Uid:0f44421e-e215-4efb-b425-a905d3215525,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.928261 kubelet[2504]: E0813 07:15:40.928178 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.928574 kubelet[2504]: E0813 07:15:40.928555 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bmkx" Aug 13 07:15:40.928798 kubelet[2504]: E0813 07:15:40.928780 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bmkx" Aug 13 07:15:40.929586 kubelet[2504]: E0813 07:15:40.929288 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8bmkx_kube-system(0f44421e-e215-4efb-b425-a905d3215525)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8bmkx_kube-system(0f44421e-e215-4efb-b425-a905d3215525)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8bmkx" podUID="0f44421e-e215-4efb-b425-a905d3215525" Aug 13 07:15:40.967594 containerd[1473]: time="2025-08-13T07:15:40.967507871Z" level=error msg="Failed to destroy network for sandbox \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.970493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff-shm.mount: Deactivated successfully. Aug 13 07:15:40.976243 kubelet[2504]: E0813 07:15:40.972605 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.976243 kubelet[2504]: E0813 07:15:40.972682 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-ww6f8" Aug 13 07:15:40.976243 kubelet[2504]: E0813 07:15:40.972706 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-ww6f8" Aug 13 07:15:40.976395 containerd[1473]: time="2025-08-13T07:15:40.970885774Z" level=error msg="encountered an error cleaning up failed sandbox \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.976395 containerd[1473]: time="2025-08-13T07:15:40.971486344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-ww6f8,Uid:142cde8b-2616-412a-b265-085158d0383f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.976395 containerd[1473]: time="2025-08-13T07:15:40.973068378Z" level=error msg="Failed to destroy network for sandbox \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.976395 containerd[1473]: time="2025-08-13T07:15:40.973641587Z" level=error msg="encountered an error cleaning up failed sandbox \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.976395 containerd[1473]: time="2025-08-13T07:15:40.973728941Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc76445cf-6gxtp,Uid:f71c796a-2b24-4955-a685-11764bd3ee81,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.975468 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1-shm.mount: Deactivated successfully. 
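
Every sandbox failure above shares one root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file the calico/node container writes only once it is running with /var/lib/calico/ mounted. A minimal sketch of that gate, assuming nothing beyond the path and hint text quoted verbatim in the log (illustrative only, not the actual Calico source):

// Hypothetical reconstruction of the readiness gate behind the repeated
// "stat /var/lib/calico/nodename" errors above -- not the real Calico code.
package main

import (
	"fmt"
	"os"
)

// Path and remediation hint copied from the log entries above.
const nodenameFile = "/var/lib/calico/nodename"

func nodeName() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Until calico/node starts and mounts /var/lib/calico/, every
		// CNI add/delete fails here, which is why kubelet keeps retrying.
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return string(data), nil
}

func main() {
	name, err := nodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}
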
Aug 13 07:15:40.976615 kubelet[2504]: E0813 07:15:40.972749 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-ww6f8_calico-system(142cde8b-2616-412a-b265-085158d0383f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-ww6f8_calico-system(142cde8b-2616-412a-b265-085158d0383f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-ww6f8" podUID="142cde8b-2616-412a-b265-085158d0383f" Aug 13 07:15:40.977718 kubelet[2504]: E0813 07:15:40.976782 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.977718 kubelet[2504]: E0813 07:15:40.976826 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc76445cf-6gxtp" Aug 13 07:15:40.977718 kubelet[2504]: E0813 07:15:40.976845 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bc76445cf-6gxtp" Aug 13 07:15:40.977834 kubelet[2504]: E0813 07:15:40.976908 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bc76445cf-6gxtp_calico-apiserver(f71c796a-2b24-4955-a685-11764bd3ee81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bc76445cf-6gxtp_calico-apiserver(f71c796a-2b24-4955-a685-11764bd3ee81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bc76445cf-6gxtp" podUID="f71c796a-2b24-4955-a685-11764bd3ee81" Aug 13 07:15:40.979465 containerd[1473]: time="2025-08-13T07:15:40.979404083Z" level=error msg="StopPodSandbox for \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\" failed" error="failed to destroy network for sandbox \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Aug 13 07:15:40.979900 kubelet[2504]: E0813 07:15:40.979670 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Aug 13 07:15:40.979900 kubelet[2504]: E0813 07:15:40.979739 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9"} Aug 13 07:15:40.979900 kubelet[2504]: E0813 07:15:40.979807 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6371703e-a0f7-43f3-a612-88598f32a9f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:40.979900 kubelet[2504]: E0813 07:15:40.979835 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6371703e-a0f7-43f3-a612-88598f32a9f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bc76445cf-h7vjm" podUID="6371703e-a0f7-43f3-a612-88598f32a9f9" Aug 13 07:15:40.986438 containerd[1473]: time="2025-08-13T07:15:40.986373971Z" level=error msg="Failed to destroy network for sandbox \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.988945 containerd[1473]: time="2025-08-13T07:15:40.986906423Z" level=error msg="encountered an error cleaning up failed sandbox \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.988945 containerd[1473]: time="2025-08-13T07:15:40.986965383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b6cf55fc8-x54ms,Uid:dac8f550-1b39-48eb-bc01-57837792a21e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.988945 containerd[1473]: time="2025-08-13T07:15:40.988634662Z" level=error msg="StopPodSandbox for 
\"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\" failed" error="failed to destroy network for sandbox \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.989126 kubelet[2504]: E0813 07:15:40.988742 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.989126 kubelet[2504]: E0813 07:15:40.988797 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b6cf55fc8-x54ms" Aug 13 07:15:40.989126 kubelet[2504]: E0813 07:15:40.988815 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b6cf55fc8-x54ms" Aug 13 07:15:40.989268 kubelet[2504]: E0813 07:15:40.988873 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6b6cf55fc8-x54ms_calico-system(dac8f550-1b39-48eb-bc01-57837792a21e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6b6cf55fc8-x54ms_calico-system(dac8f550-1b39-48eb-bc01-57837792a21e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b6cf55fc8-x54ms" podUID="dac8f550-1b39-48eb-bc01-57837792a21e" Aug 13 07:15:40.989268 kubelet[2504]: E0813 07:15:40.988995 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Aug 13 07:15:40.989268 kubelet[2504]: E0813 07:15:40.989053 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e"} Aug 13 07:15:40.989268 kubelet[2504]: E0813 07:15:40.989089 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"5c8ac06a-59ac-4ad7-851b-39e9a256e71f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:40.989449 kubelet[2504]: E0813 07:15:40.989116 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c8ac06a-59ac-4ad7-851b-39e9a256e71f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d5b45b8d4-f96tt" podUID="5c8ac06a-59ac-4ad7-851b-39e9a256e71f" Aug 13 07:15:40.989306 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede-shm.mount: Deactivated successfully. Aug 13 07:15:40.994854 containerd[1473]: time="2025-08-13T07:15:40.994810407Z" level=error msg="StopPodSandbox for \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\" failed" error="failed to destroy network for sandbox \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:40.995096 kubelet[2504]: E0813 07:15:40.995049 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Aug 13 07:15:40.995170 kubelet[2504]: E0813 07:15:40.995109 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4"} Aug 13 07:15:40.995170 kubelet[2504]: E0813 07:15:40.995142 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f61af08-7750-47f4-b608-c1bb42e1730d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:40.995286 kubelet[2504]: E0813 07:15:40.995166 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f61af08-7750-47f4-b608-c1bb42e1730d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kkjwm" podUID="2f61af08-7750-47f4-b608-c1bb42e1730d" Aug 13 07:15:41.704087 systemd[1]: Created slice kubepods-besteffort-podff96eac2_a650_4688_baf4_c624d8dfca9d.slice - libcontainer container kubepods-besteffort-podff96eac2_a650_4688_baf4_c624d8dfca9d.slice. Aug 13 07:15:41.706332 containerd[1473]: time="2025-08-13T07:15:41.706290462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k8p5g,Uid:ff96eac2-a650-4688-baf4-c624d8dfca9d,Namespace:calico-system,Attempt:0,}" Aug 13 07:15:41.805330 containerd[1473]: time="2025-08-13T07:15:41.805259697Z" level=error msg="Failed to destroy network for sandbox \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:41.805780 containerd[1473]: time="2025-08-13T07:15:41.805740380Z" level=error msg="encountered an error cleaning up failed sandbox \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:41.805829 containerd[1473]: time="2025-08-13T07:15:41.805803419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k8p5g,Uid:ff96eac2-a650-4688-baf4-c624d8dfca9d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:41.806160 kubelet[2504]: E0813 07:15:41.806092 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:41.806357 kubelet[2504]: E0813 07:15:41.806192 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k8p5g" Aug 13 07:15:41.806357 kubelet[2504]: E0813 07:15:41.806215 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k8p5g" Aug 13 07:15:41.806357 kubelet[2504]: E0813 07:15:41.806263 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-k8p5g_calico-system(ff96eac2-a650-4688-baf4-c624d8dfca9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k8p5g_calico-system(ff96eac2-a650-4688-baf4-c624d8dfca9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k8p5g" podUID="ff96eac2-a650-4688-baf4-c624d8dfca9d" Aug 13 07:15:41.854018 kubelet[2504]: I0813 07:15:41.853971 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Aug 13 07:15:41.854503 containerd[1473]: time="2025-08-13T07:15:41.854455486Z" level=info msg="StopPodSandbox for \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\"" Aug 13 07:15:41.854743 containerd[1473]: time="2025-08-13T07:15:41.854707419Z" level=info msg="Ensure that sandbox 25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede in task-service has been cleanup successfully" Aug 13 07:15:41.855308 kubelet[2504]: I0813 07:15:41.855276 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Aug 13 07:15:41.855682 containerd[1473]: time="2025-08-13T07:15:41.855651525Z" level=info msg="StopPodSandbox for \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\"" Aug 13 07:15:41.856472 containerd[1473]: time="2025-08-13T07:15:41.856252385Z" level=info msg="Ensure that sandbox d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5 in task-service has been cleanup successfully" Aug 13 07:15:41.857465 kubelet[2504]: I0813 07:15:41.857318 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Aug 13 07:15:41.857990 containerd[1473]: time="2025-08-13T07:15:41.857955668Z" level=info msg="StopPodSandbox for \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\"" Aug 13 07:15:41.858204 containerd[1473]: time="2025-08-13T07:15:41.858145635Z" level=info msg="Ensure that sandbox 86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff in task-service has been cleanup successfully" Aug 13 07:15:41.859605 kubelet[2504]: I0813 07:15:41.859510 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Aug 13 07:15:41.860271 containerd[1473]: time="2025-08-13T07:15:41.860232558Z" level=info msg="StopPodSandbox for \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\"" Aug 13 07:15:41.860727 containerd[1473]: time="2025-08-13T07:15:41.860487889Z" level=info msg="Ensure that sandbox 4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57 in task-service has been cleanup successfully" Aug 13 07:15:41.865902 containerd[1473]: time="2025-08-13T07:15:41.865834872Z" level=info msg="StopPodSandbox for \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\"" Aug 13 07:15:41.867204 kubelet[2504]: I0813 07:15:41.864826 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Aug 13 
07:15:41.871368 containerd[1473]: time="2025-08-13T07:15:41.870290129Z" level=info msg="Ensure that sandbox 0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1 in task-service has been cleanup successfully" Aug 13 07:15:41.906089 containerd[1473]: time="2025-08-13T07:15:41.906015470Z" level=error msg="StopPodSandbox for \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\" failed" error="failed to destroy network for sandbox \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:41.906340 kubelet[2504]: E0813 07:15:41.906297 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Aug 13 07:15:41.906419 kubelet[2504]: E0813 07:15:41.906362 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5"} Aug 13 07:15:41.906419 kubelet[2504]: E0813 07:15:41.906402 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0f44421e-e215-4efb-b425-a905d3215525\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:41.906507 kubelet[2504]: E0813 07:15:41.906427 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0f44421e-e215-4efb-b425-a905d3215525\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8bmkx" podUID="0f44421e-e215-4efb-b425-a905d3215525" Aug 13 07:15:41.911098 containerd[1473]: time="2025-08-13T07:15:41.911009059Z" level=error msg="StopPodSandbox for \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\" failed" error="failed to destroy network for sandbox \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:41.911431 kubelet[2504]: E0813 07:15:41.911377 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" podSandboxID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Aug 13 07:15:41.911500 kubelet[2504]: E0813 07:15:41.911444 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff"} Aug 13 07:15:41.911500 kubelet[2504]: E0813 07:15:41.911490 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"142cde8b-2616-412a-b265-085158d0383f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:41.911582 kubelet[2504]: E0813 07:15:41.911516 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"142cde8b-2616-412a-b265-085158d0383f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-ww6f8" podUID="142cde8b-2616-412a-b265-085158d0383f" Aug 13 07:15:41.913463 containerd[1473]: time="2025-08-13T07:15:41.913415905Z" level=error msg="StopPodSandbox for \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\" failed" error="failed to destroy network for sandbox \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:41.913776 kubelet[2504]: E0813 07:15:41.913688 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Aug 13 07:15:41.913776 kubelet[2504]: E0813 07:15:41.913755 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede"} Aug 13 07:15:41.913838 kubelet[2504]: E0813 07:15:41.913786 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dac8f550-1b39-48eb-bc01-57837792a21e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:41.913838 kubelet[2504]: E0813 07:15:41.913805 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"dac8f550-1b39-48eb-bc01-57837792a21e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b6cf55fc8-x54ms" podUID="dac8f550-1b39-48eb-bc01-57837792a21e" Aug 13 07:15:41.919085 containerd[1473]: time="2025-08-13T07:15:41.918957194Z" level=error msg="StopPodSandbox for \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\" failed" error="failed to destroy network for sandbox \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:41.919220 kubelet[2504]: E0813 07:15:41.919191 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Aug 13 07:15:41.919271 kubelet[2504]: E0813 07:15:41.919228 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1"} Aug 13 07:15:41.919294 kubelet[2504]: E0813 07:15:41.919253 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f71c796a-2b24-4955-a685-11764bd3ee81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:41.919365 kubelet[2504]: E0813 07:15:41.919294 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f71c796a-2b24-4955-a685-11764bd3ee81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bc76445cf-6gxtp" podUID="f71c796a-2b24-4955-a685-11764bd3ee81" Aug 13 07:15:41.919421 containerd[1473]: time="2025-08-13T07:15:41.919387733Z" level=error msg="StopPodSandbox for \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\" failed" error="failed to destroy network for sandbox \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:15:41.919597 kubelet[2504]: E0813 07:15:41.919573 2504 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Aug 13 07:15:41.919633 kubelet[2504]: E0813 07:15:41.919598 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57"} Aug 13 07:15:41.919633 kubelet[2504]: E0813 07:15:41.919620 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ff96eac2-a650-4688-baf4-c624d8dfca9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:15:41.919713 kubelet[2504]: E0813 07:15:41.919638 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff96eac2-a650-4688-baf4-c624d8dfca9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k8p5g" podUID="ff96eac2-a650-4688-baf4-c624d8dfca9d" Aug 13 07:15:41.943339 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57-shm.mount: Deactivated successfully. Aug 13 07:15:46.946453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540818678.mount: Deactivated successfully. 
Aug 13 07:15:47.503374 containerd[1473]: time="2025-08-13T07:15:47.503300632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:47.504322 containerd[1473]: time="2025-08-13T07:15:47.504256187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:15:47.505899 containerd[1473]: time="2025-08-13T07:15:47.505835614Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:47.507912 containerd[1473]: time="2025-08-13T07:15:47.507855588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:47.508428 containerd[1473]: time="2025-08-13T07:15:47.508390703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.68007183s" Aug 13 07:15:47.508469 containerd[1473]: time="2025-08-13T07:15:47.508433413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:15:47.527136 containerd[1473]: time="2025-08-13T07:15:47.527070201Z" level=info msg="CreateContainer within sandbox \"11bf230913098534525983559414e63890563b9c1aaa5059cd1dfab1a05f09e6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:15:47.551413 containerd[1473]: time="2025-08-13T07:15:47.551351131Z" level=info msg="CreateContainer within sandbox \"11bf230913098534525983559414e63890563b9c1aaa5059cd1dfab1a05f09e6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1541fc312ff0000779156e233c81d80ef85646918f3db4185dc3d9e2d0a138ef\"" Aug 13 07:15:47.551991 containerd[1473]: time="2025-08-13T07:15:47.551929789Z" level=info msg="StartContainer for \"1541fc312ff0000779156e233c81d80ef85646918f3db4185dc3d9e2d0a138ef\"" Aug 13 07:15:47.603147 systemd[1]: Started cri-containerd-1541fc312ff0000779156e233c81d80ef85646918f3db4185dc3d9e2d0a138ef.scope - libcontainer container 1541fc312ff0000779156e233c81d80ef85646918f3db4185dc3d9e2d0a138ef. Aug 13 07:15:47.636419 containerd[1473]: time="2025-08-13T07:15:47.636372698Z" level=info msg="StartContainer for \"1541fc312ff0000779156e233c81d80ef85646918f3db4185dc3d9e2d0a138ef\" returns successfully" Aug 13 07:15:47.719850 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:15:47.720099 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
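
The pull that completes above is the operation the earlier PullImage entry started, followed by CreateContainer and StartContainer for calico-node. A sketch of the same pull with containerd's Go client, assuming the default socket path and the "k8s.io" namespace that CRI-managed images live in; the image reference is taken from the log:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images are kept in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, _ := img.Size(ctx)
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
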
Aug 13 07:15:47.798393 containerd[1473]: time="2025-08-13T07:15:47.797905057Z" level=info msg="StopPodSandbox for \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\"" Aug 13 07:15:47.936530 kubelet[2504]: I0813 07:15:47.936439 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8k7nb" podStartSLOduration=1.156819018 podStartE2EDuration="18.936417964s" podCreationTimestamp="2025-08-13 07:15:29 +0000 UTC" firstStartedPulling="2025-08-13 07:15:29.729480992 +0000 UTC m=+19.123572444" lastFinishedPulling="2025-08-13 07:15:47.509079938 +0000 UTC m=+36.903171390" observedRunningTime="2025-08-13 07:15:47.936114103 +0000 UTC m=+37.330205565" watchObservedRunningTime="2025-08-13 07:15:47.936417964 +0000 UTC m=+37.330509416" Aug 13 07:15:47.961165 containerd[1473]: 2025-08-13 07:15:47.858 [INFO][3780] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Aug 13 07:15:47.961165 containerd[1473]: 2025-08-13 07:15:47.859 [INFO][3780] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" iface="eth0" netns="/var/run/netns/cni-46b07177-7aac-a6a9-11c3-0538390f01e2" Aug 13 07:15:47.961165 containerd[1473]: 2025-08-13 07:15:47.859 [INFO][3780] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" iface="eth0" netns="/var/run/netns/cni-46b07177-7aac-a6a9-11c3-0538390f01e2" Aug 13 07:15:47.961165 containerd[1473]: 2025-08-13 07:15:47.859 [INFO][3780] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" iface="eth0" netns="/var/run/netns/cni-46b07177-7aac-a6a9-11c3-0538390f01e2" Aug 13 07:15:47.961165 containerd[1473]: 2025-08-13 07:15:47.859 [INFO][3780] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Aug 13 07:15:47.961165 containerd[1473]: 2025-08-13 07:15:47.859 [INFO][3780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Aug 13 07:15:47.961165 containerd[1473]: 2025-08-13 07:15:47.935 [INFO][3792] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" HandleID="k8s-pod-network.25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Workload="localhost-k8s-whisker--6b6cf55fc8--x54ms-eth0" Aug 13 07:15:47.961165 containerd[1473]: 2025-08-13 07:15:47.937 [INFO][3792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:47.961165 containerd[1473]: 2025-08-13 07:15:47.937 [INFO][3792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:47.961165 containerd[1473]: 2025-08-13 07:15:47.952 [WARNING][3792] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" HandleID="k8s-pod-network.25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Workload="localhost-k8s-whisker--6b6cf55fc8--x54ms-eth0" Aug 13 07:15:47.961165 containerd[1473]: 2025-08-13 07:15:47.952 [INFO][3792] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" HandleID="k8s-pod-network.25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Workload="localhost-k8s-whisker--6b6cf55fc8--x54ms-eth0" Aug 13 07:15:47.961165 containerd[1473]: 2025-08-13 07:15:47.953 [INFO][3792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:47.961165 containerd[1473]: 2025-08-13 07:15:47.957 [INFO][3780] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Aug 13 07:15:47.962182 containerd[1473]: time="2025-08-13T07:15:47.962137226Z" level=info msg="TearDown network for sandbox \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\" successfully" Aug 13 07:15:47.962182 containerd[1473]: time="2025-08-13T07:15:47.962176149Z" level=info msg="StopPodSandbox for \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\" returns successfully" Aug 13 07:15:47.965175 systemd[1]: run-netns-cni\x2d46b07177\x2d7aac\x2da6a9\x2d11c3\x2d0538390f01e2.mount: Deactivated successfully. Aug 13 07:15:48.068102 kubelet[2504]: I0813 07:15:48.068059 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dac8f550-1b39-48eb-bc01-57837792a21e-whisker-backend-key-pair\") pod \"dac8f550-1b39-48eb-bc01-57837792a21e\" (UID: \"dac8f550-1b39-48eb-bc01-57837792a21e\") " Aug 13 07:15:48.068728 kubelet[2504]: I0813 07:15:48.068121 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5nph\" (UniqueName: \"kubernetes.io/projected/dac8f550-1b39-48eb-bc01-57837792a21e-kube-api-access-n5nph\") pod \"dac8f550-1b39-48eb-bc01-57837792a21e\" (UID: \"dac8f550-1b39-48eb-bc01-57837792a21e\") " Aug 13 07:15:48.068728 kubelet[2504]: I0813 07:15:48.068161 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dac8f550-1b39-48eb-bc01-57837792a21e-whisker-ca-bundle\") pod \"dac8f550-1b39-48eb-bc01-57837792a21e\" (UID: \"dac8f550-1b39-48eb-bc01-57837792a21e\") " Aug 13 07:15:48.068802 kubelet[2504]: I0813 07:15:48.068776 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dac8f550-1b39-48eb-bc01-57837792a21e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dac8f550-1b39-48eb-bc01-57837792a21e" (UID: "dac8f550-1b39-48eb-bc01-57837792a21e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:15:48.075439 systemd[1]: var-lib-kubelet-pods-dac8f550\x2d1b39\x2d48eb\x2dbc01\x2d57837792a21e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
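
The run-netns-cni\x2d… and var-lib-kubelet-pods-… unit names in this stretch are systemd's path escaping at work: "/" becomes "-" and a literal "-" is hex-escaped as \x2d. A small sketch of that mapping, covering only the subset visible here (full systemd-escape also hex-escapes other special characters):

package main

import (
	"fmt"
	"strings"
)

// escapePath mimics a subset of systemd path escaping as seen in the
// mount-unit names above: strip the leading "/", turn "/" into "-", and
// hex-escape "-" within components as "\x2d".
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	parts := strings.Split(p, "/")
	for i, part := range parts {
		parts[i] = strings.ReplaceAll(part, "-", `\x2d`)
	}
	return strings.Join(parts, "-")
}

func main() {
	// Netns path from the Calico teardown above -> unit name from the log.
	fmt.Println(escapePath("/run/netns/cni-46b07177-7aac-a6a9-11c3-0538390f01e2") + ".mount")
	// Prints: run-netns-cni\x2d46b07177\x2d7aac\x2da6a9\x2d11c3\x2d0538390f01e2.mount
}
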
Aug 13 07:15:48.077386 kubelet[2504]: I0813 07:15:48.076221 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac8f550-1b39-48eb-bc01-57837792a21e-kube-api-access-n5nph" (OuterVolumeSpecName: "kube-api-access-n5nph") pod "dac8f550-1b39-48eb-bc01-57837792a21e" (UID: "dac8f550-1b39-48eb-bc01-57837792a21e"). InnerVolumeSpecName "kube-api-access-n5nph". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:15:48.078071 kubelet[2504]: I0813 07:15:48.078014 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dac8f550-1b39-48eb-bc01-57837792a21e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dac8f550-1b39-48eb-bc01-57837792a21e" (UID: "dac8f550-1b39-48eb-bc01-57837792a21e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 07:15:48.078412 systemd[1]: var-lib-kubelet-pods-dac8f550\x2d1b39\x2d48eb\x2dbc01\x2d57837792a21e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn5nph.mount: Deactivated successfully. Aug 13 07:15:48.169396 kubelet[2504]: I0813 07:15:48.169343 2504 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dac8f550-1b39-48eb-bc01-57837792a21e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 13 07:15:48.169396 kubelet[2504]: I0813 07:15:48.169376 2504 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dac8f550-1b39-48eb-bc01-57837792a21e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 13 07:15:48.169396 kubelet[2504]: I0813 07:15:48.169385 2504 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n5nph\" (UniqueName: \"kubernetes.io/projected/dac8f550-1b39-48eb-bc01-57837792a21e-kube-api-access-n5nph\") on node \"localhost\" DevicePath \"\"" Aug 13 07:15:48.706641 systemd[1]: Removed slice kubepods-besteffort-poddac8f550_1b39_48eb_bc01_57837792a21e.slice - libcontainer container kubepods-besteffort-poddac8f550_1b39_48eb_bc01_57837792a21e.slice. Aug 13 07:15:48.975186 systemd[1]: Created slice kubepods-besteffort-pode2888387_5510_44ef_abbd_7c1261b0bf09.slice - libcontainer container kubepods-besteffort-pode2888387_5510_44ef_abbd_7c1261b0bf09.slice. 
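
The kubepods slices created and removed in this log encode the pod's QoS class and UID: kubelet's systemd cgroup driver joins "kubepods", "besteffort", and the pod UID with "-" replaced by "_". A sketch of the naming, inferred from the slice names in these entries:

package main

import (
	"fmt"
	"strings"
)

// besteffortSlice reproduces the slice-name convention visible in the log
// for BestEffort-QoS pods (inferred from the entries, not kubelet source).
func besteffortSlice(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID of the whisker-787f667446-wf97w pod from the entries below.
	fmt.Println(besteffortSlice("e2888387-5510-44ef-abbd-7c1261b0bf09"))
	// kubepods-besteffort-pode2888387_5510_44ef_abbd_7c1261b0bf09.slice
}
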
Aug 13 07:15:49.077591 kubelet[2504]: I0813 07:15:49.077522 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2888387-5510-44ef-abbd-7c1261b0bf09-whisker-backend-key-pair\") pod \"whisker-787f667446-wf97w\" (UID: \"e2888387-5510-44ef-abbd-7c1261b0bf09\") " pod="calico-system/whisker-787f667446-wf97w" Aug 13 07:15:49.077591 kubelet[2504]: I0813 07:15:49.077593 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2888387-5510-44ef-abbd-7c1261b0bf09-whisker-ca-bundle\") pod \"whisker-787f667446-wf97w\" (UID: \"e2888387-5510-44ef-abbd-7c1261b0bf09\") " pod="calico-system/whisker-787f667446-wf97w" Aug 13 07:15:49.077591 kubelet[2504]: I0813 07:15:49.077610 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69mmf\" (UniqueName: \"kubernetes.io/projected/e2888387-5510-44ef-abbd-7c1261b0bf09-kube-api-access-69mmf\") pod \"whisker-787f667446-wf97w\" (UID: \"e2888387-5510-44ef-abbd-7c1261b0bf09\") " pod="calico-system/whisker-787f667446-wf97w" Aug 13 07:15:49.254897 kernel: bpftool[3991]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:15:49.279853 containerd[1473]: time="2025-08-13T07:15:49.279795796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-787f667446-wf97w,Uid:e2888387-5510-44ef-abbd-7c1261b0bf09,Namespace:calico-system,Attempt:0,}" Aug 13 07:15:49.683428 systemd-networkd[1400]: vxlan.calico: Link UP Aug 13 07:15:49.683439 systemd-networkd[1400]: vxlan.calico: Gained carrier Aug 13 07:15:50.248516 systemd-networkd[1400]: cali651c8ea2ffb: Link UP Aug 13 07:15:50.248807 systemd-networkd[1400]: cali651c8ea2ffb: Gained carrier Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.176 [INFO][4064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--787f667446--wf97w-eth0 whisker-787f667446- calico-system e2888387-5510-44ef-abbd-7c1261b0bf09 904 0 2025-08-13 07:15:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:787f667446 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-787f667446-wf97w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali651c8ea2ffb [] [] }} ContainerID="437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" Namespace="calico-system" Pod="whisker-787f667446-wf97w" WorkloadEndpoint="localhost-k8s-whisker--787f667446--wf97w-" Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.176 [INFO][4064] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" Namespace="calico-system" Pod="whisker-787f667446-wf97w" WorkloadEndpoint="localhost-k8s-whisker--787f667446--wf97w-eth0" Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.207 [INFO][4078] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" HandleID="k8s-pod-network.437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" Workload="localhost-k8s-whisker--787f667446--wf97w-eth0" Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.207 [INFO][4078] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" HandleID="k8s-pod-network.437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" Workload="localhost-k8s-whisker--787f667446--wf97w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bcfc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-787f667446-wf97w", "timestamp":"2025-08-13 07:15:50.2070275 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.207 [INFO][4078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.207 [INFO][4078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.207 [INFO][4078] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.214 [INFO][4078] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" host="localhost" Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.220 [INFO][4078] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.225 [INFO][4078] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.226 [INFO][4078] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.229 [INFO][4078] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.229 [INFO][4078] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" host="localhost" Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.230 [INFO][4078] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369 Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.234 [INFO][4078] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" host="localhost" Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.242 [INFO][4078] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" host="localhost" Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.242 [INFO][4078] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" host="localhost" Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.242 [INFO][4078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:15:50.295145 containerd[1473]: 2025-08-13 07:15:50.242 [INFO][4078] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" HandleID="k8s-pod-network.437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" Workload="localhost-k8s-whisker--787f667446--wf97w-eth0" Aug 13 07:15:50.296395 containerd[1473]: 2025-08-13 07:15:50.246 [INFO][4064] cni-plugin/k8s.go 418: Populated endpoint ContainerID="437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" Namespace="calico-system" Pod="whisker-787f667446-wf97w" WorkloadEndpoint="localhost-k8s-whisker--787f667446--wf97w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--787f667446--wf97w-eth0", GenerateName:"whisker-787f667446-", Namespace:"calico-system", SelfLink:"", UID:"e2888387-5510-44ef-abbd-7c1261b0bf09", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"787f667446", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-787f667446-wf97w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali651c8ea2ffb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:50.296395 containerd[1473]: 2025-08-13 07:15:50.246 [INFO][4064] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" Namespace="calico-system" Pod="whisker-787f667446-wf97w" WorkloadEndpoint="localhost-k8s-whisker--787f667446--wf97w-eth0" Aug 13 07:15:50.296395 containerd[1473]: 2025-08-13 07:15:50.246 [INFO][4064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali651c8ea2ffb ContainerID="437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" Namespace="calico-system" Pod="whisker-787f667446-wf97w" WorkloadEndpoint="localhost-k8s-whisker--787f667446--wf97w-eth0" Aug 13 07:15:50.296395 containerd[1473]: 2025-08-13 07:15:50.250 [INFO][4064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" Namespace="calico-system" Pod="whisker-787f667446-wf97w" WorkloadEndpoint="localhost-k8s-whisker--787f667446--wf97w-eth0" Aug 13 07:15:50.296395 containerd[1473]: 2025-08-13 07:15:50.251 [INFO][4064] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" Namespace="calico-system" Pod="whisker-787f667446-wf97w" WorkloadEndpoint="localhost-k8s-whisker--787f667446--wf97w-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--787f667446--wf97w-eth0", GenerateName:"whisker-787f667446-", Namespace:"calico-system", SelfLink:"", UID:"e2888387-5510-44ef-abbd-7c1261b0bf09", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"787f667446", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369", Pod:"whisker-787f667446-wf97w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali651c8ea2ffb", MAC:"66:59:2b:b3:db:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:50.296395 containerd[1473]: 2025-08-13 07:15:50.290 [INFO][4064] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369" Namespace="calico-system" Pod="whisker-787f667446-wf97w" WorkloadEndpoint="localhost-k8s-whisker--787f667446--wf97w-eth0" Aug 13 07:15:50.327072 containerd[1473]: time="2025-08-13T07:15:50.326888571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:50.327072 containerd[1473]: time="2025-08-13T07:15:50.326970425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:50.327072 containerd[1473]: time="2025-08-13T07:15:50.326988850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:50.327307 containerd[1473]: time="2025-08-13T07:15:50.327102884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:50.349908 systemd[1]: run-containerd-runc-k8s.io-437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369-runc.EWY4DL.mount: Deactivated successfully. Aug 13 07:15:50.364033 systemd[1]: Started cri-containerd-437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369.scope - libcontainer container 437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369. 
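
The ipam trace above follows Calico's fixed allocation sequence: acquire the host-wide IPAM lock, confirm the host's affinity to the 192.168.88.128/26 block, claim the next free address, write the block back, and release the lock — yielding 192.168.88.129 here (the pods brought up later in this log receive .130 and .131 from the same block). A sketch of just the next-free-address step, assuming in-memory state (Calico's real allocator persists handles and block affinities in the datastore):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        used := map[netip.Addr]bool{} // addresses already handed out

        // next returns the lowest unused address in the block, starting
        // one past the block's base address (matching the .129-first
        // behaviour seen in the log).
        next := func() (netip.Addr, bool) {
            for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
                if !used[a] {
                    used[a] = true
                    return a, true
                }
            }
            return netip.Addr{}, false
        }

        for i := 0; i < 3; i++ {
            a, _ := next()
            fmt.Println(a) // 192.168.88.129, then .130, then .131
        }
    }
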
Aug 13 07:15:50.378562 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:15:50.404561 containerd[1473]: time="2025-08-13T07:15:50.404502372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-787f667446-wf97w,Uid:e2888387-5510-44ef-abbd-7c1261b0bf09,Namespace:calico-system,Attempt:0,} returns sandbox id \"437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369\"" Aug 13 07:15:50.406423 containerd[1473]: time="2025-08-13T07:15:50.406377964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:15:50.703669 kubelet[2504]: I0813 07:15:50.703244 2504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dac8f550-1b39-48eb-bc01-57837792a21e" path="/var/lib/kubelet/pods/dac8f550-1b39-48eb-bc01-57837792a21e/volumes" Aug 13 07:15:51.299147 systemd-networkd[1400]: vxlan.calico: Gained IPv6LL Aug 13 07:15:51.619113 systemd-networkd[1400]: cali651c8ea2ffb: Gained IPv6LL Aug 13 07:15:52.092176 systemd[1]: Started sshd@7-10.0.0.120:22-10.0.0.1:59584.service - OpenSSH per-connection server daemon (10.0.0.1:59584). Aug 13 07:15:52.136884 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 59584 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:15:52.138679 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:15:52.143236 systemd-logind[1449]: New session 8 of user core. Aug 13 07:15:52.151020 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:15:52.293102 sshd[4146]: pam_unix(sshd:session): session closed for user core Aug 13 07:15:52.297544 systemd[1]: sshd@7-10.0.0.120:22-10.0.0.1:59584.service: Deactivated successfully. Aug 13 07:15:52.300404 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:15:52.301303 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:15:52.302323 systemd-logind[1449]: Removed session 8. 
Aug 13 07:15:52.685437 containerd[1473]: time="2025-08-13T07:15:52.685370498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:52.686139 containerd[1473]: time="2025-08-13T07:15:52.686093725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:15:52.687151 containerd[1473]: time="2025-08-13T07:15:52.687118389Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:52.689358 containerd[1473]: time="2025-08-13T07:15:52.689317187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:52.690027 containerd[1473]: time="2025-08-13T07:15:52.689988297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.283569927s" Aug 13 07:15:52.690027 containerd[1473]: time="2025-08-13T07:15:52.690019816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:15:52.692224 containerd[1473]: time="2025-08-13T07:15:52.692184290Z" level=info msg="CreateContainer within sandbox \"437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:15:52.707349 containerd[1473]: time="2025-08-13T07:15:52.707294487Z" level=info msg="CreateContainer within sandbox \"437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b2ed29a26f8949ea6cc0925ee8d7e8eadf8acbc1dde20f1acede6a7dca015d8c\"" Aug 13 07:15:52.707821 containerd[1473]: time="2025-08-13T07:15:52.707780039Z" level=info msg="StartContainer for \"b2ed29a26f8949ea6cc0925ee8d7e8eadf8acbc1dde20f1acede6a7dca015d8c\"" Aug 13 07:15:52.750022 systemd[1]: Started cri-containerd-b2ed29a26f8949ea6cc0925ee8d7e8eadf8acbc1dde20f1acede6a7dca015d8c.scope - libcontainer container b2ed29a26f8949ea6cc0925ee8d7e8eadf8acbc1dde20f1acede6a7dca015d8c. Aug 13 07:15:52.789287 containerd[1473]: time="2025-08-13T07:15:52.789244530Z" level=info msg="StartContainer for \"b2ed29a26f8949ea6cc0925ee8d7e8eadf8acbc1dde20f1acede6a7dca015d8c\" returns successfully" Aug 13 07:15:52.790440 containerd[1473]: time="2025-08-13T07:15:52.790375994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:15:53.699033 containerd[1473]: time="2025-08-13T07:15:53.698961822Z" level=info msg="StopPodSandbox for \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\"" Aug 13 07:15:53.788022 containerd[1473]: 2025-08-13 07:15:53.746 [INFO][4213] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Aug 13 07:15:53.788022 containerd[1473]: 2025-08-13 07:15:53.746 [INFO][4213] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" iface="eth0" netns="/var/run/netns/cni-3b0cb5f4-cdda-6b2a-4676-c698453776b3" Aug 13 07:15:53.788022 containerd[1473]: 2025-08-13 07:15:53.746 [INFO][4213] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" iface="eth0" netns="/var/run/netns/cni-3b0cb5f4-cdda-6b2a-4676-c698453776b3" Aug 13 07:15:53.788022 containerd[1473]: 2025-08-13 07:15:53.746 [INFO][4213] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" iface="eth0" netns="/var/run/netns/cni-3b0cb5f4-cdda-6b2a-4676-c698453776b3" Aug 13 07:15:53.788022 containerd[1473]: 2025-08-13 07:15:53.746 [INFO][4213] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Aug 13 07:15:53.788022 containerd[1473]: 2025-08-13 07:15:53.746 [INFO][4213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Aug 13 07:15:53.788022 containerd[1473]: 2025-08-13 07:15:53.769 [INFO][4223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" HandleID="k8s-pod-network.31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Workload="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:15:53.788022 containerd[1473]: 2025-08-13 07:15:53.770 [INFO][4223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:53.788022 containerd[1473]: 2025-08-13 07:15:53.770 [INFO][4223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:53.788022 containerd[1473]: 2025-08-13 07:15:53.780 [WARNING][4223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" HandleID="k8s-pod-network.31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Workload="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:15:53.788022 containerd[1473]: 2025-08-13 07:15:53.780 [INFO][4223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" HandleID="k8s-pod-network.31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Workload="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:15:53.788022 containerd[1473]: 2025-08-13 07:15:53.781 [INFO][4223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:53.788022 containerd[1473]: 2025-08-13 07:15:53.784 [INFO][4213] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Aug 13 07:15:53.788698 containerd[1473]: time="2025-08-13T07:15:53.788236059Z" level=info msg="TearDown network for sandbox \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\" successfully" Aug 13 07:15:53.788698 containerd[1473]: time="2025-08-13T07:15:53.788280482Z" level=info msg="StopPodSandbox for \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\" returns successfully" Aug 13 07:15:53.788781 kubelet[2504]: E0813 07:15:53.788697 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:53.790691 containerd[1473]: time="2025-08-13T07:15:53.790651283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kkjwm,Uid:2f61af08-7750-47f4-b608-c1bb42e1730d,Namespace:kube-system,Attempt:1,}" Aug 13 07:15:53.792727 systemd[1]: run-netns-cni\x2d3b0cb5f4\x2dcdda\x2d6b2a\x2d4676\x2dc698453776b3.mount: Deactivated successfully. Aug 13 07:15:53.933989 systemd-networkd[1400]: cali58cadadb49f: Link UP Aug 13 07:15:53.934298 systemd-networkd[1400]: cali58cadadb49f: Gained carrier Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.867 [INFO][4231] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0 coredns-668d6bf9bc- kube-system 2f61af08-7750-47f4-b608-c1bb42e1730d 967 0 2025-08-13 07:15:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-kkjwm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali58cadadb49f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" Namespace="kube-system" Pod="coredns-668d6bf9bc-kkjwm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kkjwm-" Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.867 [INFO][4231] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" Namespace="kube-system" Pod="coredns-668d6bf9bc-kkjwm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.895 [INFO][4243] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" HandleID="k8s-pod-network.bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" Workload="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.895 [INFO][4243] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" HandleID="k8s-pod-network.bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" Workload="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e560), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-kkjwm", "timestamp":"2025-08-13 07:15:53.895043781 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.895 [INFO][4243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.895 [INFO][4243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.895 [INFO][4243] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.901 [INFO][4243] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" host="localhost" Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.905 [INFO][4243] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.909 [INFO][4243] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.911 [INFO][4243] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.912 [INFO][4243] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.913 [INFO][4243] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" host="localhost" Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.914 [INFO][4243] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76 Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.917 [INFO][4243] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" host="localhost" Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.923 [INFO][4243] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" host="localhost" Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.923 [INFO][4243] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" host="localhost" Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.923 [INFO][4243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:15:53.950654 containerd[1473]: 2025-08-13 07:15:53.923 [INFO][4243] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" HandleID="k8s-pod-network.bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" Workload="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:15:53.952394 containerd[1473]: 2025-08-13 07:15:53.927 [INFO][4231] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" Namespace="kube-system" Pod="coredns-668d6bf9bc-kkjwm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2f61af08-7750-47f4-b608-c1bb42e1730d", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-kkjwm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58cadadb49f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:53.952394 containerd[1473]: 2025-08-13 07:15:53.927 [INFO][4231] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" Namespace="kube-system" Pod="coredns-668d6bf9bc-kkjwm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:15:53.952394 containerd[1473]: 2025-08-13 07:15:53.927 [INFO][4231] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58cadadb49f ContainerID="bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" Namespace="kube-system" Pod="coredns-668d6bf9bc-kkjwm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:15:53.952394 containerd[1473]: 2025-08-13 07:15:53.932 [INFO][4231] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" Namespace="kube-system" Pod="coredns-668d6bf9bc-kkjwm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:15:53.952394 
containerd[1473]: 2025-08-13 07:15:53.933 [INFO][4231] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" Namespace="kube-system" Pod="coredns-668d6bf9bc-kkjwm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2f61af08-7750-47f4-b608-c1bb42e1730d", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76", Pod:"coredns-668d6bf9bc-kkjwm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58cadadb49f", MAC:"e2:0d:72:1b:2d:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:53.952394 containerd[1473]: 2025-08-13 07:15:53.945 [INFO][4231] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76" Namespace="kube-system" Pod="coredns-668d6bf9bc-kkjwm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:15:53.969983 containerd[1473]: time="2025-08-13T07:15:53.969669319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:53.969983 containerd[1473]: time="2025-08-13T07:15:53.969742676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:53.969983 containerd[1473]: time="2025-08-13T07:15:53.969773194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:53.969983 containerd[1473]: time="2025-08-13T07:15:53.969893250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:54.004021 systemd[1]: Started cri-containerd-bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76.scope - libcontainer container bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76. Aug 13 07:15:54.016014 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:15:54.043610 containerd[1473]: time="2025-08-13T07:15:54.043568547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kkjwm,Uid:2f61af08-7750-47f4-b608-c1bb42e1730d,Namespace:kube-system,Attempt:1,} returns sandbox id \"bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76\"" Aug 13 07:15:54.044485 kubelet[2504]: E0813 07:15:54.044458 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:54.046177 containerd[1473]: time="2025-08-13T07:15:54.046145384Z" level=info msg="CreateContainer within sandbox \"bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:15:54.066834 containerd[1473]: time="2025-08-13T07:15:54.066783603Z" level=info msg="CreateContainer within sandbox \"bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3239f1ce618a99c4bcbcb34c00a4fcb9069b494243b18a50bc8e54de4081b6cb\"" Aug 13 07:15:54.067378 containerd[1473]: time="2025-08-13T07:15:54.067335278Z" level=info msg="StartContainer for \"3239f1ce618a99c4bcbcb34c00a4fcb9069b494243b18a50bc8e54de4081b6cb\"" Aug 13 07:15:54.096013 systemd[1]: Started cri-containerd-3239f1ce618a99c4bcbcb34c00a4fcb9069b494243b18a50bc8e54de4081b6cb.scope - libcontainer container 3239f1ce618a99c4bcbcb34c00a4fcb9069b494243b18a50bc8e54de4081b6cb. 
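
The repeated kubelet "Nameserver limits exceeded" errors (dns.go:153) are a known truncation path: the node's resolv.conf lists more nameservers than the three the Linux resolver supports, so kubelet applies the first three (1.1.1.1 1.0.0.1 8.8.8.8) and reports the rest as omitted. A minimal re-check of that condition in Go, with the file path and limit assumed:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const maxNameservers = 3 // classic glibc resolver limit enforced by kubelet

        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("limit exceeded: applying %v, omitting %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        }
    }
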
Aug 13 07:15:54.125440 containerd[1473]: time="2025-08-13T07:15:54.125402194Z" level=info msg="StartContainer for \"3239f1ce618a99c4bcbcb34c00a4fcb9069b494243b18a50bc8e54de4081b6cb\" returns successfully" Aug 13 07:15:54.704873 containerd[1473]: time="2025-08-13T07:15:54.704807699Z" level=info msg="StopPodSandbox for \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\"" Aug 13 07:15:54.705489 containerd[1473]: time="2025-08-13T07:15:54.704838217Z" level=info msg="StopPodSandbox for \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\"" Aug 13 07:15:54.707437 containerd[1473]: time="2025-08-13T07:15:54.704875807Z" level=info msg="StopPodSandbox for \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\"" Aug 13 07:15:54.724732 containerd[1473]: time="2025-08-13T07:15:54.704914510Z" level=info msg="StopPodSandbox for \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\"" Aug 13 07:15:54.934390 kubelet[2504]: E0813 07:15:54.934350 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:55.018140 kubelet[2504]: I0813 07:15:55.016886 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kkjwm" podStartSLOduration=38.016762634 podStartE2EDuration="38.016762634s" podCreationTimestamp="2025-08-13 07:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:15:55.015840083 +0000 UTC m=+44.409931545" watchObservedRunningTime="2025-08-13 07:15:55.016762634 +0000 UTC m=+44.410854076" Aug 13 07:15:55.042856 containerd[1473]: 2025-08-13 07:15:54.808 [INFO][4380] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Aug 13 07:15:55.042856 containerd[1473]: 2025-08-13 07:15:54.809 [INFO][4380] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" iface="eth0" netns="/var/run/netns/cni-aede839b-09a7-f38f-fc7b-90db1b6c203e" Aug 13 07:15:55.042856 containerd[1473]: 2025-08-13 07:15:54.809 [INFO][4380] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" iface="eth0" netns="/var/run/netns/cni-aede839b-09a7-f38f-fc7b-90db1b6c203e" Aug 13 07:15:55.042856 containerd[1473]: 2025-08-13 07:15:54.809 [INFO][4380] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" iface="eth0" netns="/var/run/netns/cni-aede839b-09a7-f38f-fc7b-90db1b6c203e" Aug 13 07:15:55.042856 containerd[1473]: 2025-08-13 07:15:54.809 [INFO][4380] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Aug 13 07:15:55.042856 containerd[1473]: 2025-08-13 07:15:54.809 [INFO][4380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Aug 13 07:15:55.042856 containerd[1473]: 2025-08-13 07:15:54.832 [INFO][4419] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" HandleID="k8s-pod-network.4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Workload="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:15:55.042856 containerd[1473]: 2025-08-13 07:15:54.832 [INFO][4419] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:55.042856 containerd[1473]: 2025-08-13 07:15:54.832 [INFO][4419] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:55.042856 containerd[1473]: 2025-08-13 07:15:55.005 [WARNING][4419] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" HandleID="k8s-pod-network.4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Workload="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:15:55.042856 containerd[1473]: 2025-08-13 07:15:55.005 [INFO][4419] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" HandleID="k8s-pod-network.4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Workload="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:15:55.042856 containerd[1473]: 2025-08-13 07:15:55.009 [INFO][4419] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:55.042856 containerd[1473]: 2025-08-13 07:15:55.031 [INFO][4380] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Aug 13 07:15:55.044268 containerd[1473]: time="2025-08-13T07:15:55.044129369Z" level=info msg="TearDown network for sandbox \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\" successfully" Aug 13 07:15:55.044268 containerd[1473]: time="2025-08-13T07:15:55.044159355Z" level=info msg="StopPodSandbox for \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\" returns successfully" Aug 13 07:15:55.046549 systemd[1]: run-netns-cni\x2daede839b\x2d09a7\x2df38f\x2dfc7b\x2d90db1b6c203e.mount: Deactivated successfully. Aug 13 07:15:55.048232 containerd[1473]: time="2025-08-13T07:15:55.046726834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k8p5g,Uid:ff96eac2-a650-4688-baf4-c624d8dfca9d,Namespace:calico-system,Attempt:1,}" Aug 13 07:15:55.094569 containerd[1473]: 2025-08-13 07:15:55.001 [INFO][4382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Aug 13 07:15:55.094569 containerd[1473]: 2025-08-13 07:15:55.001 [INFO][4382] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" iface="eth0" netns="/var/run/netns/cni-a244f2b7-5660-3238-2144-79170e1e852b" Aug 13 07:15:55.094569 containerd[1473]: 2025-08-13 07:15:55.005 [INFO][4382] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" iface="eth0" netns="/var/run/netns/cni-a244f2b7-5660-3238-2144-79170e1e852b" Aug 13 07:15:55.094569 containerd[1473]: 2025-08-13 07:15:55.006 [INFO][4382] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" iface="eth0" netns="/var/run/netns/cni-a244f2b7-5660-3238-2144-79170e1e852b" Aug 13 07:15:55.094569 containerd[1473]: 2025-08-13 07:15:55.010 [INFO][4382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Aug 13 07:15:55.094569 containerd[1473]: 2025-08-13 07:15:55.010 [INFO][4382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Aug 13 07:15:55.094569 containerd[1473]: 2025-08-13 07:15:55.062 [INFO][4436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" HandleID="k8s-pod-network.0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Workload="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:15:55.094569 containerd[1473]: 2025-08-13 07:15:55.063 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:55.094569 containerd[1473]: 2025-08-13 07:15:55.063 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:55.094569 containerd[1473]: 2025-08-13 07:15:55.070 [WARNING][4436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" HandleID="k8s-pod-network.0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Workload="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:15:55.094569 containerd[1473]: 2025-08-13 07:15:55.070 [INFO][4436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" HandleID="k8s-pod-network.0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Workload="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:15:55.094569 containerd[1473]: 2025-08-13 07:15:55.074 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:55.094569 containerd[1473]: 2025-08-13 07:15:55.089 [INFO][4382] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Aug 13 07:15:55.096129 containerd[1473]: time="2025-08-13T07:15:55.096058766Z" level=info msg="TearDown network for sandbox \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\" successfully" Aug 13 07:15:55.096129 containerd[1473]: time="2025-08-13T07:15:55.096105433Z" level=info msg="StopPodSandbox for \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\" returns successfully" Aug 13 07:15:55.097298 containerd[1473]: time="2025-08-13T07:15:55.097252445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc76445cf-6gxtp,Uid:f71c796a-2b24-4955-a685-11764bd3ee81,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:15:55.110082 containerd[1473]: 2025-08-13 07:15:54.998 [INFO][4393] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Aug 13 07:15:55.110082 containerd[1473]: 2025-08-13 07:15:54.999 [INFO][4393] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" iface="eth0" netns="/var/run/netns/cni-60030e14-512e-3e34-7305-55c592eb2103" Aug 13 07:15:55.110082 containerd[1473]: 2025-08-13 07:15:55.004 [INFO][4393] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" iface="eth0" netns="/var/run/netns/cni-60030e14-512e-3e34-7305-55c592eb2103" Aug 13 07:15:55.110082 containerd[1473]: 2025-08-13 07:15:55.005 [INFO][4393] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" iface="eth0" netns="/var/run/netns/cni-60030e14-512e-3e34-7305-55c592eb2103" Aug 13 07:15:55.110082 containerd[1473]: 2025-08-13 07:15:55.005 [INFO][4393] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Aug 13 07:15:55.110082 containerd[1473]: 2025-08-13 07:15:55.005 [INFO][4393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Aug 13 07:15:55.110082 containerd[1473]: 2025-08-13 07:15:55.083 [INFO][4431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" HandleID="k8s-pod-network.6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Workload="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:15:55.110082 containerd[1473]: 2025-08-13 07:15:55.083 [INFO][4431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:55.110082 containerd[1473]: 2025-08-13 07:15:55.083 [INFO][4431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:55.110082 containerd[1473]: 2025-08-13 07:15:55.090 [WARNING][4431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" HandleID="k8s-pod-network.6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Workload="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:15:55.110082 containerd[1473]: 2025-08-13 07:15:55.090 [INFO][4431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" HandleID="k8s-pod-network.6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Workload="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:15:55.110082 containerd[1473]: 2025-08-13 07:15:55.093 [INFO][4431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:55.110082 containerd[1473]: 2025-08-13 07:15:55.102 [INFO][4393] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Aug 13 07:15:55.110645 containerd[1473]: time="2025-08-13T07:15:55.110597453Z" level=info msg="TearDown network for sandbox \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\" successfully" Aug 13 07:15:55.110645 containerd[1473]: time="2025-08-13T07:15:55.110633902Z" level=info msg="StopPodSandbox for \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\" returns successfully" Aug 13 07:15:55.111478 containerd[1473]: time="2025-08-13T07:15:55.111452548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc76445cf-h7vjm,Uid:6371703e-a0f7-43f3-a612-88598f32a9f9,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:15:55.117676 containerd[1473]: 2025-08-13 07:15:55.001 [INFO][4394] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Aug 13 07:15:55.117676 containerd[1473]: 2025-08-13 07:15:55.002 [INFO][4394] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" iface="eth0" netns="/var/run/netns/cni-99dba3ea-ce0d-9fcc-61f9-b67707c9cd6c" Aug 13 07:15:55.117676 containerd[1473]: 2025-08-13 07:15:55.003 [INFO][4394] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" iface="eth0" netns="/var/run/netns/cni-99dba3ea-ce0d-9fcc-61f9-b67707c9cd6c" Aug 13 07:15:55.117676 containerd[1473]: 2025-08-13 07:15:55.003 [INFO][4394] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" iface="eth0" netns="/var/run/netns/cni-99dba3ea-ce0d-9fcc-61f9-b67707c9cd6c" Aug 13 07:15:55.117676 containerd[1473]: 2025-08-13 07:15:55.003 [INFO][4394] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Aug 13 07:15:55.117676 containerd[1473]: 2025-08-13 07:15:55.003 [INFO][4394] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Aug 13 07:15:55.117676 containerd[1473]: 2025-08-13 07:15:55.090 [INFO][4429] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" HandleID="k8s-pod-network.354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Workload="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:15:55.117676 containerd[1473]: 2025-08-13 07:15:55.090 [INFO][4429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:55.117676 containerd[1473]: 2025-08-13 07:15:55.095 [INFO][4429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:55.117676 containerd[1473]: 2025-08-13 07:15:55.104 [WARNING][4429] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" HandleID="k8s-pod-network.354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Workload="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:15:55.117676 containerd[1473]: 2025-08-13 07:15:55.104 [INFO][4429] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" HandleID="k8s-pod-network.354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Workload="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:15:55.117676 containerd[1473]: 2025-08-13 07:15:55.106 [INFO][4429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:55.117676 containerd[1473]: 2025-08-13 07:15:55.113 [INFO][4394] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Aug 13 07:15:55.119840 containerd[1473]: time="2025-08-13T07:15:55.118391141Z" level=info msg="TearDown network for sandbox \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\" successfully" Aug 13 07:15:55.119840 containerd[1473]: time="2025-08-13T07:15:55.118496690Z" level=info msg="StopPodSandbox for \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\" returns successfully" Aug 13 07:15:55.119840 containerd[1473]: time="2025-08-13T07:15:55.119253880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d5b45b8d4-f96tt,Uid:5c8ac06a-59ac-4ad7-851b-39e9a256e71f,Namespace:calico-system,Attempt:1,}" Aug 13 07:15:55.203416 systemd-networkd[1400]: cali58cadadb49f: Gained IPv6LL Aug 13 07:15:55.230447 systemd-networkd[1400]: cali6fcb0921607: Link UP Aug 13 07:15:55.231660 systemd-networkd[1400]: cali6fcb0921607: Gained carrier Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.130 [INFO][4460] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--k8p5g-eth0 csi-node-driver- calico-system ff96eac2-a650-4688-baf4-c624d8dfca9d 986 0 2025-08-13 07:15:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-k8p5g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6fcb0921607 [] [] }} ContainerID="994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" Namespace="calico-system" Pod="csi-node-driver-k8p5g" WorkloadEndpoint="localhost-k8s-csi--node--driver--k8p5g-" Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.130 [INFO][4460] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" Namespace="calico-system" Pod="csi-node-driver-k8p5g" WorkloadEndpoint="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.179 [INFO][4494] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" HandleID="k8s-pod-network.994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" Workload="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.181 [INFO][4494] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" HandleID="k8s-pod-network.994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" Workload="localhost-k8s-csi--node--driver--k8p5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ee00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-k8p5g", "timestamp":"2025-08-13 07:15:55.177481628 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.181 [INFO][4494] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.181 [INFO][4494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.181 [INFO][4494] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.190 [INFO][4494] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" host="localhost" Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.194 [INFO][4494] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.199 [INFO][4494] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.202 [INFO][4494] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.204 [INFO][4494] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.204 [INFO][4494] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" host="localhost" Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.208 [INFO][4494] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.213 [INFO][4494] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" host="localhost" Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.219 [INFO][4494] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" host="localhost" Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.219 [INFO][4494] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" host="localhost" Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.220 [INFO][4494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:15:55.249778 containerd[1473]: 2025-08-13 07:15:55.220 [INFO][4494] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" HandleID="k8s-pod-network.994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" Workload="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:15:55.250367 containerd[1473]: 2025-08-13 07:15:55.225 [INFO][4460] cni-plugin/k8s.go 418: Populated endpoint ContainerID="994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" Namespace="calico-system" Pod="csi-node-driver-k8p5g" WorkloadEndpoint="localhost-k8s-csi--node--driver--k8p5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k8p5g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ff96eac2-a650-4688-baf4-c624d8dfca9d", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-k8p5g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6fcb0921607", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:55.250367 containerd[1473]: 2025-08-13 07:15:55.225 [INFO][4460] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" Namespace="calico-system" Pod="csi-node-driver-k8p5g" WorkloadEndpoint="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:15:55.250367 containerd[1473]: 2025-08-13 07:15:55.225 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fcb0921607 ContainerID="994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" Namespace="calico-system" Pod="csi-node-driver-k8p5g" WorkloadEndpoint="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:15:55.250367 containerd[1473]: 2025-08-13 07:15:55.233 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" Namespace="calico-system" Pod="csi-node-driver-k8p5g" WorkloadEndpoint="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:15:55.250367 containerd[1473]: 2025-08-13 07:15:55.234 [INFO][4460] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" Namespace="calico-system" Pod="csi-node-driver-k8p5g" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--k8p5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k8p5g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ff96eac2-a650-4688-baf4-c624d8dfca9d", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a", Pod:"csi-node-driver-k8p5g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6fcb0921607", MAC:"b6:79:8d:25:8a:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:55.250367 containerd[1473]: 2025-08-13 07:15:55.245 [INFO][4460] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a" Namespace="calico-system" Pod="csi-node-driver-k8p5g" WorkloadEndpoint="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:15:55.277578 containerd[1473]: time="2025-08-13T07:15:55.277271204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:55.277578 containerd[1473]: time="2025-08-13T07:15:55.277340073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:55.277578 containerd[1473]: time="2025-08-13T07:15:55.277354650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:55.277578 containerd[1473]: time="2025-08-13T07:15:55.277461622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:55.305269 systemd[1]: Started cri-containerd-994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a.scope - libcontainer container 994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a. 
Aug 13 07:15:55.341352 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:15:55.343130 systemd-networkd[1400]: calib2e2882a5aa: Link UP Aug 13 07:15:55.344454 systemd-networkd[1400]: calib2e2882a5aa: Gained carrier Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.165 [INFO][4481] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0 calico-apiserver-7bc76445cf- calico-apiserver f71c796a-2b24-4955-a685-11764bd3ee81 988 0 2025-08-13 07:15:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bc76445cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bc76445cf-6gxtp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib2e2882a5aa [] [] }} ContainerID="e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-6gxtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-" Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.165 [INFO][4481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-6gxtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.222 [INFO][4530] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" HandleID="k8s-pod-network.e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" Workload="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.223 [INFO][4530] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" HandleID="k8s-pod-network.e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" Workload="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7bc76445cf-6gxtp", "timestamp":"2025-08-13 07:15:55.221993122 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.223 [INFO][4530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.223 [INFO][4530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.223 [INFO][4530] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.293 [INFO][4530] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" host="localhost" Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.299 [INFO][4530] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.306 [INFO][4530] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.308 [INFO][4530] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.311 [INFO][4530] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.311 [INFO][4530] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" host="localhost" Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.312 [INFO][4530] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3 Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.317 [INFO][4530] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" host="localhost" Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.327 [INFO][4530] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" host="localhost" Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.327 [INFO][4530] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" host="localhost" Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.327 [INFO][4530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:15:55.359561 containerd[1473]: 2025-08-13 07:15:55.327 [INFO][4530] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" HandleID="k8s-pod-network.e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" Workload="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:15:55.360295 containerd[1473]: 2025-08-13 07:15:55.338 [INFO][4481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-6gxtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0", GenerateName:"calico-apiserver-7bc76445cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"f71c796a-2b24-4955-a685-11764bd3ee81", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc76445cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bc76445cf-6gxtp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2e2882a5aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:55.360295 containerd[1473]: 2025-08-13 07:15:55.338 [INFO][4481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-6gxtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:15:55.360295 containerd[1473]: 2025-08-13 07:15:55.338 [INFO][4481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2e2882a5aa ContainerID="e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-6gxtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:15:55.360295 containerd[1473]: 2025-08-13 07:15:55.343 [INFO][4481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-6gxtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:15:55.360295 containerd[1473]: 2025-08-13 07:15:55.344 [INFO][4481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-6gxtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0", GenerateName:"calico-apiserver-7bc76445cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"f71c796a-2b24-4955-a685-11764bd3ee81", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc76445cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3", Pod:"calico-apiserver-7bc76445cf-6gxtp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2e2882a5aa", MAC:"e6:1a:6d:79:4e:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:55.360295 containerd[1473]: 2025-08-13 07:15:55.355 [INFO][4481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-6gxtp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:15:55.361411 containerd[1473]: time="2025-08-13T07:15:55.361365781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k8p5g,Uid:ff96eac2-a650-4688-baf4-c624d8dfca9d,Namespace:calico-system,Attempt:1,} returns sandbox id \"994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a\"" Aug 13 07:15:55.437135 containerd[1473]: time="2025-08-13T07:15:55.436989750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:55.437135 containerd[1473]: time="2025-08-13T07:15:55.437086852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:55.437135 containerd[1473]: time="2025-08-13T07:15:55.437101741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:55.438136 containerd[1473]: time="2025-08-13T07:15:55.438015837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:55.450003 systemd-networkd[1400]: cali9d94a92aeb6: Link UP Aug 13 07:15:55.451501 systemd-networkd[1400]: cali9d94a92aeb6: Gained carrier Aug 13 07:15:55.468128 systemd[1]: Started cri-containerd-e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3.scope - libcontainer container e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3. Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.185 [INFO][4501] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0 calico-apiserver-7bc76445cf- calico-apiserver 6371703e-a0f7-43f3-a612-88598f32a9f9 987 0 2025-08-13 07:15:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bc76445cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bc76445cf-h7vjm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9d94a92aeb6 [] [] }} ContainerID="ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-h7vjm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-" Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.185 [INFO][4501] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-h7vjm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.264 [INFO][4541] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" HandleID="k8s-pod-network.ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" Workload="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.264 [INFO][4541] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" HandleID="k8s-pod-network.ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" Workload="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e7e00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7bc76445cf-h7vjm", "timestamp":"2025-08-13 07:15:55.264141221 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.264 [INFO][4541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.327 [INFO][4541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.328 [INFO][4541] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.392 [INFO][4541] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" host="localhost" Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.401 [INFO][4541] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.405 [INFO][4541] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.415 [INFO][4541] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.420 [INFO][4541] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.420 [INFO][4541] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" host="localhost" Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.422 [INFO][4541] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.426 [INFO][4541] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" host="localhost" Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.434 [INFO][4541] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" host="localhost" Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.434 [INFO][4541] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" host="localhost" Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.434 [INFO][4541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:15:55.476062 containerd[1473]: 2025-08-13 07:15:55.434 [INFO][4541] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" HandleID="k8s-pod-network.ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" Workload="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:15:55.476652 containerd[1473]: 2025-08-13 07:15:55.442 [INFO][4501] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-h7vjm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0", GenerateName:"calico-apiserver-7bc76445cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6371703e-a0f7-43f3-a612-88598f32a9f9", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc76445cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bc76445cf-h7vjm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d94a92aeb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:55.476652 containerd[1473]: 2025-08-13 07:15:55.442 [INFO][4501] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-h7vjm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:15:55.476652 containerd[1473]: 2025-08-13 07:15:55.442 [INFO][4501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d94a92aeb6 ContainerID="ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-h7vjm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:15:55.476652 containerd[1473]: 2025-08-13 07:15:55.450 [INFO][4501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-h7vjm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:15:55.476652 containerd[1473]: 2025-08-13 07:15:55.454 [INFO][4501] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-h7vjm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0", GenerateName:"calico-apiserver-7bc76445cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6371703e-a0f7-43f3-a612-88598f32a9f9", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc76445cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d", Pod:"calico-apiserver-7bc76445cf-h7vjm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d94a92aeb6", MAC:"6e:64:a6:c1:6a:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:55.476652 containerd[1473]: 2025-08-13 07:15:55.468 [INFO][4501] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d" Namespace="calico-apiserver" Pod="calico-apiserver-7bc76445cf-h7vjm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:15:55.489553 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:15:55.518345 containerd[1473]: time="2025-08-13T07:15:55.518281758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc76445cf-6gxtp,Uid:f71c796a-2b24-4955-a685-11764bd3ee81,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3\"" Aug 13 07:15:55.569650 systemd-networkd[1400]: cali7cf31b1fb23: Link UP Aug 13 07:15:55.570032 containerd[1473]: time="2025-08-13T07:15:55.569756421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:55.570364 containerd[1473]: time="2025-08-13T07:15:55.569828526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:55.570364 containerd[1473]: time="2025-08-13T07:15:55.570275144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:55.572879 containerd[1473]: time="2025-08-13T07:15:55.570396773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:55.571360 systemd-networkd[1400]: cali7cf31b1fb23: Gained carrier Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.219 [INFO][4514] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0 calico-kube-controllers-d5b45b8d4- calico-system 5c8ac06a-59ac-4ad7-851b-39e9a256e71f 989 0 2025-08-13 07:15:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d5b45b8d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-d5b45b8d4-f96tt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7cf31b1fb23 [] [] }} ContainerID="012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" Namespace="calico-system" Pod="calico-kube-controllers-d5b45b8d4-f96tt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-" Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.220 [INFO][4514] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" Namespace="calico-system" Pod="calico-kube-controllers-d5b45b8d4-f96tt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.275 [INFO][4555] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" HandleID="k8s-pod-network.012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" Workload="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.276 [INFO][4555] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" HandleID="k8s-pod-network.012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" Workload="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ecfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-d5b45b8d4-f96tt", "timestamp":"2025-08-13 07:15:55.275089869 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.276 [INFO][4555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.434 [INFO][4555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.434 [INFO][4555] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.493 [INFO][4555] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" host="localhost" Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.500 [INFO][4555] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.506 [INFO][4555] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.509 [INFO][4555] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.512 [INFO][4555] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.512 [INFO][4555] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" host="localhost" Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.515 [INFO][4555] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8 Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.550 [INFO][4555] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" host="localhost" Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.558 [INFO][4555] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" host="localhost" Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.558 [INFO][4555] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" host="localhost" Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.558 [INFO][4555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:15:55.598276 containerd[1473]: 2025-08-13 07:15:55.558 [INFO][4555] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" HandleID="k8s-pod-network.012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" Workload="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:15:55.599316 containerd[1473]: 2025-08-13 07:15:55.564 [INFO][4514] cni-plugin/k8s.go 418: Populated endpoint ContainerID="012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" Namespace="calico-system" Pod="calico-kube-controllers-d5b45b8d4-f96tt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0", GenerateName:"calico-kube-controllers-d5b45b8d4-", Namespace:"calico-system", SelfLink:"", UID:"5c8ac06a-59ac-4ad7-851b-39e9a256e71f", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d5b45b8d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-d5b45b8d4-f96tt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7cf31b1fb23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:55.599316 containerd[1473]: 2025-08-13 07:15:55.564 [INFO][4514] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" Namespace="calico-system" Pod="calico-kube-controllers-d5b45b8d4-f96tt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:15:55.599316 containerd[1473]: 2025-08-13 07:15:55.565 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cf31b1fb23 ContainerID="012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" Namespace="calico-system" Pod="calico-kube-controllers-d5b45b8d4-f96tt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:15:55.599316 containerd[1473]: 2025-08-13 07:15:55.575 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" Namespace="calico-system" Pod="calico-kube-controllers-d5b45b8d4-f96tt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:15:55.599316 containerd[1473]: 2025-08-13 07:15:55.578 [INFO][4514] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" Namespace="calico-system" Pod="calico-kube-controllers-d5b45b8d4-f96tt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0", GenerateName:"calico-kube-controllers-d5b45b8d4-", Namespace:"calico-system", SelfLink:"", UID:"5c8ac06a-59ac-4ad7-851b-39e9a256e71f", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d5b45b8d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8", Pod:"calico-kube-controllers-d5b45b8d4-f96tt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7cf31b1fb23", MAC:"ce:6d:a6:15:e2:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:55.599316 containerd[1473]: 2025-08-13 07:15:55.592 [INFO][4514] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8" Namespace="calico-system" Pod="calico-kube-controllers-d5b45b8d4-f96tt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:15:55.600027 systemd[1]: Started cri-containerd-ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d.scope - libcontainer container ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d. Aug 13 07:15:55.618212 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:15:55.627380 containerd[1473]: time="2025-08-13T07:15:55.627236004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:55.627380 containerd[1473]: time="2025-08-13T07:15:55.627297941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:55.627380 containerd[1473]: time="2025-08-13T07:15:55.627327987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:55.627593 containerd[1473]: time="2025-08-13T07:15:55.627448624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:55.649100 systemd[1]: Started cri-containerd-012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8.scope - libcontainer container 012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8. Aug 13 07:15:55.654663 containerd[1473]: time="2025-08-13T07:15:55.654622717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bc76445cf-h7vjm,Uid:6371703e-a0f7-43f3-a612-88598f32a9f9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d\"" Aug 13 07:15:55.668650 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:15:55.698148 containerd[1473]: time="2025-08-13T07:15:55.698107564Z" level=info msg="StopPodSandbox for \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\"" Aug 13 07:15:55.699530 containerd[1473]: time="2025-08-13T07:15:55.699502452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d5b45b8d4-f96tt,Uid:5c8ac06a-59ac-4ad7-851b-39e9a256e71f,Namespace:calico-system,Attempt:1,} returns sandbox id \"012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8\"" Aug 13 07:15:55.784004 containerd[1473]: 2025-08-13 07:15:55.748 [INFO][4770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Aug 13 07:15:55.784004 containerd[1473]: 2025-08-13 07:15:55.748 [INFO][4770] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" iface="eth0" netns="/var/run/netns/cni-528aec9c-ea5d-f8bc-8e9b-ef5c041d67c6" Aug 13 07:15:55.784004 containerd[1473]: 2025-08-13 07:15:55.749 [INFO][4770] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" iface="eth0" netns="/var/run/netns/cni-528aec9c-ea5d-f8bc-8e9b-ef5c041d67c6" Aug 13 07:15:55.784004 containerd[1473]: 2025-08-13 07:15:55.749 [INFO][4770] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" iface="eth0" netns="/var/run/netns/cni-528aec9c-ea5d-f8bc-8e9b-ef5c041d67c6" Aug 13 07:15:55.784004 containerd[1473]: 2025-08-13 07:15:55.749 [INFO][4770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Aug 13 07:15:55.784004 containerd[1473]: 2025-08-13 07:15:55.749 [INFO][4770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Aug 13 07:15:55.784004 containerd[1473]: 2025-08-13 07:15:55.770 [INFO][4779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" HandleID="k8s-pod-network.d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Workload="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:15:55.784004 containerd[1473]: 2025-08-13 07:15:55.770 [INFO][4779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:55.784004 containerd[1473]: 2025-08-13 07:15:55.770 [INFO][4779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:15:55.784004 containerd[1473]: 2025-08-13 07:15:55.775 [WARNING][4779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" HandleID="k8s-pod-network.d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Workload="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:15:55.784004 containerd[1473]: 2025-08-13 07:15:55.775 [INFO][4779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" HandleID="k8s-pod-network.d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Workload="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:15:55.784004 containerd[1473]: 2025-08-13 07:15:55.777 [INFO][4779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:55.784004 containerd[1473]: 2025-08-13 07:15:55.781 [INFO][4770] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Aug 13 07:15:55.819686 containerd[1473]: time="2025-08-13T07:15:55.784263019Z" level=info msg="TearDown network for sandbox \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\" successfully" Aug 13 07:15:55.819686 containerd[1473]: time="2025-08-13T07:15:55.784301591Z" level=info msg="StopPodSandbox for \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\" returns successfully" Aug 13 07:15:55.819686 containerd[1473]: time="2025-08-13T07:15:55.785223241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bmkx,Uid:0f44421e-e215-4efb-b425-a905d3215525,Namespace:kube-system,Attempt:1,}" Aug 13 07:15:55.797699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1798055177.mount: Deactivated successfully. Aug 13 07:15:55.819975 kubelet[2504]: E0813 07:15:55.784683 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:55.797822 systemd[1]: run-netns-cni\x2da244f2b7\x2d5660\x2d3238\x2d2144\x2d79170e1e852b.mount: Deactivated successfully. Aug 13 07:15:55.797923 systemd[1]: run-netns-cni\x2d528aec9c\x2dea5d\x2df8bc\x2d8e9b\x2def5c041d67c6.mount: Deactivated successfully. Aug 13 07:15:55.797996 systemd[1]: run-netns-cni\x2d60030e14\x2d512e\x2d3e34\x2d7305\x2d55c592eb2103.mount: Deactivated successfully. Aug 13 07:15:55.798068 systemd[1]: run-netns-cni\x2d99dba3ea\x2dce0d\x2d9fcc\x2d61f9\x2db67707c9cd6c.mount: Deactivated successfully. 
Aug 13 07:15:55.850081 containerd[1473]: time="2025-08-13T07:15:55.850038143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:55.850760 containerd[1473]: time="2025-08-13T07:15:55.850710325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:15:55.856946 containerd[1473]: time="2025-08-13T07:15:55.856918016Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:55.859503 containerd[1473]: time="2025-08-13T07:15:55.859322819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:55.860025 containerd[1473]: time="2025-08-13T07:15:55.859984582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.069566058s" Aug 13 07:15:55.860077 containerd[1473]: time="2025-08-13T07:15:55.860025809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:15:55.861301 containerd[1473]: time="2025-08-13T07:15:55.861078194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:15:55.863521 containerd[1473]: time="2025-08-13T07:15:55.863478569Z" level=info msg="CreateContainer within sandbox \"437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:15:55.878928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1054854054.mount: Deactivated successfully. Aug 13 07:15:55.884059 containerd[1473]: time="2025-08-13T07:15:55.884019913Z" level=info msg="CreateContainer within sandbox \"437ec73ea2bc730344a386be11cc79bbaf31621b6796be8241b42a3b5d079369\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e586a7a9326b37c16a7cd243deb409110e9a4d8219202314f8080a2ebbf7b524\"" Aug 13 07:15:55.885353 containerd[1473]: time="2025-08-13T07:15:55.885307109Z" level=info msg="StartContainer for \"e586a7a9326b37c16a7cd243deb409110e9a4d8219202314f8080a2ebbf7b524\"" Aug 13 07:15:55.918050 systemd[1]: Started cri-containerd-e586a7a9326b37c16a7cd243deb409110e9a4d8219202314f8080a2ebbf7b524.scope - libcontainer container e586a7a9326b37c16a7cd243deb409110e9a4d8219202314f8080a2ebbf7b524. 
Aug 13 07:15:55.944940 kubelet[2504]: E0813 07:15:55.944519 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:55.956110 systemd-networkd[1400]: calif0757c609ce: Link UP Aug 13 07:15:55.957598 systemd-networkd[1400]: calif0757c609ce: Gained carrier Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.873 [INFO][4789] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0 coredns-668d6bf9bc- kube-system 0f44421e-e215-4efb-b425-a905d3215525 1020 0 2025-08-13 07:15:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-8bmkx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif0757c609ce [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bmkx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bmkx-" Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.873 [INFO][4789] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bmkx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.908 [INFO][4805] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" HandleID="k8s-pod-network.7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" Workload="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.908 [INFO][4805] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" HandleID="k8s-pod-network.7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" Workload="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-8bmkx", "timestamp":"2025-08-13 07:15:55.908346622 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.908 [INFO][4805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.908 [INFO][4805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.908 [INFO][4805] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.918 [INFO][4805] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" host="localhost" Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.923 [INFO][4805] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.927 [INFO][4805] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.929 [INFO][4805] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.931 [INFO][4805] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.931 [INFO][4805] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" host="localhost" Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.932 [INFO][4805] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.938 [INFO][4805] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" host="localhost" Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.946 [INFO][4805] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" host="localhost" Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.946 [INFO][4805] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" host="localhost" Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.946 [INFO][4805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
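The IPAM trace above is the whole allocation path in miniature: take the host-wide lock, look up this host's block affinity, load the affine block 192.168.88.128/26, assign the next free address (192.168.88.135 here; the goldmane pod later in this log receives .136), write the block back, and release the lock. A /26 spans 64 addresses, 192.168.88.128 through 192.168.88.191, so one affine block covers up to 64 pod IPs on this node before Calico must claim another. A small illustrative Go sketch of that range arithmetic, using only the standard library:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The block Calico's IPAM loaded for host "localhost" in the trace above.
        block := netip.MustParsePrefix("192.168.88.128/26")

        first := block.Addr()
        last := first
        count := 0
        for a := first; block.Contains(a); a = a.Next() {
            last = a
            count++
        }
        // Prints: 192.168.88.128 - 192.168.88.191 (64 addresses)
        fmt.Printf("%s - %s (%d addresses)\n", first, last, count)
    }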
Aug 13 07:15:55.975531 containerd[1473]: 2025-08-13 07:15:55.946 [INFO][4805] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" HandleID="k8s-pod-network.7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" Workload="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:15:55.976264 containerd[1473]: 2025-08-13 07:15:55.952 [INFO][4789] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bmkx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0f44421e-e215-4efb-b425-a905d3215525", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-8bmkx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0757c609ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:55.976264 containerd[1473]: 2025-08-13 07:15:55.952 [INFO][4789] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bmkx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:15:55.976264 containerd[1473]: 2025-08-13 07:15:55.952 [INFO][4789] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0757c609ce ContainerID="7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bmkx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:15:55.976264 containerd[1473]: 2025-08-13 07:15:55.957 [INFO][4789] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bmkx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:15:55.976264 
containerd[1473]: 2025-08-13 07:15:55.957 [INFO][4789] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bmkx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0f44421e-e215-4efb-b425-a905d3215525", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa", Pod:"coredns-668d6bf9bc-8bmkx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0757c609ce", MAC:"9e:c4:8e:58:e4:47", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:55.976264 containerd[1473]: 2025-08-13 07:15:55.969 [INFO][4789] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bmkx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:15:55.978198 containerd[1473]: time="2025-08-13T07:15:55.978025503Z" level=info msg="StartContainer for \"e586a7a9326b37c16a7cd243deb409110e9a4d8219202314f8080a2ebbf7b524\" returns successfully" Aug 13 07:15:55.998915 containerd[1473]: time="2025-08-13T07:15:55.998763215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:55.998915 containerd[1473]: time="2025-08-13T07:15:55.998854007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:55.998915 containerd[1473]: time="2025-08-13T07:15:55.998883702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:55.999281 containerd[1473]: time="2025-08-13T07:15:55.999145002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:56.022446 systemd[1]: Started cri-containerd-7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa.scope - libcontainer container 7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa. Aug 13 07:15:56.036282 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:15:56.062126 containerd[1473]: time="2025-08-13T07:15:56.062088693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bmkx,Uid:0f44421e-e215-4efb-b425-a905d3215525,Namespace:kube-system,Attempt:1,} returns sandbox id \"7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa\"" Aug 13 07:15:56.062944 kubelet[2504]: E0813 07:15:56.062917 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:56.066225 containerd[1473]: time="2025-08-13T07:15:56.066188087Z" level=info msg="CreateContainer within sandbox \"7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:15:56.081259 containerd[1473]: time="2025-08-13T07:15:56.081121193Z" level=info msg="CreateContainer within sandbox \"7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4add97425f409d7f19b28c8a83c50b9f20bb938c8e7b55781d8f0d03ba040cca\"" Aug 13 07:15:56.082103 containerd[1473]: time="2025-08-13T07:15:56.082068731Z" level=info msg="StartContainer for \"4add97425f409d7f19b28c8a83c50b9f20bb938c8e7b55781d8f0d03ba040cca\"" Aug 13 07:15:56.116159 systemd[1]: Started cri-containerd-4add97425f409d7f19b28c8a83c50b9f20bb938c8e7b55781d8f0d03ba040cca.scope - libcontainer container 4add97425f409d7f19b28c8a83c50b9f20bb938c8e7b55781d8f0d03ba040cca. Aug 13 07:15:56.146998 containerd[1473]: time="2025-08-13T07:15:56.146941232Z" level=info msg="StartContainer for \"4add97425f409d7f19b28c8a83c50b9f20bb938c8e7b55781d8f0d03ba040cca\" returns successfully" Aug 13 07:15:56.698793 containerd[1473]: time="2025-08-13T07:15:56.698667621Z" level=info msg="StopPodSandbox for \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\"" Aug 13 07:15:56.739059 systemd-networkd[1400]: calib2e2882a5aa: Gained IPv6LL Aug 13 07:15:56.783719 containerd[1473]: 2025-08-13 07:15:56.742 [INFO][4951] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Aug 13 07:15:56.783719 containerd[1473]: 2025-08-13 07:15:56.743 [INFO][4951] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" iface="eth0" netns="/var/run/netns/cni-3adac977-b33a-e17c-1b70-902954dfbb3e" Aug 13 07:15:56.783719 containerd[1473]: 2025-08-13 07:15:56.744 [INFO][4951] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" iface="eth0" netns="/var/run/netns/cni-3adac977-b33a-e17c-1b70-902954dfbb3e" Aug 13 07:15:56.783719 containerd[1473]: 2025-08-13 07:15:56.744 [INFO][4951] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" iface="eth0" netns="/var/run/netns/cni-3adac977-b33a-e17c-1b70-902954dfbb3e" Aug 13 07:15:56.783719 containerd[1473]: 2025-08-13 07:15:56.744 [INFO][4951] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Aug 13 07:15:56.783719 containerd[1473]: 2025-08-13 07:15:56.744 [INFO][4951] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Aug 13 07:15:56.783719 containerd[1473]: 2025-08-13 07:15:56.767 [INFO][4961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" HandleID="k8s-pod-network.86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Workload="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:15:56.783719 containerd[1473]: 2025-08-13 07:15:56.767 [INFO][4961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:56.783719 containerd[1473]: 2025-08-13 07:15:56.767 [INFO][4961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:15:56.783719 containerd[1473]: 2025-08-13 07:15:56.775 [WARNING][4961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" HandleID="k8s-pod-network.86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Workload="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:15:56.783719 containerd[1473]: 2025-08-13 07:15:56.776 [INFO][4961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" HandleID="k8s-pod-network.86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Workload="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:15:56.783719 containerd[1473]: 2025-08-13 07:15:56.777 [INFO][4961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:15:56.783719 containerd[1473]: 2025-08-13 07:15:56.780 [INFO][4951] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Aug 13 07:15:56.784503 containerd[1473]: time="2025-08-13T07:15:56.783906484Z" level=info msg="TearDown network for sandbox \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\" successfully" Aug 13 07:15:56.784503 containerd[1473]: time="2025-08-13T07:15:56.783934036Z" level=info msg="StopPodSandbox for \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\" returns successfully" Aug 13 07:15:56.784686 containerd[1473]: time="2025-08-13T07:15:56.784661652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-ww6f8,Uid:142cde8b-2616-412a-b265-085158d0383f,Namespace:calico-system,Attempt:1,}" Aug 13 07:15:56.793186 systemd[1]: run-netns-cni\x2d3adac977\x2db33a\x2de17c\x2d1b70\x2d902954dfbb3e.mount: Deactivated successfully. 
Aug 13 07:15:56.867143 systemd-networkd[1400]: cali6fcb0921607: Gained IPv6LL Aug 13 07:15:56.909903 systemd-networkd[1400]: cali0b50c4a81ab: Link UP Aug 13 07:15:56.910164 systemd-networkd[1400]: cali0b50c4a81ab: Gained carrier Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.837 [INFO][4969] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0 goldmane-768f4c5c69- calico-system 142cde8b-2616-412a-b265-085158d0383f 1041 0 2025-08-13 07:15:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-ww6f8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0b50c4a81ab [] [] }} ContainerID="6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" Namespace="calico-system" Pod="goldmane-768f4c5c69-ww6f8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ww6f8-" Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.837 [INFO][4969] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" Namespace="calico-system" Pod="goldmane-768f4c5c69-ww6f8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.863 [INFO][4986] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" HandleID="k8s-pod-network.6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" Workload="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.864 [INFO][4986] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" HandleID="k8s-pod-network.6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" Workload="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-ww6f8", "timestamp":"2025-08-13 07:15:56.863937794 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.864 [INFO][4986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.864 [INFO][4986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.864 [INFO][4986] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.874 [INFO][4986] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" host="localhost" Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.878 [INFO][4986] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.882 [INFO][4986] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.884 [INFO][4986] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.886 [INFO][4986] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.886 [INFO][4986] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" host="localhost" Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.888 [INFO][4986] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.892 [INFO][4986] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" host="localhost" Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.902 [INFO][4986] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" host="localhost" Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.902 [INFO][4986] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" host="localhost" Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.902 [INFO][4986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:15:56.923905 containerd[1473]: 2025-08-13 07:15:56.902 [INFO][4986] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" HandleID="k8s-pod-network.6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" Workload="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:15:56.924472 containerd[1473]: 2025-08-13 07:15:56.906 [INFO][4969] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" Namespace="calico-system" Pod="goldmane-768f4c5c69-ww6f8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"142cde8b-2616-412a-b265-085158d0383f", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-ww6f8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b50c4a81ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:56.924472 containerd[1473]: 2025-08-13 07:15:56.906 [INFO][4969] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" Namespace="calico-system" Pod="goldmane-768f4c5c69-ww6f8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:15:56.924472 containerd[1473]: 2025-08-13 07:15:56.906 [INFO][4969] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b50c4a81ab ContainerID="6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" Namespace="calico-system" Pod="goldmane-768f4c5c69-ww6f8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:15:56.924472 containerd[1473]: 2025-08-13 07:15:56.909 [INFO][4969] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" Namespace="calico-system" Pod="goldmane-768f4c5c69-ww6f8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:15:56.924472 containerd[1473]: 2025-08-13 07:15:56.909 [INFO][4969] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" Namespace="calico-system" Pod="goldmane-768f4c5c69-ww6f8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"142cde8b-2616-412a-b265-085158d0383f", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf", Pod:"goldmane-768f4c5c69-ww6f8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b50c4a81ab", MAC:"7a:31:a4:d5:53:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:15:56.924472 containerd[1473]: 2025-08-13 07:15:56.918 [INFO][4969] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf" Namespace="calico-system" Pod="goldmane-768f4c5c69-ww6f8" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:15:56.944461 containerd[1473]: time="2025-08-13T07:15:56.944325042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:15:56.944461 containerd[1473]: time="2025-08-13T07:15:56.944425821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:15:56.944461 containerd[1473]: time="2025-08-13T07:15:56.944459354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:56.944750 containerd[1473]: time="2025-08-13T07:15:56.944633612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:15:56.960011 kubelet[2504]: E0813 07:15:56.957646 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:56.969297 kubelet[2504]: E0813 07:15:56.968939 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:56.974186 systemd[1]: Started cri-containerd-6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf.scope - libcontainer container 6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf. 
Aug 13 07:15:56.974547 kubelet[2504]: I0813 07:15:56.974503 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8bmkx" podStartSLOduration=39.974450701 podStartE2EDuration="39.974450701s" podCreationTimestamp="2025-08-13 07:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:15:56.973989666 +0000 UTC m=+46.368081118" watchObservedRunningTime="2025-08-13 07:15:56.974450701 +0000 UTC m=+46.368542153" Aug 13 07:15:56.998279 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:15:57.011973 kubelet[2504]: I0813 07:15:57.011896 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-787f667446-wf97w" podStartSLOduration=3.556869372 podStartE2EDuration="9.011875146s" podCreationTimestamp="2025-08-13 07:15:48 +0000 UTC" firstStartedPulling="2025-08-13 07:15:50.405902972 +0000 UTC m=+39.799994424" lastFinishedPulling="2025-08-13 07:15:55.860908746 +0000 UTC m=+45.255000198" observedRunningTime="2025-08-13 07:15:57.011478201 +0000 UTC m=+46.405569673" watchObservedRunningTime="2025-08-13 07:15:57.011875146 +0000 UTC m=+46.405966598" Aug 13 07:15:57.037440 containerd[1473]: time="2025-08-13T07:15:57.037349442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-ww6f8,Uid:142cde8b-2616-412a-b265-085158d0383f,Namespace:calico-system,Attempt:1,} returns sandbox id \"6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf\"" Aug 13 07:15:57.188207 systemd-networkd[1400]: calif0757c609ce: Gained IPv6LL Aug 13 07:15:57.188561 systemd-networkd[1400]: cali9d94a92aeb6: Gained IPv6LL Aug 13 07:15:57.251116 systemd-networkd[1400]: cali7cf31b1fb23: Gained IPv6LL Aug 13 07:15:57.307499 systemd[1]: Started sshd@8-10.0.0.120:22-10.0.0.1:59586.service - OpenSSH per-connection server daemon (10.0.0.1:59586). Aug 13 07:15:57.363600 sshd[5051]: Accepted publickey for core from 10.0.0.1 port 59586 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:15:57.365603 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:15:57.369622 systemd-logind[1449]: New session 9 of user core. Aug 13 07:15:57.379018 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:15:57.514649 sshd[5051]: pam_unix(sshd:session): session closed for user core Aug 13 07:15:57.518958 systemd[1]: sshd@8-10.0.0.120:22-10.0.0.1:59586.service: Deactivated successfully. Aug 13 07:15:57.521176 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:15:57.522017 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:15:57.523016 systemd-logind[1449]: Removed session 9. 
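The two pod_startup_latency_tracker entries above can be read as a small identity: podStartSLOduration is the end-to-end startup time minus the time spent pulling images. For whisker-787f667446-wf97w, pulls ran from 07:15:50.405902972 to 07:15:55.860908746, i.e. 5.455005774s, and 9.011875146s - 5.455005774s = 3.556869372s, exactly the reported SLO duration. For coredns-668d6bf9bc-8bmkx no pull happened at all (both pull timestamps are the zero value 0001-01-01 00:00:00), so its SLO duration equals the full 39.974450701s end-to-end figure.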
Aug 13 07:15:57.972379 kubelet[2504]: E0813 07:15:57.972213 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:15:58.340068 systemd-networkd[1400]: cali0b50c4a81ab: Gained IPv6LL Aug 13 07:15:58.488832 containerd[1473]: time="2025-08-13T07:15:58.488774571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:58.489476 containerd[1473]: time="2025-08-13T07:15:58.489418339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:15:58.490633 containerd[1473]: time="2025-08-13T07:15:58.490596429Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:58.492832 containerd[1473]: time="2025-08-13T07:15:58.492793883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:15:58.493405 containerd[1473]: time="2025-08-13T07:15:58.493371567Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.632263658s" Aug 13 07:15:58.493445 containerd[1473]: time="2025-08-13T07:15:58.493403698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:15:58.494330 containerd[1473]: time="2025-08-13T07:15:58.494293928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:15:58.495363 containerd[1473]: time="2025-08-13T07:15:58.495330503Z" level=info msg="CreateContainer within sandbox \"994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:15:58.516571 containerd[1473]: time="2025-08-13T07:15:58.516535595Z" level=info msg="CreateContainer within sandbox \"994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dafd6859fd8f1a53736dab57fe68552976706cd0ed9c5eea72ccaf07813f180e\"" Aug 13 07:15:58.517055 containerd[1473]: time="2025-08-13T07:15:58.517019002Z" level=info msg="StartContainer for \"dafd6859fd8f1a53736dab57fe68552976706cd0ed9c5eea72ccaf07813f180e\"" Aug 13 07:15:58.550039 systemd[1]: Started cri-containerd-dafd6859fd8f1a53736dab57fe68552976706cd0ed9c5eea72ccaf07813f180e.scope - libcontainer container dafd6859fd8f1a53736dab57fe68552976706cd0ed9c5eea72ccaf07813f180e. 
Aug 13 07:15:58.643336 containerd[1473]: time="2025-08-13T07:15:58.642651532Z" level=info msg="StartContainer for \"dafd6859fd8f1a53736dab57fe68552976706cd0ed9c5eea72ccaf07813f180e\" returns successfully" Aug 13 07:15:58.976833 kubelet[2504]: E0813 07:15:58.976664 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:01.634672 containerd[1473]: time="2025-08-13T07:16:01.634543259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:01.635547 containerd[1473]: time="2025-08-13T07:16:01.635456162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:16:01.636566 containerd[1473]: time="2025-08-13T07:16:01.636524927Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:01.638965 containerd[1473]: time="2025-08-13T07:16:01.638925881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:01.639806 containerd[1473]: time="2025-08-13T07:16:01.639767580Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.145443486s" Aug 13 07:16:01.639837 containerd[1473]: time="2025-08-13T07:16:01.639806423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:16:01.641255 containerd[1473]: time="2025-08-13T07:16:01.641066457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:16:01.642356 containerd[1473]: time="2025-08-13T07:16:01.642316663Z" level=info msg="CreateContainer within sandbox \"e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:16:01.656815 containerd[1473]: time="2025-08-13T07:16:01.656750553Z" level=info msg="CreateContainer within sandbox \"e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b1e69219ccce06e40edb4e41f6e876c850526f2ba15920a1c86640b60a7bb6b5\"" Aug 13 07:16:01.657630 containerd[1473]: time="2025-08-13T07:16:01.657410662Z" level=info msg="StartContainer for \"b1e69219ccce06e40edb4e41f6e876c850526f2ba15920a1c86640b60a7bb6b5\"" Aug 13 07:16:01.695055 systemd[1]: Started cri-containerd-b1e69219ccce06e40edb4e41f6e876c850526f2ba15920a1c86640b60a7bb6b5.scope - libcontainer container b1e69219ccce06e40edb4e41f6e876c850526f2ba15920a1c86640b60a7bb6b5. 
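The kubelet "Nameserver limits exceeded" errors recurring throughout this section come from kubelet's resolv.conf validation: pods that inherit the node's resolver configuration are capped at three nameservers, and kubelet drops the rest while keeping (here) 1.1.1.1, 1.0.0.1 and 8.8.8.8. One way to silence the warning, assuming the node's /etc/resolv.conf simply lists more than three servers, is to trim it to the three kubelet applies anyway:

    # /etc/resolv.conf (sketch; keep at most three nameserver lines)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8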
Aug 13 07:16:01.755702 containerd[1473]: time="2025-08-13T07:16:01.755624428Z" level=info msg="StartContainer for \"b1e69219ccce06e40edb4e41f6e876c850526f2ba15920a1c86640b60a7bb6b5\" returns successfully" Aug 13 07:16:02.216981 containerd[1473]: time="2025-08-13T07:16:02.216914952Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:02.217930 containerd[1473]: time="2025-08-13T07:16:02.217837092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:16:02.220286 containerd[1473]: time="2025-08-13T07:16:02.220244448Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 579.139999ms" Aug 13 07:16:02.220346 containerd[1473]: time="2025-08-13T07:16:02.220290895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:16:02.221752 containerd[1473]: time="2025-08-13T07:16:02.221564644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:16:02.222503 containerd[1473]: time="2025-08-13T07:16:02.222472879Z" level=info msg="CreateContainer within sandbox \"ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:16:02.236889 containerd[1473]: time="2025-08-13T07:16:02.236831176Z" level=info msg="CreateContainer within sandbox \"ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dbbfcf23a345b97d5b7b85249aa14ff94be20307d6b2bab58453b8f54965c9c8\"" Aug 13 07:16:02.238243 containerd[1473]: time="2025-08-13T07:16:02.237465546Z" level=info msg="StartContainer for \"dbbfcf23a345b97d5b7b85249aa14ff94be20307d6b2bab58453b8f54965c9c8\"" Aug 13 07:16:02.270048 systemd[1]: Started cri-containerd-dbbfcf23a345b97d5b7b85249aa14ff94be20307d6b2bab58453b8f54965c9c8.scope - libcontainer container dbbfcf23a345b97d5b7b85249aa14ff94be20307d6b2bab58453b8f54965c9c8. Aug 13 07:16:02.316414 containerd[1473]: time="2025-08-13T07:16:02.316358315Z" level=info msg="StartContainer for \"dbbfcf23a345b97d5b7b85249aa14ff94be20307d6b2bab58453b8f54965c9c8\" returns successfully" Aug 13 07:16:02.529945 systemd[1]: Started sshd@9-10.0.0.120:22-10.0.0.1:60058.service - OpenSSH per-connection server daemon (10.0.0.1:60058). Aug 13 07:16:02.598833 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 60058 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:02.601308 sshd[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:02.608940 systemd-logind[1449]: New session 10 of user core. Aug 13 07:16:02.615838 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:16:02.967422 sshd[5209]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:02.971268 systemd[1]: sshd@9-10.0.0.120:22-10.0.0.1:60058.service: Deactivated successfully. Aug 13 07:16:02.974492 systemd[1]: session-10.scope: Deactivated successfully. 
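Note the two very different pull times for the same apiserver image: the first pull read about 47 MB and took 3.145443486s, while the second, requested for the other apiserver replica, read only 77 bytes, completed in 579.139999ms, and logged ImageUpdate rather than ImageCreate. That pattern is consistent with the image layers already being present locally, leaving only the manifest to re-resolve; the client.Pull sketch shown earlier after the whisker-backend pull would behave the same way on a second invocation against an already-unpacked ref.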
Aug 13 07:16:02.976462 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:16:02.978437 systemd-logind[1449]: Removed session 10. Aug 13 07:16:03.002198 kubelet[2504]: I0813 07:16:03.002117 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bc76445cf-6gxtp" podStartSLOduration=30.881179903 podStartE2EDuration="37.002084404s" podCreationTimestamp="2025-08-13 07:15:26 +0000 UTC" firstStartedPulling="2025-08-13 07:15:55.519844421 +0000 UTC m=+44.913935873" lastFinishedPulling="2025-08-13 07:16:01.640748912 +0000 UTC m=+51.034840374" observedRunningTime="2025-08-13 07:16:01.999986044 +0000 UTC m=+51.394077496" watchObservedRunningTime="2025-08-13 07:16:03.002084404 +0000 UTC m=+52.396175856" Aug 13 07:16:03.580015 kubelet[2504]: I0813 07:16:03.579709 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bc76445cf-h7vjm" podStartSLOduration=31.015244966 podStartE2EDuration="37.579690649s" podCreationTimestamp="2025-08-13 07:15:26 +0000 UTC" firstStartedPulling="2025-08-13 07:15:55.65684055 +0000 UTC m=+45.050932012" lastFinishedPulling="2025-08-13 07:16:02.221286243 +0000 UTC m=+51.615377695" observedRunningTime="2025-08-13 07:16:03.003787318 +0000 UTC m=+52.397878770" watchObservedRunningTime="2025-08-13 07:16:03.579690649 +0000 UTC m=+52.973782101" Aug 13 07:16:06.543016 containerd[1473]: time="2025-08-13T07:16:06.542954434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:06.543703 containerd[1473]: time="2025-08-13T07:16:06.543658315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:16:06.544816 containerd[1473]: time="2025-08-13T07:16:06.544785880Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:06.547199 containerd[1473]: time="2025-08-13T07:16:06.547148371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:06.547703 containerd[1473]: time="2025-08-13T07:16:06.547672574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.326075308s" Aug 13 07:16:06.547734 containerd[1473]: time="2025-08-13T07:16:06.547703893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:16:06.548549 containerd[1473]: time="2025-08-13T07:16:06.548516467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:16:06.558934 containerd[1473]: time="2025-08-13T07:16:06.558897921Z" level=info msg="CreateContainer within sandbox \"012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:16:06.574367 containerd[1473]: time="2025-08-13T07:16:06.574319097Z" level=info msg="CreateContainer within sandbox \"012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"da36da4e303557c2c79d8741b4f7440da10feeb761ea790f81b5be5584bbdfb8\"" Aug 13 07:16:06.575533 containerd[1473]: time="2025-08-13T07:16:06.575507796Z" level=info msg="StartContainer for \"da36da4e303557c2c79d8741b4f7440da10feeb761ea790f81b5be5584bbdfb8\"" Aug 13 07:16:06.605008 systemd[1]: Started cri-containerd-da36da4e303557c2c79d8741b4f7440da10feeb761ea790f81b5be5584bbdfb8.scope - libcontainer container da36da4e303557c2c79d8741b4f7440da10feeb761ea790f81b5be5584bbdfb8. Aug 13 07:16:06.645129 containerd[1473]: time="2025-08-13T07:16:06.645079627Z" level=info msg="StartContainer for \"da36da4e303557c2c79d8741b4f7440da10feeb761ea790f81b5be5584bbdfb8\" returns successfully" Aug 13 07:16:07.019618 kubelet[2504]: I0813 07:16:07.018978 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d5b45b8d4-f96tt" podStartSLOduration=27.172674979 podStartE2EDuration="38.018960754s" podCreationTimestamp="2025-08-13 07:15:29 +0000 UTC" firstStartedPulling="2025-08-13 07:15:55.702078116 +0000 UTC m=+45.096169568" lastFinishedPulling="2025-08-13 07:16:06.548363891 +0000 UTC m=+55.942455343" observedRunningTime="2025-08-13 07:16:07.017938176 +0000 UTC m=+56.412029638" watchObservedRunningTime="2025-08-13 07:16:07.018960754 +0000 UTC m=+56.413052216" Aug 13 07:16:07.980137 systemd[1]: Started sshd@10-10.0.0.120:22-10.0.0.1:45928.service - OpenSSH per-connection server daemon (10.0.0.1:45928). Aug 13 07:16:08.053327 sshd[5312]: Accepted publickey for core from 10.0.0.1 port 45928 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:08.055905 sshd[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:08.061541 systemd-logind[1449]: New session 11 of user core. Aug 13 07:16:08.068061 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:16:08.278732 sshd[5312]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:08.289975 systemd[1]: sshd@10-10.0.0.120:22-10.0.0.1:45928.service: Deactivated successfully. Aug 13 07:16:08.292495 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:16:08.294647 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:16:08.303132 systemd[1]: Started sshd@11-10.0.0.120:22-10.0.0.1:45936.service - OpenSSH per-connection server daemon (10.0.0.1:45936). Aug 13 07:16:08.304232 systemd-logind[1449]: Removed session 11. Aug 13 07:16:08.334626 sshd[5329]: Accepted publickey for core from 10.0.0.1 port 45936 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:08.336482 sshd[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:08.341328 systemd-logind[1449]: New session 12 of user core. Aug 13 07:16:08.353140 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 07:16:08.503573 sshd[5329]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:08.514773 systemd[1]: sshd@11-10.0.0.120:22-10.0.0.1:45936.service: Deactivated successfully. Aug 13 07:16:08.518361 systemd[1]: session-12.scope: Deactivated successfully. 
Aug 13 07:16:08.522664 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:16:08.529735 systemd[1]: Started sshd@12-10.0.0.120:22-10.0.0.1:45940.service - OpenSSH per-connection server daemon (10.0.0.1:45940). Aug 13 07:16:08.533731 systemd-logind[1449]: Removed session 12. Aug 13 07:16:08.563998 sshd[5342]: Accepted publickey for core from 10.0.0.1 port 45940 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:08.565822 sshd[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:08.570522 systemd-logind[1449]: New session 13 of user core. Aug 13 07:16:08.576022 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:16:08.703568 sshd[5342]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:08.709302 systemd[1]: sshd@12-10.0.0.120:22-10.0.0.1:45940.service: Deactivated successfully. Aug 13 07:16:08.711787 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:16:08.712691 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:16:08.714108 systemd-logind[1449]: Removed session 13. Aug 13 07:16:10.830660 containerd[1473]: time="2025-08-13T07:16:10.830538019Z" level=info msg="StopPodSandbox for \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\"" Aug 13 07:16:10.940582 containerd[1473]: 2025-08-13 07:16:10.890 [WARNING][5379] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0", GenerateName:"calico-apiserver-7bc76445cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"f71c796a-2b24-4955-a685-11764bd3ee81", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc76445cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3", Pod:"calico-apiserver-7bc76445cf-6gxtp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2e2882a5aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:16:10.940582 containerd[1473]: 2025-08-13 07:16:10.891 [INFO][5379] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Aug 13 07:16:10.940582 containerd[1473]: 2025-08-13 07:16:10.891 [INFO][5379] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" iface="eth0" netns="" Aug 13 07:16:10.940582 containerd[1473]: 2025-08-13 07:16:10.891 [INFO][5379] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Aug 13 07:16:10.940582 containerd[1473]: 2025-08-13 07:16:10.891 [INFO][5379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Aug 13 07:16:10.940582 containerd[1473]: 2025-08-13 07:16:10.921 [INFO][5388] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" HandleID="k8s-pod-network.0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Workload="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:16:10.940582 containerd[1473]: 2025-08-13 07:16:10.923 [INFO][5388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:10.940582 containerd[1473]: 2025-08-13 07:16:10.923 [INFO][5388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:16:10.940582 containerd[1473]: 2025-08-13 07:16:10.930 [WARNING][5388] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" HandleID="k8s-pod-network.0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Workload="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:16:10.940582 containerd[1473]: 2025-08-13 07:16:10.931 [INFO][5388] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" HandleID="k8s-pod-network.0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Workload="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:16:10.940582 containerd[1473]: 2025-08-13 07:16:10.932 [INFO][5388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:10.940582 containerd[1473]: 2025-08-13 07:16:10.936 [INFO][5379] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Aug 13 07:16:10.941216 containerd[1473]: time="2025-08-13T07:16:10.940643860Z" level=info msg="TearDown network for sandbox \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\" successfully" Aug 13 07:16:10.941216 containerd[1473]: time="2025-08-13T07:16:10.940677384Z" level=info msg="StopPodSandbox for \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\" returns successfully" Aug 13 07:16:10.967519 containerd[1473]: time="2025-08-13T07:16:10.967090355Z" level=info msg="RemovePodSandbox for \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\"" Aug 13 07:16:10.969747 containerd[1473]: time="2025-08-13T07:16:10.969707912Z" level=info msg="Forcibly stopping sandbox \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\"" Aug 13 07:16:11.069323 containerd[1473]: 2025-08-13 07:16:11.026 [WARNING][5407] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0", GenerateName:"calico-apiserver-7bc76445cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"f71c796a-2b24-4955-a685-11764bd3ee81", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc76445cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5411b843cec3ea131080fdeef93d045aeb1f6eebdbbf7c62bb0c096070b71f3", Pod:"calico-apiserver-7bc76445cf-6gxtp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2e2882a5aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:16:11.069323 containerd[1473]: 2025-08-13 07:16:11.027 [INFO][5407] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Aug 13 07:16:11.069323 containerd[1473]: 2025-08-13 07:16:11.027 [INFO][5407] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" iface="eth0" netns="" Aug 13 07:16:11.069323 containerd[1473]: 2025-08-13 07:16:11.027 [INFO][5407] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Aug 13 07:16:11.069323 containerd[1473]: 2025-08-13 07:16:11.027 [INFO][5407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Aug 13 07:16:11.069323 containerd[1473]: 2025-08-13 07:16:11.055 [INFO][5415] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" HandleID="k8s-pod-network.0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Workload="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:16:11.069323 containerd[1473]: 2025-08-13 07:16:11.055 [INFO][5415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:11.069323 containerd[1473]: 2025-08-13 07:16:11.055 [INFO][5415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:16:11.069323 containerd[1473]: 2025-08-13 07:16:11.061 [WARNING][5415] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" HandleID="k8s-pod-network.0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Workload="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:16:11.069323 containerd[1473]: 2025-08-13 07:16:11.061 [INFO][5415] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" HandleID="k8s-pod-network.0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Workload="localhost-k8s-calico--apiserver--7bc76445cf--6gxtp-eth0" Aug 13 07:16:11.069323 containerd[1473]: 2025-08-13 07:16:11.062 [INFO][5415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:11.069323 containerd[1473]: 2025-08-13 07:16:11.066 [INFO][5407] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1" Aug 13 07:16:11.069777 containerd[1473]: time="2025-08-13T07:16:11.069398448Z" level=info msg="TearDown network for sandbox \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\" successfully" Aug 13 07:16:11.087509 containerd[1473]: time="2025-08-13T07:16:11.087369813Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:16:11.087509 containerd[1473]: time="2025-08-13T07:16:11.087478754Z" level=info msg="RemovePodSandbox \"0516cc73d62fa3844ef4af432e9e0aaa4de321f4117788d8711f599ac53fafb1\" returns successfully" Aug 13 07:16:11.100031 containerd[1473]: time="2025-08-13T07:16:11.099964338Z" level=info msg="StopPodSandbox for \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\"" Aug 13 07:16:11.175920 containerd[1473]: 2025-08-13 07:16:11.134 [WARNING][5433] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" WorkloadEndpoint="localhost-k8s-whisker--6b6cf55fc8--x54ms-eth0" Aug 13 07:16:11.175920 containerd[1473]: 2025-08-13 07:16:11.134 [INFO][5433] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Aug 13 07:16:11.175920 containerd[1473]: 2025-08-13 07:16:11.134 [INFO][5433] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" iface="eth0" netns="" Aug 13 07:16:11.175920 containerd[1473]: 2025-08-13 07:16:11.134 [INFO][5433] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Aug 13 07:16:11.175920 containerd[1473]: 2025-08-13 07:16:11.134 [INFO][5433] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Aug 13 07:16:11.175920 containerd[1473]: 2025-08-13 07:16:11.157 [INFO][5441] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" HandleID="k8s-pod-network.25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Workload="localhost-k8s-whisker--6b6cf55fc8--x54ms-eth0" Aug 13 07:16:11.175920 containerd[1473]: 2025-08-13 07:16:11.157 [INFO][5441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:11.175920 containerd[1473]: 2025-08-13 07:16:11.157 [INFO][5441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:16:11.175920 containerd[1473]: 2025-08-13 07:16:11.166 [WARNING][5441] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" HandleID="k8s-pod-network.25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Workload="localhost-k8s-whisker--6b6cf55fc8--x54ms-eth0" Aug 13 07:16:11.175920 containerd[1473]: 2025-08-13 07:16:11.167 [INFO][5441] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" HandleID="k8s-pod-network.25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Workload="localhost-k8s-whisker--6b6cf55fc8--x54ms-eth0" Aug 13 07:16:11.175920 containerd[1473]: 2025-08-13 07:16:11.169 [INFO][5441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:11.175920 containerd[1473]: 2025-08-13 07:16:11.172 [INFO][5433] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Aug 13 07:16:11.176397 containerd[1473]: time="2025-08-13T07:16:11.175992767Z" level=info msg="TearDown network for sandbox \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\" successfully" Aug 13 07:16:11.176397 containerd[1473]: time="2025-08-13T07:16:11.176020280Z" level=info msg="StopPodSandbox for \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\" returns successfully" Aug 13 07:16:11.176700 containerd[1473]: time="2025-08-13T07:16:11.176673245Z" level=info msg="RemovePodSandbox for \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\"" Aug 13 07:16:11.176742 containerd[1473]: time="2025-08-13T07:16:11.176700077Z" level=info msg="Forcibly stopping sandbox \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\"" Aug 13 07:16:11.260771 containerd[1473]: 2025-08-13 07:16:11.217 [WARNING][5459] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" WorkloadEndpoint="localhost-k8s-whisker--6b6cf55fc8--x54ms-eth0" Aug 13 07:16:11.260771 containerd[1473]: 2025-08-13 07:16:11.217 [INFO][5459] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Aug 13 07:16:11.260771 containerd[1473]: 2025-08-13 07:16:11.217 [INFO][5459] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" iface="eth0" netns="" Aug 13 07:16:11.260771 containerd[1473]: 2025-08-13 07:16:11.217 [INFO][5459] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Aug 13 07:16:11.260771 containerd[1473]: 2025-08-13 07:16:11.217 [INFO][5459] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Aug 13 07:16:11.260771 containerd[1473]: 2025-08-13 07:16:11.242 [INFO][5467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" HandleID="k8s-pod-network.25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Workload="localhost-k8s-whisker--6b6cf55fc8--x54ms-eth0" Aug 13 07:16:11.260771 containerd[1473]: 2025-08-13 07:16:11.242 [INFO][5467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:11.260771 containerd[1473]: 2025-08-13 07:16:11.242 [INFO][5467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:16:11.260771 containerd[1473]: 2025-08-13 07:16:11.250 [WARNING][5467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" HandleID="k8s-pod-network.25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Workload="localhost-k8s-whisker--6b6cf55fc8--x54ms-eth0" Aug 13 07:16:11.260771 containerd[1473]: 2025-08-13 07:16:11.250 [INFO][5467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" HandleID="k8s-pod-network.25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Workload="localhost-k8s-whisker--6b6cf55fc8--x54ms-eth0" Aug 13 07:16:11.260771 containerd[1473]: 2025-08-13 07:16:11.252 [INFO][5467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:11.260771 containerd[1473]: 2025-08-13 07:16:11.257 [INFO][5459] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede" Aug 13 07:16:11.261330 containerd[1473]: time="2025-08-13T07:16:11.260823461Z" level=info msg="TearDown network for sandbox \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\" successfully" Aug 13 07:16:11.273456 containerd[1473]: time="2025-08-13T07:16:11.273359945Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:16:11.273456 containerd[1473]: time="2025-08-13T07:16:11.273460599Z" level=info msg="RemovePodSandbox \"25d6c24ca73512c7f2fdb738c040ad68dfae8189c554b4114192ac65efddfede\" returns successfully" Aug 13 07:16:11.274179 containerd[1473]: time="2025-08-13T07:16:11.274143472Z" level=info msg="StopPodSandbox for \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\"" Aug 13 07:16:11.327672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712667404.mount: Deactivated successfully. Aug 13 07:16:11.362957 containerd[1473]: 2025-08-13 07:16:11.313 [WARNING][5484] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0f44421e-e215-4efb-b425-a905d3215525", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa", Pod:"coredns-668d6bf9bc-8bmkx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0757c609ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:16:11.362957 containerd[1473]: 2025-08-13 07:16:11.313 [INFO][5484] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Aug 13 07:16:11.362957 containerd[1473]: 2025-08-13 07:16:11.313 [INFO][5484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" iface="eth0" netns="" Aug 13 07:16:11.362957 containerd[1473]: 2025-08-13 07:16:11.313 [INFO][5484] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Aug 13 07:16:11.362957 containerd[1473]: 2025-08-13 07:16:11.313 [INFO][5484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Aug 13 07:16:11.362957 containerd[1473]: 2025-08-13 07:16:11.348 [INFO][5492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" HandleID="k8s-pod-network.d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Workload="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:16:11.362957 containerd[1473]: 2025-08-13 07:16:11.348 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:11.362957 containerd[1473]: 2025-08-13 07:16:11.348 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:16:11.362957 containerd[1473]: 2025-08-13 07:16:11.354 [WARNING][5492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" HandleID="k8s-pod-network.d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Workload="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:16:11.362957 containerd[1473]: 2025-08-13 07:16:11.354 [INFO][5492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" HandleID="k8s-pod-network.d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Workload="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:16:11.362957 containerd[1473]: 2025-08-13 07:16:11.355 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:11.362957 containerd[1473]: 2025-08-13 07:16:11.359 [INFO][5484] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Aug 13 07:16:11.362957 containerd[1473]: time="2025-08-13T07:16:11.362878934Z" level=info msg="TearDown network for sandbox \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\" successfully" Aug 13 07:16:11.362957 containerd[1473]: time="2025-08-13T07:16:11.362909944Z" level=info msg="StopPodSandbox for \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\" returns successfully" Aug 13 07:16:11.363627 containerd[1473]: time="2025-08-13T07:16:11.363511500Z" level=info msg="RemovePodSandbox for \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\"" Aug 13 07:16:11.363627 containerd[1473]: time="2025-08-13T07:16:11.363548151Z" level=info msg="Forcibly stopping sandbox \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\"" Aug 13 07:16:11.450081 containerd[1473]: 2025-08-13 07:16:11.408 [WARNING][5510] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0f44421e-e215-4efb-b425-a905d3215525", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7fb344a88fb59437f50ebab784dacbc5b9700aa22d6c210953f8b1534132eeaa", Pod:"coredns-668d6bf9bc-8bmkx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0757c609ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:16:11.450081 containerd[1473]: 2025-08-13 07:16:11.408 [INFO][5510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Aug 13 07:16:11.450081 containerd[1473]: 2025-08-13 07:16:11.408 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" iface="eth0" netns="" Aug 13 07:16:11.450081 containerd[1473]: 2025-08-13 07:16:11.408 [INFO][5510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Aug 13 07:16:11.450081 containerd[1473]: 2025-08-13 07:16:11.408 [INFO][5510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Aug 13 07:16:11.450081 containerd[1473]: 2025-08-13 07:16:11.434 [INFO][5521] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" HandleID="k8s-pod-network.d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Workload="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:16:11.450081 containerd[1473]: 2025-08-13 07:16:11.434 [INFO][5521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:11.450081 containerd[1473]: 2025-08-13 07:16:11.434 [INFO][5521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:16:11.450081 containerd[1473]: 2025-08-13 07:16:11.441 [WARNING][5521] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" HandleID="k8s-pod-network.d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Workload="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:16:11.450081 containerd[1473]: 2025-08-13 07:16:11.441 [INFO][5521] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" HandleID="k8s-pod-network.d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Workload="localhost-k8s-coredns--668d6bf9bc--8bmkx-eth0" Aug 13 07:16:11.450081 containerd[1473]: 2025-08-13 07:16:11.443 [INFO][5521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:11.450081 containerd[1473]: 2025-08-13 07:16:11.446 [INFO][5510] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5" Aug 13 07:16:11.450614 containerd[1473]: time="2025-08-13T07:16:11.450344424Z" level=info msg="TearDown network for sandbox \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\" successfully" Aug 13 07:16:11.454934 containerd[1473]: time="2025-08-13T07:16:11.454883801Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:16:11.455022 containerd[1473]: time="2025-08-13T07:16:11.454985859Z" level=info msg="RemovePodSandbox \"d1bbde35cbf8cff45e3606b99c16787a92415c1ed48add720c0683fc2330bab5\" returns successfully" Aug 13 07:16:11.455548 containerd[1473]: time="2025-08-13T07:16:11.455475017Z" level=info msg="StopPodSandbox for \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\"" Aug 13 07:16:11.570550 containerd[1473]: 2025-08-13 07:16:11.500 [WARNING][5538] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"142cde8b-2616-412a-b265-085158d0383f", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf", Pod:"goldmane-768f4c5c69-ww6f8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b50c4a81ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:16:11.570550 containerd[1473]: 2025-08-13 07:16:11.501 [INFO][5538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Aug 13 07:16:11.570550 containerd[1473]: 2025-08-13 07:16:11.501 [INFO][5538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" iface="eth0" netns="" Aug 13 07:16:11.570550 containerd[1473]: 2025-08-13 07:16:11.501 [INFO][5538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Aug 13 07:16:11.570550 containerd[1473]: 2025-08-13 07:16:11.501 [INFO][5538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Aug 13 07:16:11.570550 containerd[1473]: 2025-08-13 07:16:11.549 [INFO][5547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" HandleID="k8s-pod-network.86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Workload="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:16:11.570550 containerd[1473]: 2025-08-13 07:16:11.549 [INFO][5547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:11.570550 containerd[1473]: 2025-08-13 07:16:11.550 [INFO][5547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:16:11.570550 containerd[1473]: 2025-08-13 07:16:11.557 [WARNING][5547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" HandleID="k8s-pod-network.86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Workload="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:16:11.570550 containerd[1473]: 2025-08-13 07:16:11.557 [INFO][5547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" HandleID="k8s-pod-network.86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Workload="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:16:11.570550 containerd[1473]: 2025-08-13 07:16:11.561 [INFO][5547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:11.570550 containerd[1473]: 2025-08-13 07:16:11.565 [INFO][5538] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Aug 13 07:16:11.570550 containerd[1473]: time="2025-08-13T07:16:11.570414967Z" level=info msg="TearDown network for sandbox \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\" successfully" Aug 13 07:16:11.570550 containerd[1473]: time="2025-08-13T07:16:11.570445325Z" level=info msg="StopPodSandbox for \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\" returns successfully" Aug 13 07:16:11.577033 containerd[1473]: time="2025-08-13T07:16:11.576993394Z" level=info msg="RemovePodSandbox for \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\"" Aug 13 07:16:11.577033 containerd[1473]: time="2025-08-13T07:16:11.577032691Z" level=info msg="Forcibly stopping sandbox \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\"" Aug 13 07:16:12.279196 containerd[1473]: 2025-08-13 07:16:12.242 [WARNING][5564] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"142cde8b-2616-412a-b265-085158d0383f", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf", Pod:"goldmane-768f4c5c69-ww6f8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b50c4a81ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:16:12.279196 containerd[1473]: 2025-08-13 07:16:12.242 [INFO][5564] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Aug 13 07:16:12.279196 containerd[1473]: 2025-08-13 07:16:12.242 [INFO][5564] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" iface="eth0" netns="" Aug 13 07:16:12.279196 containerd[1473]: 2025-08-13 07:16:12.242 [INFO][5564] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Aug 13 07:16:12.279196 containerd[1473]: 2025-08-13 07:16:12.242 [INFO][5564] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Aug 13 07:16:12.279196 containerd[1473]: 2025-08-13 07:16:12.265 [INFO][5573] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" HandleID="k8s-pod-network.86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Workload="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:16:12.279196 containerd[1473]: 2025-08-13 07:16:12.265 [INFO][5573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:12.279196 containerd[1473]: 2025-08-13 07:16:12.265 [INFO][5573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:16:12.279196 containerd[1473]: 2025-08-13 07:16:12.271 [WARNING][5573] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" HandleID="k8s-pod-network.86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Workload="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:16:12.279196 containerd[1473]: 2025-08-13 07:16:12.271 [INFO][5573] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" HandleID="k8s-pod-network.86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Workload="localhost-k8s-goldmane--768f4c5c69--ww6f8-eth0" Aug 13 07:16:12.279196 containerd[1473]: 2025-08-13 07:16:12.273 [INFO][5573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:12.279196 containerd[1473]: 2025-08-13 07:16:12.276 [INFO][5564] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff" Aug 13 07:16:12.280258 containerd[1473]: time="2025-08-13T07:16:12.279226250Z" level=info msg="TearDown network for sandbox \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\" successfully" Aug 13 07:16:12.302733 containerd[1473]: time="2025-08-13T07:16:12.302648176Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:16:12.302733 containerd[1473]: time="2025-08-13T07:16:12.302749142Z" level=info msg="RemovePodSandbox \"86617824b28fda07cb790495ac76cda3443c052bef6fad57e8b11514e8c541ff\" returns successfully" Aug 13 07:16:12.303424 containerd[1473]: time="2025-08-13T07:16:12.303358432Z" level=info msg="StopPodSandbox for \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\"" Aug 13 07:16:12.319714 containerd[1473]: time="2025-08-13T07:16:12.319644462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:12.320900 containerd[1473]: time="2025-08-13T07:16:12.320819637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:16:12.322442 containerd[1473]: time="2025-08-13T07:16:12.322407762Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:12.325370 containerd[1473]: time="2025-08-13T07:16:12.325096487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:12.326067 containerd[1473]: time="2025-08-13T07:16:12.326026738Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.777478281s" Aug 13 07:16:12.326277 containerd[1473]: time="2025-08-13T07:16:12.326157302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 
07:16:12.328984 containerd[1473]: time="2025-08-13T07:16:12.328951260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:16:12.330313 containerd[1473]: time="2025-08-13T07:16:12.330267198Z" level=info msg="CreateContainer within sandbox \"6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:16:12.423761 containerd[1473]: time="2025-08-13T07:16:12.423564598Z" level=info msg="CreateContainer within sandbox \"6ed0de1fe5057f0038a03a38efebd2efd4fc0f8eb185032ec9a8a3c01ac7c6cf\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8a55aa47c916bf35c31f885abdf40333dc5b9e3bd5185ac0a036ecc44ae1b1fb\"" Aug 13 07:16:12.425444 containerd[1473]: time="2025-08-13T07:16:12.425166809Z" level=info msg="StartContainer for \"8a55aa47c916bf35c31f885abdf40333dc5b9e3bd5185ac0a036ecc44ae1b1fb\"" Aug 13 07:16:12.439760 containerd[1473]: 2025-08-13 07:16:12.346 [WARNING][5596] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0", GenerateName:"calico-kube-controllers-d5b45b8d4-", Namespace:"calico-system", SelfLink:"", UID:"5c8ac06a-59ac-4ad7-851b-39e9a256e71f", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d5b45b8d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8", Pod:"calico-kube-controllers-d5b45b8d4-f96tt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7cf31b1fb23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:16:12.439760 containerd[1473]: 2025-08-13 07:16:12.346 [INFO][5596] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Aug 13 07:16:12.439760 containerd[1473]: 2025-08-13 07:16:12.346 [INFO][5596] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" iface="eth0" netns="" Aug 13 07:16:12.439760 containerd[1473]: 2025-08-13 07:16:12.346 [INFO][5596] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Aug 13 07:16:12.439760 containerd[1473]: 2025-08-13 07:16:12.346 [INFO][5596] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Aug 13 07:16:12.439760 containerd[1473]: 2025-08-13 07:16:12.369 [INFO][5604] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" HandleID="k8s-pod-network.354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Workload="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:16:12.439760 containerd[1473]: 2025-08-13 07:16:12.369 [INFO][5604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:12.439760 containerd[1473]: 2025-08-13 07:16:12.369 [INFO][5604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:16:12.439760 containerd[1473]: 2025-08-13 07:16:12.376 [WARNING][5604] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" HandleID="k8s-pod-network.354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Workload="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:16:12.439760 containerd[1473]: 2025-08-13 07:16:12.376 [INFO][5604] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" HandleID="k8s-pod-network.354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Workload="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:16:12.439760 containerd[1473]: 2025-08-13 07:16:12.377 [INFO][5604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:12.439760 containerd[1473]: 2025-08-13 07:16:12.382 [INFO][5596] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Aug 13 07:16:12.439760 containerd[1473]: time="2025-08-13T07:16:12.439613739Z" level=info msg="TearDown network for sandbox \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\" successfully" Aug 13 07:16:12.439760 containerd[1473]: time="2025-08-13T07:16:12.439652855Z" level=info msg="StopPodSandbox for \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\" returns successfully" Aug 13 07:16:12.440714 containerd[1473]: time="2025-08-13T07:16:12.440220054Z" level=info msg="RemovePodSandbox for \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\"" Aug 13 07:16:12.440714 containerd[1473]: time="2025-08-13T07:16:12.440248167Z" level=info msg="Forcibly stopping sandbox \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\"" Aug 13 07:16:12.518683 containerd[1473]: 2025-08-13 07:16:12.477 [WARNING][5624] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0", GenerateName:"calico-kube-controllers-d5b45b8d4-", Namespace:"calico-system", SelfLink:"", UID:"5c8ac06a-59ac-4ad7-851b-39e9a256e71f", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d5b45b8d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"012560864cbec3ca6114bcf85dc491b54c195f8552605ba69087214d59d921d8", Pod:"calico-kube-controllers-d5b45b8d4-f96tt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7cf31b1fb23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:16:12.518683 containerd[1473]: 2025-08-13 07:16:12.477 [INFO][5624] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Aug 13 07:16:12.518683 containerd[1473]: 2025-08-13 07:16:12.477 [INFO][5624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" iface="eth0" netns="" Aug 13 07:16:12.518683 containerd[1473]: 2025-08-13 07:16:12.477 [INFO][5624] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Aug 13 07:16:12.518683 containerd[1473]: 2025-08-13 07:16:12.478 [INFO][5624] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Aug 13 07:16:12.518683 containerd[1473]: 2025-08-13 07:16:12.503 [INFO][5633] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" HandleID="k8s-pod-network.354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Workload="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:16:12.518683 containerd[1473]: 2025-08-13 07:16:12.504 [INFO][5633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:12.518683 containerd[1473]: 2025-08-13 07:16:12.504 [INFO][5633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:16:12.518683 containerd[1473]: 2025-08-13 07:16:12.510 [WARNING][5633] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" HandleID="k8s-pod-network.354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Workload="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:16:12.518683 containerd[1473]: 2025-08-13 07:16:12.510 [INFO][5633] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" HandleID="k8s-pod-network.354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Workload="localhost-k8s-calico--kube--controllers--d5b45b8d4--f96tt-eth0" Aug 13 07:16:12.518683 containerd[1473]: 2025-08-13 07:16:12.512 [INFO][5633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:12.518683 containerd[1473]: 2025-08-13 07:16:12.515 [INFO][5624] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e" Aug 13 07:16:12.519350 containerd[1473]: time="2025-08-13T07:16:12.519314705Z" level=info msg="TearDown network for sandbox \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\" successfully" Aug 13 07:16:12.521174 systemd[1]: Started cri-containerd-8a55aa47c916bf35c31f885abdf40333dc5b9e3bd5185ac0a036ecc44ae1b1fb.scope - libcontainer container 8a55aa47c916bf35c31f885abdf40333dc5b9e3bd5185ac0a036ecc44ae1b1fb. Aug 13 07:16:12.524267 containerd[1473]: time="2025-08-13T07:16:12.524225272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:16:12.524267 containerd[1473]: time="2025-08-13T07:16:12.524302832Z" level=info msg="RemovePodSandbox \"354d3e4924c7d6df4a89b27334e647a6586f05633df0f2adb1ae20655e3bfe5e\" returns successfully" Aug 13 07:16:12.525345 containerd[1473]: time="2025-08-13T07:16:12.525021714Z" level=info msg="StopPodSandbox for \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\"" Aug 13 07:16:12.578506 containerd[1473]: time="2025-08-13T07:16:12.578460865Z" level=info msg="StartContainer for \"8a55aa47c916bf35c31f885abdf40333dc5b9e3bd5185ac0a036ecc44ae1b1fb\" returns successfully" Aug 13 07:16:12.616728 containerd[1473]: 2025-08-13 07:16:12.570 [WARNING][5675] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2f61af08-7750-47f4-b608-c1bb42e1730d", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76", Pod:"coredns-668d6bf9bc-kkjwm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58cadadb49f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:16:12.616728 containerd[1473]: 2025-08-13 07:16:12.571 [INFO][5675] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Aug 13 07:16:12.616728 containerd[1473]: 2025-08-13 07:16:12.571 [INFO][5675] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" iface="eth0" netns="" Aug 13 07:16:12.616728 containerd[1473]: 2025-08-13 07:16:12.571 [INFO][5675] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Aug 13 07:16:12.616728 containerd[1473]: 2025-08-13 07:16:12.571 [INFO][5675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Aug 13 07:16:12.616728 containerd[1473]: 2025-08-13 07:16:12.596 [INFO][5690] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" HandleID="k8s-pod-network.31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Workload="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:16:12.616728 containerd[1473]: 2025-08-13 07:16:12.596 [INFO][5690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:12.616728 containerd[1473]: 2025-08-13 07:16:12.596 [INFO][5690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:16:12.616728 containerd[1473]: 2025-08-13 07:16:12.603 [WARNING][5690] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" HandleID="k8s-pod-network.31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Workload="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:16:12.616728 containerd[1473]: 2025-08-13 07:16:12.603 [INFO][5690] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" HandleID="k8s-pod-network.31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Workload="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:16:12.616728 containerd[1473]: 2025-08-13 07:16:12.604 [INFO][5690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:12.616728 containerd[1473]: 2025-08-13 07:16:12.609 [INFO][5675] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Aug 13 07:16:12.617309 containerd[1473]: time="2025-08-13T07:16:12.616749163Z" level=info msg="TearDown network for sandbox \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\" successfully" Aug 13 07:16:12.617309 containerd[1473]: time="2025-08-13T07:16:12.616776356Z" level=info msg="StopPodSandbox for \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\" returns successfully" Aug 13 07:16:12.617459 containerd[1473]: time="2025-08-13T07:16:12.617409723Z" level=info msg="RemovePodSandbox for \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\"" Aug 13 07:16:12.617491 containerd[1473]: time="2025-08-13T07:16:12.617458697Z" level=info msg="Forcibly stopping sandbox \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\"" Aug 13 07:16:12.687600 containerd[1473]: 2025-08-13 07:16:12.650 [WARNING][5716] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2f61af08-7750-47f4-b608-c1bb42e1730d", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd211d12464cb0a79d9fa2dceb2f71ede9b9ce092ea53a82d6fac290c68d5e76", Pod:"coredns-668d6bf9bc-kkjwm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58cadadb49f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:16:12.687600 containerd[1473]: 2025-08-13 07:16:12.651 [INFO][5716] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Aug 13 07:16:12.687600 containerd[1473]: 2025-08-13 07:16:12.652 [INFO][5716] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" iface="eth0" netns="" Aug 13 07:16:12.687600 containerd[1473]: 2025-08-13 07:16:12.652 [INFO][5716] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Aug 13 07:16:12.687600 containerd[1473]: 2025-08-13 07:16:12.652 [INFO][5716] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Aug 13 07:16:12.687600 containerd[1473]: 2025-08-13 07:16:12.673 [INFO][5725] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" HandleID="k8s-pod-network.31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Workload="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:16:12.687600 containerd[1473]: 2025-08-13 07:16:12.673 [INFO][5725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:12.687600 containerd[1473]: 2025-08-13 07:16:12.674 [INFO][5725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:16:12.687600 containerd[1473]: 2025-08-13 07:16:12.680 [WARNING][5725] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" HandleID="k8s-pod-network.31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Workload="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:16:12.687600 containerd[1473]: 2025-08-13 07:16:12.680 [INFO][5725] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" HandleID="k8s-pod-network.31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Workload="localhost-k8s-coredns--668d6bf9bc--kkjwm-eth0" Aug 13 07:16:12.687600 containerd[1473]: 2025-08-13 07:16:12.681 [INFO][5725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:12.687600 containerd[1473]: 2025-08-13 07:16:12.684 [INFO][5716] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4" Aug 13 07:16:12.688075 containerd[1473]: time="2025-08-13T07:16:12.687633740Z" level=info msg="TearDown network for sandbox \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\" successfully" Aug 13 07:16:12.692175 containerd[1473]: time="2025-08-13T07:16:12.692117100Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:16:12.692345 containerd[1473]: time="2025-08-13T07:16:12.692276899Z" level=info msg="RemovePodSandbox \"31cfadcfd51decf5c424fb72c350b901bdd40d3b22380dfa93d70d32195ab2a4\" returns successfully" Aug 13 07:16:12.692793 containerd[1473]: time="2025-08-13T07:16:12.692766196Z" level=info msg="StopPodSandbox for \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\"" Aug 13 07:16:12.763401 containerd[1473]: 2025-08-13 07:16:12.727 [WARNING][5743] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k8p5g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ff96eac2-a650-4688-baf4-c624d8dfca9d", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a", Pod:"csi-node-driver-k8p5g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6fcb0921607", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:16:12.763401 containerd[1473]: 2025-08-13 07:16:12.727 [INFO][5743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Aug 13 07:16:12.763401 containerd[1473]: 2025-08-13 07:16:12.727 [INFO][5743] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" iface="eth0" netns="" Aug 13 07:16:12.763401 containerd[1473]: 2025-08-13 07:16:12.727 [INFO][5743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Aug 13 07:16:12.763401 containerd[1473]: 2025-08-13 07:16:12.727 [INFO][5743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Aug 13 07:16:12.763401 containerd[1473]: 2025-08-13 07:16:12.749 [INFO][5751] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" HandleID="k8s-pod-network.4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Workload="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:16:12.763401 containerd[1473]: 2025-08-13 07:16:12.750 [INFO][5751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:12.763401 containerd[1473]: 2025-08-13 07:16:12.750 [INFO][5751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:16:12.763401 containerd[1473]: 2025-08-13 07:16:12.755 [WARNING][5751] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" HandleID="k8s-pod-network.4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Workload="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:16:12.763401 containerd[1473]: 2025-08-13 07:16:12.755 [INFO][5751] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" HandleID="k8s-pod-network.4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Workload="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:16:12.763401 containerd[1473]: 2025-08-13 07:16:12.756 [INFO][5751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:12.763401 containerd[1473]: 2025-08-13 07:16:12.760 [INFO][5743] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Aug 13 07:16:12.764001 containerd[1473]: time="2025-08-13T07:16:12.763478640Z" level=info msg="TearDown network for sandbox \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\" successfully" Aug 13 07:16:12.764001 containerd[1473]: time="2025-08-13T07:16:12.763507997Z" level=info msg="StopPodSandbox for \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\" returns successfully" Aug 13 07:16:12.764118 containerd[1473]: time="2025-08-13T07:16:12.764091436Z" level=info msg="RemovePodSandbox for \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\"" Aug 13 07:16:12.764150 containerd[1473]: time="2025-08-13T07:16:12.764125102Z" level=info msg="Forcibly stopping sandbox \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\"" Aug 13 07:16:12.844610 containerd[1473]: 2025-08-13 07:16:12.804 [WARNING][5768] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k8p5g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ff96eac2-a650-4688-baf4-c624d8dfca9d", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a", Pod:"csi-node-driver-k8p5g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6fcb0921607", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:16:12.844610 containerd[1473]: 2025-08-13 07:16:12.805 [INFO][5768] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Aug 13 07:16:12.844610 containerd[1473]: 2025-08-13 07:16:12.805 [INFO][5768] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" iface="eth0" netns="" Aug 13 07:16:12.844610 containerd[1473]: 2025-08-13 07:16:12.805 [INFO][5768] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Aug 13 07:16:12.844610 containerd[1473]: 2025-08-13 07:16:12.805 [INFO][5768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Aug 13 07:16:12.844610 containerd[1473]: 2025-08-13 07:16:12.830 [INFO][5777] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" HandleID="k8s-pod-network.4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Workload="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:16:12.844610 containerd[1473]: 2025-08-13 07:16:12.830 [INFO][5777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:12.844610 containerd[1473]: 2025-08-13 07:16:12.830 [INFO][5777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:16:12.844610 containerd[1473]: 2025-08-13 07:16:12.837 [WARNING][5777] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" HandleID="k8s-pod-network.4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Workload="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:16:12.844610 containerd[1473]: 2025-08-13 07:16:12.837 [INFO][5777] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" HandleID="k8s-pod-network.4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Workload="localhost-k8s-csi--node--driver--k8p5g-eth0" Aug 13 07:16:12.844610 containerd[1473]: 2025-08-13 07:16:12.838 [INFO][5777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:12.844610 containerd[1473]: 2025-08-13 07:16:12.841 [INFO][5768] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57" Aug 13 07:16:12.844610 containerd[1473]: time="2025-08-13T07:16:12.844553075Z" level=info msg="TearDown network for sandbox \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\" successfully" Aug 13 07:16:12.849790 containerd[1473]: time="2025-08-13T07:16:12.849751198Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:16:12.849900 containerd[1473]: time="2025-08-13T07:16:12.849820082Z" level=info msg="RemovePodSandbox \"4239e285303916236b5a77cd4f40bc9fd293625a5c18a8aef21c2854e0f18b57\" returns successfully" Aug 13 07:16:12.850336 containerd[1473]: time="2025-08-13T07:16:12.850310011Z" level=info msg="StopPodSandbox for \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\"" Aug 13 07:16:12.928003 containerd[1473]: 2025-08-13 07:16:12.889 [WARNING][5795] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0", GenerateName:"calico-apiserver-7bc76445cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6371703e-a0f7-43f3-a612-88598f32a9f9", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc76445cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d", Pod:"calico-apiserver-7bc76445cf-h7vjm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d94a92aeb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:16:12.928003 containerd[1473]: 2025-08-13 07:16:12.890 [INFO][5795] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Aug 13 07:16:12.928003 containerd[1473]: 2025-08-13 07:16:12.890 [INFO][5795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" iface="eth0" netns="" Aug 13 07:16:12.928003 containerd[1473]: 2025-08-13 07:16:12.890 [INFO][5795] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Aug 13 07:16:12.928003 containerd[1473]: 2025-08-13 07:16:12.890 [INFO][5795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Aug 13 07:16:12.928003 containerd[1473]: 2025-08-13 07:16:12.914 [INFO][5804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" HandleID="k8s-pod-network.6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Workload="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:16:12.928003 containerd[1473]: 2025-08-13 07:16:12.914 [INFO][5804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:12.928003 containerd[1473]: 2025-08-13 07:16:12.914 [INFO][5804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:16:12.928003 containerd[1473]: 2025-08-13 07:16:12.919 [WARNING][5804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" HandleID="k8s-pod-network.6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Workload="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:16:12.928003 containerd[1473]: 2025-08-13 07:16:12.919 [INFO][5804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" HandleID="k8s-pod-network.6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Workload="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:16:12.928003 containerd[1473]: 2025-08-13 07:16:12.921 [INFO][5804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:12.928003 containerd[1473]: 2025-08-13 07:16:12.924 [INFO][5795] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Aug 13 07:16:12.928422 containerd[1473]: time="2025-08-13T07:16:12.928055981Z" level=info msg="TearDown network for sandbox \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\" successfully" Aug 13 07:16:12.928422 containerd[1473]: time="2025-08-13T07:16:12.928087902Z" level=info msg="StopPodSandbox for \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\" returns successfully" Aug 13 07:16:12.929011 containerd[1473]: time="2025-08-13T07:16:12.928634341Z" level=info msg="RemovePodSandbox for \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\"" Aug 13 07:16:12.929011 containerd[1473]: time="2025-08-13T07:16:12.928665110Z" level=info msg="Forcibly stopping sandbox \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\"" Aug 13 07:16:13.013896 containerd[1473]: 2025-08-13 07:16:12.977 [WARNING][5822] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0", GenerateName:"calico-apiserver-7bc76445cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6371703e-a0f7-43f3-a612-88598f32a9f9", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 15, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bc76445cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba64923fa59bf5c18a7461554d85044a5c4bf5a9fa169cafaec7a1b85d13b22d", Pod:"calico-apiserver-7bc76445cf-h7vjm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d94a92aeb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:16:13.013896 containerd[1473]: 2025-08-13 07:16:12.977 [INFO][5822] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Aug 13 07:16:13.013896 containerd[1473]: 2025-08-13 07:16:12.977 [INFO][5822] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" iface="eth0" netns="" Aug 13 07:16:13.013896 containerd[1473]: 2025-08-13 07:16:12.977 [INFO][5822] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Aug 13 07:16:13.013896 containerd[1473]: 2025-08-13 07:16:12.977 [INFO][5822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Aug 13 07:16:13.013896 containerd[1473]: 2025-08-13 07:16:13.000 [INFO][5831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" HandleID="k8s-pod-network.6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Workload="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:16:13.013896 containerd[1473]: 2025-08-13 07:16:13.000 [INFO][5831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:16:13.013896 containerd[1473]: 2025-08-13 07:16:13.000 [INFO][5831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:16:13.013896 containerd[1473]: 2025-08-13 07:16:13.006 [WARNING][5831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" HandleID="k8s-pod-network.6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Workload="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:16:13.013896 containerd[1473]: 2025-08-13 07:16:13.006 [INFO][5831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" HandleID="k8s-pod-network.6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Workload="localhost-k8s-calico--apiserver--7bc76445cf--h7vjm-eth0" Aug 13 07:16:13.013896 containerd[1473]: 2025-08-13 07:16:13.007 [INFO][5831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:16:13.013896 containerd[1473]: 2025-08-13 07:16:13.010 [INFO][5822] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9" Aug 13 07:16:13.014326 containerd[1473]: time="2025-08-13T07:16:13.013945075Z" level=info msg="TearDown network for sandbox \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\" successfully" Aug 13 07:16:13.018370 containerd[1473]: time="2025-08-13T07:16:13.018335919Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:16:13.018442 containerd[1473]: time="2025-08-13T07:16:13.018393460Z" level=info msg="RemovePodSandbox \"6ebd36d0006d21c22851c24d2ea136662309751edd000883f7c548bba2c94bc9\" returns successfully" Aug 13 07:16:13.052512 kubelet[2504]: I0813 07:16:13.052059 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-ww6f8" podStartSLOduration=29.763624835999998 podStartE2EDuration="45.052022701s" podCreationTimestamp="2025-08-13 07:15:28 +0000 UTC" firstStartedPulling="2025-08-13 07:15:57.039384682 +0000 UTC m=+46.433476124" lastFinishedPulling="2025-08-13 07:16:12.327782537 +0000 UTC m=+61.721873989" observedRunningTime="2025-08-13 07:16:13.047489352 +0000 UTC m=+62.441580804" watchObservedRunningTime="2025-08-13 07:16:13.052022701 +0000 UTC m=+62.446114153" Aug 13 07:16:13.715373 systemd[1]: Started sshd@13-10.0.0.120:22-10.0.0.1:45944.service - OpenSSH per-connection server daemon (10.0.0.1:45944). Aug 13 07:16:13.774538 sshd[5863]: Accepted publickey for core from 10.0.0.1 port 45944 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:13.776765 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:13.781708 systemd-logind[1449]: New session 14 of user core. Aug 13 07:16:13.798119 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:16:14.072276 systemd[1]: run-containerd-runc-k8s.io-8a55aa47c916bf35c31f885abdf40333dc5b9e3bd5185ac0a036ecc44ae1b1fb-runc.8JSfVF.mount: Deactivated successfully. Aug 13 07:16:14.209941 sshd[5863]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:14.214949 systemd[1]: sshd@13-10.0.0.120:22-10.0.0.1:45944.service: Deactivated successfully. Aug 13 07:16:14.217345 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:16:14.218229 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:16:14.219396 systemd-logind[1449]: Removed session 14. 
Aug 13 07:16:15.905538 containerd[1473]: time="2025-08-13T07:16:15.905488674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:15.906509 containerd[1473]: time="2025-08-13T07:16:15.906461613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:16:15.907739 containerd[1473]: time="2025-08-13T07:16:15.907692460Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:15.909812 containerd[1473]: time="2025-08-13T07:16:15.909771996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:16:15.910418 containerd[1473]: time="2025-08-13T07:16:15.910384438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.581083832s" Aug 13 07:16:15.910418 containerd[1473]: time="2025-08-13T07:16:15.910410679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:16:15.917044 containerd[1473]: time="2025-08-13T07:16:15.916998642Z" level=info msg="CreateContainer within sandbox \"994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:16:15.936525 containerd[1473]: time="2025-08-13T07:16:15.936481917Z" level=info msg="CreateContainer within sandbox \"994674e4866866499f74ff8c7dc8a69e8270e2d5d86d28a23b4bdee2b7c2667a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5f563be9eb9c6d9d6a26c4c6bf7e8d45204baa2d03d56f41a19184bfe5b60765\"" Aug 13 07:16:15.938747 containerd[1473]: time="2025-08-13T07:16:15.937165768Z" level=info msg="StartContainer for \"5f563be9eb9c6d9d6a26c4c6bf7e8d45204baa2d03d56f41a19184bfe5b60765\"" Aug 13 07:16:15.988010 systemd[1]: Started cri-containerd-5f563be9eb9c6d9d6a26c4c6bf7e8d45204baa2d03d56f41a19184bfe5b60765.scope - libcontainer container 5f563be9eb9c6d9d6a26c4c6bf7e8d45204baa2d03d56f41a19184bfe5b60765. 
Aug 13 07:16:16.022752 containerd[1473]: time="2025-08-13T07:16:16.022689904Z" level=info msg="StartContainer for \"5f563be9eb9c6d9d6a26c4c6bf7e8d45204baa2d03d56f41a19184bfe5b60765\" returns successfully" Aug 13 07:16:16.805695 kubelet[2504]: I0813 07:16:16.805632 2504 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 07:16:16.805695 kubelet[2504]: I0813 07:16:16.805704 2504 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 07:16:19.226217 systemd[1]: Started sshd@14-10.0.0.120:22-10.0.0.1:56370.service - OpenSSH per-connection server daemon (10.0.0.1:56370). Aug 13 07:16:19.271524 sshd[5993]: Accepted publickey for core from 10.0.0.1 port 56370 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:19.273220 sshd[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:19.277591 systemd-logind[1449]: New session 15 of user core. Aug 13 07:16:19.291019 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:16:19.430057 sshd[5993]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:19.433960 systemd[1]: sshd@14-10.0.0.120:22-10.0.0.1:56370.service: Deactivated successfully. Aug 13 07:16:19.435968 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:16:19.436668 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Aug 13 07:16:19.437646 systemd-logind[1449]: Removed session 15. Aug 13 07:16:24.456547 systemd[1]: Started sshd@15-10.0.0.120:22-10.0.0.1:56382.service - OpenSSH per-connection server daemon (10.0.0.1:56382). Aug 13 07:16:24.492251 sshd[6008]: Accepted publickey for core from 10.0.0.1 port 56382 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:24.494241 sshd[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:24.499059 systemd-logind[1449]: New session 16 of user core. Aug 13 07:16:24.510000 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:16:24.630660 sshd[6008]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:24.635530 systemd[1]: sshd@15-10.0.0.120:22-10.0.0.1:56382.service: Deactivated successfully. Aug 13 07:16:24.638191 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:16:24.638980 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Aug 13 07:16:24.640053 systemd-logind[1449]: Removed session 16. Aug 13 07:16:29.652250 systemd[1]: Started sshd@16-10.0.0.120:22-10.0.0.1:56680.service - OpenSSH per-connection server daemon (10.0.0.1:56680). Aug 13 07:16:29.711222 sshd[6044]: Accepted publickey for core from 10.0.0.1 port 56680 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:29.713219 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:29.717699 systemd-logind[1449]: New session 17 of user core. Aug 13 07:16:29.724064 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 07:16:29.856350 sshd[6044]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:29.860648 systemd[1]: sshd@16-10.0.0.120:22-10.0.0.1:56680.service: Deactivated successfully. Aug 13 07:16:29.862656 systemd[1]: session-17.scope: Deactivated successfully. 
Aug 13 07:16:29.863453 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Aug 13 07:16:29.864592 systemd-logind[1449]: Removed session 17. Aug 13 07:16:32.697799 kubelet[2504]: E0813 07:16:32.697757 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:34.873125 systemd[1]: Started sshd@17-10.0.0.120:22-10.0.0.1:56684.service - OpenSSH per-connection server daemon (10.0.0.1:56684). Aug 13 07:16:34.928341 sshd[6064]: Accepted publickey for core from 10.0.0.1 port 56684 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:34.930220 sshd[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:34.934301 systemd-logind[1449]: New session 18 of user core. Aug 13 07:16:34.948017 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:16:35.175303 sshd[6064]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:35.188819 systemd[1]: sshd@17-10.0.0.120:22-10.0.0.1:56684.service: Deactivated successfully. Aug 13 07:16:35.190711 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:16:35.192192 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:16:35.201148 systemd[1]: Started sshd@18-10.0.0.120:22-10.0.0.1:56696.service - OpenSSH per-connection server daemon (10.0.0.1:56696). Aug 13 07:16:35.202027 systemd-logind[1449]: Removed session 18. Aug 13 07:16:35.233743 sshd[6078]: Accepted publickey for core from 10.0.0.1 port 56696 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:35.235506 sshd[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:35.240329 systemd-logind[1449]: New session 19 of user core. Aug 13 07:16:35.251025 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 07:16:36.160057 sshd[6078]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:36.169234 systemd[1]: sshd@18-10.0.0.120:22-10.0.0.1:56696.service: Deactivated successfully. Aug 13 07:16:36.171375 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 07:16:36.173284 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Aug 13 07:16:36.179150 systemd[1]: Started sshd@19-10.0.0.120:22-10.0.0.1:56704.service - OpenSSH per-connection server daemon (10.0.0.1:56704). Aug 13 07:16:36.180387 systemd-logind[1449]: Removed session 19. Aug 13 07:16:36.238791 sshd[6091]: Accepted publickey for core from 10.0.0.1 port 56704 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:36.240645 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:36.245565 systemd-logind[1449]: New session 20 of user core. Aug 13 07:16:36.255078 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 07:16:36.854430 sshd[6091]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:36.865283 systemd[1]: sshd@19-10.0.0.120:22-10.0.0.1:56704.service: Deactivated successfully. Aug 13 07:16:36.867377 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:16:36.868186 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:16:36.875310 systemd[1]: Started sshd@20-10.0.0.120:22-10.0.0.1:56716.service - OpenSSH per-connection server daemon (10.0.0.1:56716). 
Aug 13 07:16:36.877229 systemd-logind[1449]: Removed session 20. Aug 13 07:16:36.922691 sshd[6113]: Accepted publickey for core from 10.0.0.1 port 56716 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:36.924537 sshd[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:36.929905 systemd-logind[1449]: New session 21 of user core. Aug 13 07:16:36.941168 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 07:16:37.474537 sshd[6113]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:37.492070 systemd[1]: sshd@20-10.0.0.120:22-10.0.0.1:56716.service: Deactivated successfully. Aug 13 07:16:37.494178 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:16:37.495941 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:16:37.506359 systemd[1]: Started sshd@21-10.0.0.120:22-10.0.0.1:56732.service - OpenSSH per-connection server daemon (10.0.0.1:56732). Aug 13 07:16:37.507550 systemd-logind[1449]: Removed session 21. Aug 13 07:16:37.550607 sshd[6148]: Accepted publickey for core from 10.0.0.1 port 56732 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:37.552426 sshd[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:37.556489 systemd-logind[1449]: New session 22 of user core. Aug 13 07:16:37.567048 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:16:37.687224 sshd[6148]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:37.691194 systemd[1]: sshd@21-10.0.0.120:22-10.0.0.1:56732.service: Deactivated successfully. Aug 13 07:16:37.693496 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:16:37.694208 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:16:37.695027 systemd-logind[1449]: Removed session 22. Aug 13 07:16:41.698472 kubelet[2504]: E0813 07:16:41.698417 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:42.703698 systemd[1]: Started sshd@22-10.0.0.120:22-10.0.0.1:39390.service - OpenSSH per-connection server daemon (10.0.0.1:39390). Aug 13 07:16:42.761596 sshd[6164]: Accepted publickey for core from 10.0.0.1 port 39390 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:42.763502 sshd[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:42.767545 systemd-logind[1449]: New session 23 of user core. Aug 13 07:16:42.774037 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:16:42.938808 sshd[6164]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:42.942610 systemd[1]: sshd@22-10.0.0.120:22-10.0.0.1:39390.service: Deactivated successfully. Aug 13 07:16:42.944559 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:16:42.945197 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:16:42.946136 systemd-logind[1449]: Removed session 23. 
Aug 13 07:16:44.156192 kubelet[2504]: I0813 07:16:44.156090 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k8p5g" podStartSLOduration=54.60825948 podStartE2EDuration="1m15.156054844s" podCreationTimestamp="2025-08-13 07:15:29 +0000 UTC" firstStartedPulling="2025-08-13 07:15:55.363362919 +0000 UTC m=+44.757454371" lastFinishedPulling="2025-08-13 07:16:15.911158283 +0000 UTC m=+65.305249735" observedRunningTime="2025-08-13 07:16:16.075517441 +0000 UTC m=+65.469608893" watchObservedRunningTime="2025-08-13 07:16:44.156054844 +0000 UTC m=+93.550146296" Aug 13 07:16:46.698774 kubelet[2504]: E0813 07:16:46.698718 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:47.951116 systemd[1]: Started sshd@23-10.0.0.120:22-10.0.0.1:39406.service - OpenSSH per-connection server daemon (10.0.0.1:39406). Aug 13 07:16:47.992314 sshd[6202]: Accepted publickey for core from 10.0.0.1 port 39406 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:47.994310 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:47.998745 systemd-logind[1449]: New session 24 of user core. Aug 13 07:16:48.015167 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 07:16:48.162464 sshd[6202]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:48.167037 systemd[1]: sshd@23-10.0.0.120:22-10.0.0.1:39406.service: Deactivated successfully. Aug 13 07:16:48.169587 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 07:16:48.170382 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit. Aug 13 07:16:48.171498 systemd-logind[1449]: Removed session 24. Aug 13 07:16:48.698512 kubelet[2504]: E0813 07:16:48.698465 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:50.698609 kubelet[2504]: E0813 07:16:50.698560 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:16:53.177470 systemd[1]: Started sshd@24-10.0.0.120:22-10.0.0.1:37452.service - OpenSSH per-connection server daemon (10.0.0.1:37452). Aug 13 07:16:53.220086 sshd[6241]: Accepted publickey for core from 10.0.0.1 port 37452 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:53.221910 sshd[6241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:53.225984 systemd-logind[1449]: New session 25 of user core. Aug 13 07:16:53.236993 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 07:16:53.580972 sshd[6241]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:53.585186 systemd[1]: sshd@24-10.0.0.120:22-10.0.0.1:37452.service: Deactivated successfully. Aug 13 07:16:53.587306 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 07:16:53.588006 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit. Aug 13 07:16:53.589127 systemd-logind[1449]: Removed session 25. Aug 13 07:16:58.592049 systemd[1]: Started sshd@25-10.0.0.120:22-10.0.0.1:42176.service - OpenSSH per-connection server daemon (10.0.0.1:42176). 
Aug 13 07:16:58.633798 sshd[6255]: Accepted publickey for core from 10.0.0.1 port 42176 ssh2: RSA SHA256:CMfoLhPNmBOOiskIU7y9xMX9q9TU1tPTT3rYgwbB2Y8 Aug 13 07:16:58.635630 sshd[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:16:58.639998 systemd-logind[1449]: New session 26 of user core. Aug 13 07:16:58.646039 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 07:16:58.946841 sshd[6255]: pam_unix(sshd:session): session closed for user core Aug 13 07:16:58.952002 systemd[1]: sshd@25-10.0.0.120:22-10.0.0.1:42176.service: Deactivated successfully. Aug 13 07:16:58.954062 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 07:16:58.954797 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit. Aug 13 07:16:58.956338 systemd-logind[1449]: Removed session 26. Aug 13 07:17:00.702162 kubelet[2504]: E0813 07:17:00.702119 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
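The recurring kubelet dns.go events in this stretch of the log report that the host's resolv.conf lists more nameservers than the kubelet will pass through to pods; the applied line keeps only the first three (1.1.1.1 1.0.0.1 8.8.8.8) and omits the rest. A minimal sketch of that truncation, assuming a three-entry limit (the behavior the message implies, not the kubelet's actual code) and a hypothetical fourth server:

package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the limit the log implies: only the first three
// resolv.conf nameservers are applied, any further entries are omitted.
const maxNameservers = 3

func applyNameserverLimit(servers []string) (applied []string, exceeded bool) {
	if len(servers) > maxNameservers {
		return servers[:maxNameservers], true
	}
	return servers, false
}

func main() {
	// Hypothetical host nameserver list with one entry over the limit.
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, exceeded := applyNameserverLimit(servers)
	if exceeded {
		fmt.Printf("Nameserver limits exceeded, applied line: %s\n",
			strings.Join(applied, " "))
	}
}

If the host list were trimmed to three entries, these warnings would presumably stop recurring.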